WO2007057808A2 - Blur estimation - Google Patents

Blur estimation

Info

Publication number
WO2007057808A2
WO2007057808A2 (PCT/IB2006/054081)
Authority
WO
WIPO (PCT)
Prior art keywords
blur
image
filter
input image
computing
Prior art date
Application number
PCT/IB2006/054081
Other languages
French (fr)
Other versions
WO2007057808A3 (en)
Inventor
Gerard De Haan
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Publication of WO2007057808A2 publication Critical patent/WO2007057808A2/en
Publication of WO2007057808A3 publication Critical patent/WO2007057808A3/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection

Definitions

  • the invention relates to a blur estimator for estimating a blur measure for a region of an input image.
  • the invention further relates to an image processing apparatus, comprising: means for receiving an input signal representing an input image; such a blur estimator; and a processing unit for computing an output image on basis of the input image and the blur measure.
  • the invention further relates to a method of estimating a blur measure for a region of an input image.
  • the invention further relates to a computer program product to be loaded by a computer arrangement, comprising instructions to estimate a blur measure for a region of an input image, the computer arrangement comprising processing means and a memory
  • Focal blur, or out-of-focus blur, in images and videos occurs when objects in the scene are placed out of the focal range of the camera. See e.g. the article "A New Sense for Depth of Field", by A. Pentland, in IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 9, No. 4, pp. 523-531, July 1987.
  • Out-of-focus blur is sometimes used by photographers to draw the viewers' attention to objects in focus, but in many cases it is desirable to remove the blur and restore the original scene faithfully. As objects at varying distances are differently blurred in the image, accurate blur estimation is essential.
  • the estimation of focal blur has also become an important topic in many other applications, such as restoring the blurred background part of images and videos, digital auto-focusing systems and 2-D to 3-D image conversion.
  • Focal blur is usually modeled as Gaussian blurring. See e.g. the article "A Simple Real Time Range Camera", by A. Pentland, T. Darrell, M. Turk and W. Huang, in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 256- 261, 1989. Therefore, the problem of blur estimation is to identify the Gaussian point spread function (PSF). Many techniques have been proposed to address the problem. Early blur estimation methods examine the regular pattern of zeros from the blurred image in the frequency domain. These methods can only identify a certain class of PSFs, but not truncated Gaussian PSFs, which do not have zeros in the frequency domain.
  • PSF: Gaussian point spread function
  • in Elder's method, the blurred edge signal is convolved with a filter that is the second derivative of a Gaussian function, and the response has a positive and a negative peak. The distance between these peak positions is used to determine the blur radius.
  • a problem of Elder's method is that the blur estimation is easily deteriorated by the response of neighboring edges.
  • the blur estimator comprises: a first filter for computing a first filtered image on basis of the input image; a second filter for computing a second filtered image on basis of the input image, the second filter being different from the first filter; a ratio computing unit for computing ratio values between pixel values of the first filtered image and respective pixel values of the second filtered image; and a maximum operator for outputting the blur measure by determining a maximum ratio value of the set of ratio values corresponding to the region of the input image.
  • the insight on which the current invention is based starts from the observation that the Fourier analysis of a perfect edge shows harmonics in a fixed ratio. Focal blur will change the ratio of the harmonics by weakening the higher frequencies.
  • an estimate of the focal blur (the blur measure) can be obtained.
  • the first filtered image corresponds to the energy in a first frequency band, i.e. a first part of the spatial spectrum of the region of the input image
  • the second filtered image corresponds to the energy in a second frequency band, i.e. the second part of the spatial spectrum of the region of the input image. That means that the first filter and the second filter are arranged to extract mutually different portions of the spatial spectrum of the input image. If the ratio between the energy in the first frequency band and the energy in a second frequency band is relatively high, then the value of the blur measure is also relatively high.
  • the blur measure is based on the ratio between two different re-blurred versions of the input image. This blur measure is independent from the edge amplitude or position of the edge. The maximum of the ratio appears at the edge position.
  • the blur estimator according to the invention does not require special means for the detection of the edge position and the angle in order to estimate the blur measure on basis of the detected edge.
  • the blur estimator according to the invention demonstrates robust estimation even in areas with multiple neighboring edges. Experimental results on synthetic and natural images of the blur estimator according to invention show favorable results over known blur estimators.
  • the first and second filters are respective high-pass filters or bandpass filters.
  • one or both high-pass filters are based on low-pass filters.
  • the first filter comprises a low-pass filter for computing an intermediate image on basis of the input image and subtraction means for computing the first filtered image by subtracting the input image and the intermediate image from each other.
  • an embodiment of the blur estimator according to the invention further comprises a further low-pass filter for computing a low-pass filtered input image, whereby the output of the further low-pass filter is connected to the input of the first filter.
  • the first filter and the second filter are adjustable.
  • the blur estimator can be configured to the expected ranges of blur measures.
  • the image is rescaled into a number of images with respective resolutions and the number of images are analyzed by the blur estimator according to the invention. That means that a so-called multi-scale approach is taken to determine the blur measure of the input image.
  • the expected range of blur measures typically depends on the resolution of the input image and on the type of input image: natural or synthetic, inside or outside, dark or bright, etcetera.
  • the second filtered image is computed by an additional filtering of the first filtered image.
  • the output of the first filter is connected to the input of the second filter. That means that there is a cascade of filters for computing the first and the second filtered image. At the end of the cascade the second filtered image is provided, while halfway along the cascade the first filtered image is provided.
  • An embodiment of the image processing apparatus comprises a de-blur unit, which is controlled by the blur measure.
  • a de-blur unit is arranged to enhance the image. The aim is to obtain an output image that represents the captured scene with no, or a relatively low, amount of blur.
  • An example of a de-blur unit is an edge enhancement unit. The amount of de-blurring and/or edge enhancement is based on the computed blur measure.
  • the processing unit comprises: a depth estimator for assigning depth values to respective regions of the input image on basis of respective blur measures; a rendering unit for computing an output image on basis of the depth values and the input image.
  • the method comprises: computing a first filtered image on basis of the input image; computing a second filtered image on basis of the input image, the second filter being different from the first filter; computing ratio values between pixel values of the first filtered image and respective pixel values of the second filtered image; and outputting the blur measure by determining a maximum ratio value of the set of ratio values corresponding to the region of the input image.
  • the computer program product after being loaded, provides said processing means with the capability to carry out: computing a first filtered image on basis of the input image; computing a second filtered image on basis of the input image, the second filter being different from the first filter; computing ratio values between pixel values of the first filtered image and respective pixel values of the second filtered image; and outputting the blur measure by determining a maximum ratio value of the set of ratio values corresponding to the region of the input image.
  • Fig. 1 shows a step edge f(x), a blurred edge b(x) based on the step edge f(x), and two re-blurred versions b_a(x), b_b(x) of the blurred edge b(x);
  • Fig. 2 shows the ratio between the re-blurred versions b_a(x), b_b(x) along the edge;
  • Fig. 3 schematically shows a first embodiment of the blur estimator according to the invention;
  • Fig. 4 schematically shows a second embodiment of the blur estimator according to the invention, comprising a low-pass filter at the input;
  • Fig. 5 schematically shows a third embodiment of the blur estimator according to the invention, whereby the first filter comprises a low-pass filter and subtraction means;
  • Fig. 6 schematically shows a fourth embodiment of the blur estimator according to the invention, comprising a cascade of low-pass filters.
  • Fig. 7 schematically shows an image processing apparatus according to the invention.
  • the method of estimating a blur measure according to the invention will be explained on basis of an ideal edge and a focal blur kernel.
  • the edge f(x) is modeled as a step function with amplitude A and offset B.
  • the edge f(x) shown in Fig. 1 is specified in Equation 1.
  • the x-axis of Fig. 1 corresponds to position x.
  • the y-axis corresponds to the function value of the edge f(x) .
  • Fig. 1 also shows a blurred edge b(x) , which is based on the edge f(x) .
  • the focal blur kernel being used to achieve the blurred edge b(x) on basis of the edge f(x) is unknown.
  • Fig. 1 also shows a first re-blurred version of the blurred edge b(x), which is called b_a(x), and a second re-blurred version of the blurred edge b(x), which is called b_b(x).
  • the respective blur kernels with blur radii σ_a and σ_b for the computation of the re-blurred versions b_a(x) and b_b(x) are known a priori.
  • f(x) = A + B for x ≥ 0, and f(x) = B for x < 0, x ∈ ℝ    (1)
  • the focal blur kernel is modeled by the normalized Gaussian function as specified in Equation 2: g(x, σ) = exp(−x² / (2σ²)) / (√(2π) σ)    (2)
  • σ is the unknown blur radius, i.e. the blur measure to be estimated.
  • the ratio r(x) of the differences between the original blurred edge and the two re-blurred versions is calculated for every position x:
  • when σ_a, σ_b ≫ σ, the approximations specified in Equation 9 can be used to simplify Equation 8 to obtain Equation 10.
  • Equation 11 shows that the blur radius σ, which is related to the blur measure, can be calculated from the maximum ratio r_max and the re-blur radii σ_a and σ_b, independent of the edge amplitude A and offset B.
  • the identification of the local maximum ratio r_max can not only estimate the blur radius but also locate the edge position, which implies that the blur estimation does not require any edge-detection pre-processing. This helps to keep the complexity relatively low.
  • it is also possible to use 2-D Gaussian blur kernels for the re-blurring.
  • since any direction of a 2-D Gaussian function is a 1-D Gaussian function, the proposed blur estimation is also applicable.
  • using 2-D Gaussian kernels for the estimation avoids detecting the angle of the edge or gradient, which is required in alternative methods according to the prior art.
  • An alternative ratio between re-blurred versions results in an alternative value for the blur measure.
  • An alternative ratio is specified in Equation 12
  • the blur radius which is related to the local blur measure can be estimated by determining a ratio between two filtered versions of the input signal, whereby two different filters each directed to another part of the spatial spectrum of the input image are used.
  • Fig. 3 schematically shows a first embodiment of the blur estimator 300 according to the invention.
  • the blur estimator 300 is arranged to compute blur measures Bm for respective regions of an input image Inp.
  • the regions may comprise a relatively low number of pixels compared to the size of the input image Inp.
  • a region may extend to the total size of the input image Inp.
  • the regions may overlap, but typically the regions are adjacent to each other.
  • the regions correspond to blocks of pixels.
  • an input image Inp is provided, i.e. a signal that represents the pixel values, is provided.
  • the blur estimator 300 is arranged to provide at its output connector 312 a number of blur measures Bm for respective regions of the input image Inp.
  • the blur estimator 300 comprises: a first filter 302 for computing a first filtered image Fil1 on basis of the input image Inp; a second filter 304 for computing a second filtered image Fil2 on basis of the input image Inp, the second filter 304 being different from the first filter 302; a ratio computing unit 306 for computing ratio values between pixel values of the first filtered image Fil1 and respective pixel values of the second filtered image Fil2; and a maximum operator 308 for outputting the blur measures Bm by determining the maximum ratio values of the sets of ratio values corresponding to the respective regions of the input image Inp.
  • the working of the blur estimator 300 according to the invention is as follows.
  • the input image Inp is filtered with a first high-pass filter to achieve a first filtered image Fil1, which represents a first portion of the spatial spectrum of the input image Inp.
  • the filtering is performed by means of a convolution with a two-dimensional kernel.
  • the filtering is performed by subsequent convolutions with multiple one-dimensional kernels.
  • the input image Inp is also filtered with a second high-pass filter to achieve a second filtered image Fil2, which represents a second portion of the spatial spectrum of the input image Inp.
  • convolution with one or more kernels is the technique for filtering. Instead of high pass filtering, band pass filtering may be applied.
  • the pixel values of the first filtered image and the second filtered image correspond to luminance and/or color.
  • the values of the corresponding pixels of the first filtered image Fil1 and the second filtered image Fil2 are compared. That means that the ratios between pixel values of the first filtered image Fil1 and of the second filtered image Fil2 having mutually equal spatial coordinates are computed by the ratio computing unit 306.
  • the output of the ratio-computing unit 306 is a two-dimensional matrix of ratio values. Typically, the two-dimensional matrix of ratio values has a size, which is equal to the size of the input image Inp.
  • the output of the ratio computing unit 306, i.e. the two-dimensional matrix of ratio values is provided to the maximum operator 308, which is arranged to determine the maximum values of predetermined portions of the two-dimensional matrix of ratio values which correspond to respective predetermined regions of the input image Inp.
  • the different maximum values are related to the blur measures for the respective regions of the input image Inp.
  • the different maximum values may optionally be multiplied by a value, typically a constant value, to get the actual blur measures.
  • the first filter 302, the second filter 304, the ratio computing unit 306 and the maximum operator 308 may be implemented using one processor. Normally, these functions are performed under control of a software program product. During execution, normally the software program product is loaded into a memory, like a RAM, and executed from there. The program may be loaded from a background memory, like a ROM, hard disk, or magnetic and/or optical storage, or may be loaded via a network like the Internet. Optionally an application specific integrated circuit provides the disclosed functionality.
  • the coefficients of the first filter 302 and/or of a second filter 304 are preferably not constant. That means that the first filter 302 and/or the second filter 304 are preferably adjustable.
  • the first filter 302 is arranged to extract a first portion of the spatial spectrum of the input image Inp and the second filter 304 a second portion.
  • a tuning of the coefficients of the first filter 302 and/or the second filter 304 is needed. That means that a matching of the first filter 302 and/or the second filter 304 to the input image Inp is performed.
  • an analysis of the input image has to be performed. Typically the analysis comprises computations of parameters like contrast and noise in the input image Inp.
  • Fig. 4 schematically shows a second embodiment of the blur estimator 400 according to the invention, comprising a low-pass filter at the input.
  • the second embodiment of the blur estimator 400 is substantially equal to the first embodiment of the blur estimator 300 as described in connection with Fig. 3.
  • the difference is that the input image Inp is first low-pass filtered by a low-pass filter 402, in order to reduce the effect of noise in the estimation of the blur measure.
  • Fig. 5 schematically shows a third embodiment of the blur estimator 500 according to the invention.
  • the third embodiment of the blur estimator 500 is substantially equal to the second embodiment of the blur estimator 400 as described in connection with Fig. 4.
  • the first filter 302 comprises a first low-pass filter 502 for computing a first intermediate image on basis of the input image Inp and first subtraction means 504 for computing the first filtered image Fil1 by subtracting the low-pass filtered input image Fnlnp and the first intermediate image from each other.
  • the output of the first subtraction means 504 is the output of the first filter 302, which is provided to the ratio-computing unit 306.
  • the second filter 304 comprises a second low-pass filter 506 for computing a second intermediate image on basis of the input image Inp and second subtraction means 508 for computing the second filtered image Fil2 by subtracting the low-pass filtered input image Fnlnp and the second intermediate image from each other.
  • the output of the second subtraction means 508 is the output of the second filter 304, which is provided to the ratio - computing unit 306.
  • Fig. 6 schematically shows a fourth embodiment of the blur estimator 600 according to the invention, comprising a cascade of low-pass filters 502 and 506.
  • the fourth embodiment of the blur estimator 600 is substantially equal to the third embodiment of the blur estimator 500 as described in connection with Fig. 5.
  • the output of the first low-pass filter 502 is connected to the input of the second low-pass filter 506. That means that a first portion of the spatial spectrum of the input image Inp is extracted from the input image Inp by means of the first low pass filter 502, while the second portion of the spatial spectrum of the input image Inp is extracted from the input image Inp by means of both the first low-pass filter 502 and the second low-pass filter 506.
  • Fig. 7 schematically shows an image processing apparatus 700 according to the invention, comprising: receiving means 702 for receiving a signal representing input images; a blur estimator 708 as described in connection with any of the Figs. 3-6; an image processing unit 704 for computing the output images on basis of input images and the blur measures as provided by the blur estimator 708; and a display device 706 for displaying the output images of the image processing unit 704.
  • the signal may be a broadcast signal received via an antenna or cable but may also be a signal from a storage device like a VCR (Video Cassette Recorder) or Digital Versatile Disk (DVD).
  • the signal is provided at the input connector 710.
  • the image processing apparatus 700 might e.g. be a TV.
  • the image processing apparatus 700 does not comprise the optional display device but provides the output images to an apparatus that does comprise a display device 706.
  • the image processing apparatus 700 might be e.g. a set top box, a satellite-tuner, a VCR player, a DVD player or recorder.
  • the image processing apparatus 700 comprises storage means, like a hard-disk or means for storage on removable media, e.g. optical disks.
  • the image processing apparatus 700 might also be a system being applied by a film-studio or broadcaster.
  • the image processing unit 704 is an edge enhancement unit that is controlled by the blur measures as provided by the blur estimator 708.
  • the various values of estimated blur are used to enhance a first group of regions of the input image Inp more and to enhance a second group of regions of the input image Inp less.
  • the image processing unit 704 comprises: a depth estimator for assigning depth values to respective regions of the input images on basis of respective blur measures; and a rendering unit for computing the output images on basis of the depth values and the input images.
  • the amount of blur is related to the distance from a particular object in a scene, to be imaged, to the focal plane.
  • the rendering unit is arranged to generate a sequence of multi-view images on basis of a sequence of input images.
  • the rendering unit is provided with a stream of video images at the input and provides two correlated streams of video images at the output. These two correlated streams of video images are to be provided to a multi-view display device 706 which is arranged to visualize a first series of views on basis of the first one of the correlated streams of video images and to visualize a second series of views on basis of the second one of the correlated streams of video images. If a user, i.e. viewer, observes the first series of views by his left eye and the second series of views by his right eye he notices a 3-D impression.
  • the first one of the correlated streams of video images corresponds to the sequence of video images as received and that the second one of the correlated streams of video images is rendered on basis of the sequence of video images as received.
  • the rendering comprises shifting pixel values of the input images whereby the amount of shift is related to the amount of estimated depth.
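The shifting step above can be sketched in a few lines. This is a deliberately crude illustration, not the patent's rendering unit: the depth-to-disparity gain of 1.0, the "nearer pixels win" overwrite order, and the hole filling with the original pixel values are all assumptions made for the sketch.

```python
import numpy as np

def render_second_view(img, depth, gain=1.0):
    """Shift each pixel horizontally by a disparity proportional to its
    depth value; positions left unfilled (holes) keep the original pixel."""
    h, w = img.shape
    out = np.zeros_like(img)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for xx in range(w):
            # Disparity grows with estimated depth (itself derived from blur).
            nx = xx + int(round(gain * depth[y, xx]))
            if 0 <= nx < w:
                out[y, nx] = img[y, xx]
                filled[y, nx] = True
    out[~filled] = img[~filled]  # simplistic hole filling
    return out

# A constant depth of 5 shifts the whole image 5 pixels to the right.
img = np.arange(64.0).reshape(4, 16)
depth = np.full((4, 16), 5.0)
view = render_second_view(img, depth)
```

Feeding the original stream and such a rendered stream to a multi-view display gives the 3-D impression described above; a production renderer would additionally handle occlusions and sub-pixel shifts.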

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

A blur estimator (300) for estimating a blur measure (Bm) for a region of an input image (Inp) is disclosed. The blur estimator (300) comprises: a first filter (302) for computing a first filtered image (Fil1) on basis of the input image (Inp); a second filter (304) for computing a second filtered image (Fil2) on basis of the input image (Inp), the second filter (304) being different from the first filter (302); a ratio computing unit (306) for computing ratio values between pixel values of the first filtered image (Fil1) and respective pixel values of the second filtered image (Fil2); and a maximum operator (308) for outputting the blur measure (Bm) by determining a maximum ratio value of the set of ratio values corresponding to the region of the input image (Inp).

Description

Blur estimation
The invention relates to a blur estimator for estimating a blur measure for a region of an input image.
The invention further relates to an image processing apparatus, comprising: means for receiving an input signal representing an input image; such a blur estimator; and a processing unit for computing an output image on basis of the input image and the blur measure.
The invention further relates to a method of estimating a blur measure for a region of an input image.
The invention further relates to a computer program product to be loaded by a computer arrangement, comprising instructions to estimate a blur measure for a region of an input image, the computer arrangement comprising processing means and a memory
Focal blur, or out-of-focus blur, in images and videos occurs when objects in the scene are placed out of the focal range of the camera. See e.g. the article "A New Sense for Depth of Field", by A. Pentland, in IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 9, No. 4, pp. 523-531, July 1987. Out-of-focus blur is sometimes used by photographers to draw the viewers' attention to objects in focus, but in many cases it is desirable to remove the blur and restore the original scene faithfully. As objects at varying distances are differently blurred in the image, accurate blur estimation is essential.
The estimation of focal blur has also become an important topic in many other applications, such as restoring the blurred background part of images and videos, digital auto- focusing systems and 2-D to 3-D image conversion.
Focal blur is usually modeled as Gaussian blurring. See e.g. the article "A Simple Real Time Range Camera", by A. Pentland, T. Darrell, M. Turk and W. Huang, in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 256- 261, 1989. Therefore, the problem of blur estimation is to identify the Gaussian point spread function (PSF). Many techniques have been proposed to address the problem. Early blur estimation methods examine the regular pattern of zeros from the blurred image in the frequency domain. These methods can only identify a certain class of PSFs, but not truncated Gaussian PSFs, which do not have zeros in the frequency domain.
More recently parametric methods based on autoregressive moving-average (ARMA) models have been proposed by R. L. Lagendijk, A. M. Tekalp, and J. Biemond, in the article "Maximum Likelihood Image and Blur Identification: A Unifying Approach", in Optical Engineering, vol. 28 (5), pp. 422-435, 1990 and by G. Pavlovic, A. M. Tekalp, in the article "Maximum likelihood parametric blur identification based on a continuous spatial domain model", in IEEE Transactions on Image Processing, vol. 1, Issue 4, pp. 496-504, Oct. 1992. The blur estimation becomes the identification of the ARMA model and a maximum likelihood (ML) estimation algorithm is employed for the estimation. However these methods are computationally intensive and lack a direct solution for the blur estimation.
Recently, a blur estimation method from the work of Elder (See J. H. Elder and S. W. Zucker, in the article "Local Scale Control for Edge Detection and Blur Estimation", in IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 7, July 1998) received considerable attention (See M. Basu, "Gaussian-based edge-detection methods - a survey", IEEE Transactions on Systems, Man and Cybernetics, Part C, Vol. 32, Issue 3, pp. 252-260, Aug. 2002, and see also K. Suzuki, I. Horiba and N. Sugie, "Neural edge enhancer for supervised edge enhancement from noisy images", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 25, Issue 12, pp. 1582-1596, Dec. 2003).
In Elder's method, the blurred edge signal is convolved with a filter that is the second derivative of a Gaussian function and the response has a positive and a negative peak. The distance between these peak positions is used to determine the blur radius. A problem of Elder's method is that the blur estimation is easily deteriorated by the response of neighboring edges.
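Elder's peak-distance idea can be sketched numerically. For a Gaussian-blurred step edge, the response to a second-derivative-of-Gaussian filter of scale s is the first derivative of a Gaussian of scale sqrt(σ² + s²), whose extrema lie d = 2·sqrt(σ² + s²) apart; the grid size, the filter scale s = 2 and the test blur radius below are illustrative choices, not values from the cited article.

```python
import numpy as np

def gauss(x, s):
    """Sampled, normalized Gaussian kernel."""
    g = np.exp(-x**2 / (2.0 * s**2))
    return g / g.sum()

def gauss_2nd_deriv(x, s):
    """Second derivative of a Gaussian; normalization does not affect
    the peak positions, so it is omitted."""
    return (x**2 - s**2) / s**4 * np.exp(-x**2 / (2.0 * s**2))

def elder_blur_radius(blurred_edge, centre, s=2.0, search=25):
    """Blur radius from the distance between the positive and negative
    peaks of the second-derivative-of-Gaussian response (Elder's idea):
    for a Gaussian-blurred step, d = 2*sqrt(sigma^2 + s^2)."""
    k = np.arange(-int(4 * s), int(4 * s) + 1)
    resp = np.convolve(blurred_edge, gauss_2nd_deriv(k, s), mode="same")
    win = resp[centre - search:centre + search]
    d = abs(int(np.argmax(win)) - int(np.argmin(win)))
    return np.sqrt(max((d / 2.0) ** 2 - s**2, 0.0))

# A step edge blurred with a known radius; the estimate should be close.
x = np.arange(-100, 101)
sigma_true = 5.0
step = (x >= 0).astype(float)
blurred = np.convolve(step, gauss(np.arange(-40, 41), sigma_true), mode="same")
sigma_est = elder_blur_radius(blurred, centre=100)
```

The search window confines the peaks to the edge under test; a second edge inside that window would shift the peaks, which is exactly the weakness of the method noted above.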
It is an object of the invention to provide a blur estimator of the kind described in the opening paragraph, which can be applied even if the different edges in the image are near to each other.
This object of the invention is achieved in that the blur estimator comprises: a first filter for computing a first filtered image on basis of the input image; a second filter for computing a second filtered image on basis of the input image, the second filter being different from the first filter; a ratio computing unit for computing ratio values between pixel values of the first filtered image and respective pixel values of the second filtered image; and a maximum operator for outputting the blur measure by determining a maximum ratio value of the set of ratio values corresponding to the region of the input image.
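The four claimed components can be sketched numerically on a 2-D image. The Gaussian scales (σ_a = 4, σ_b = 8), the block size of 16 and the zero-padded convolutions are illustrative assumptions; the two filters are built here as "input minus Gaussian low-pass", one per frequency band.

```python
import numpy as np

def gauss_kernel(s):
    k = np.arange(-int(4 * s), int(4 * s) + 1)
    g = np.exp(-k**2 / (2.0 * s**2))
    return g / g.sum()

def blur2d(img, s):
    """Separable 2-D Gaussian low-pass (zero-padded at the borders)."""
    g = gauss_kernel(s)
    tmp = np.apply_along_axis(np.convolve, 1, img, g, mode="same")
    return np.apply_along_axis(np.convolve, 0, tmp, g, mode="same")

def block_blur_measures(inp, s_a=4.0, s_b=8.0, block=16):
    """First filter, second filter, ratio computing unit and maximum
    operator: Fil1 = Inp - LP_a(Inp), Fil2 = Inp - LP_b(Inp), pixelwise
    ratio, then one maximum per block of the input image."""
    fil1 = np.abs(inp - blur2d(inp, s_a))
    fil2 = np.abs(inp - blur2d(inp, s_b)) + 1e-12
    ratio = fil1 / fil2
    h, w = inp.shape
    bm = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            bm[i, j] = ratio[i * block:(i + 1) * block,
                             j * block:(j + 1) * block].max()
    return bm

# A vertical edge blurred with sigma = 2, replicated over all rows.
x = np.arange(-64, 64)
row = np.convolve((x >= 0).astype(float), gauss_kernel(2.0), mode="same")
img = 80.0 * np.tile(row, (128, 1)) + 40.0
bm = block_blur_measures(img)
```

Blocks containing the edge get a clearly larger maximum ratio than flat interior blocks. Blocks touching the image border are unreliable in this sketch because zero padding creates artificial edges; a real implementation would pad by mirroring.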
The insight on which the current invention is based, starts from the observation that the Fourier analysis of a perfect edge shows harmonics in a fixed ratio. Focal blur will change the ratio of the harmonics by weakening the higher frequencies. By measuring the ratio of the energy in different frequency bands, an estimate of the focal blur (the blur measure) can be obtained. The first filtered image corresponds to the energy in a first frequency band, i.e. a first part of the spatial spectrum of the region of the input image, while the second filtered image corresponds to the energy in a second frequency band, i.e. the second part of the spatial spectrum of the region of the input image. That means that the first filter and the second filter are arranged to extract mutually different portions of the spatial spectrum of the input image. If the ratio between the energy in the first frequency band and the energy in a second frequency band is relatively high, then the value of the blur measure is also relatively high.
By operating in the frequency domain instead of in the spatial domain of the input image, degradation of the measurement caused by interference is smaller.
The blur measure is based on the ratio between two different re-blurred versions of the input image. This blur measure is independent from the edge amplitude or position of the edge. The maximum of the ratio appears at the edge position. Hence the blur estimator according to the invention does not require special means for the detection of the edge position and the angle in order to estimate the blur measure on basis of the detected edge. Furthermore, the blur estimator according to the invention demonstrates robust estimation even in areas with multiple neighboring edges. Experimental results on synthetic and natural images of the blur estimator according to invention show favorable results over known blur estimators.
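The scheme can be illustrated on a synthetic 1-D edge. The closed-form inversion below follows from the approximation that the re-blur radii are much larger than the unknown radius (the regime of Equations 9-11); it is one plausible reading of the scheme, not necessarily the patent's exact Equation 11, and all numeric radii are illustrative.

```python
import numpy as np

def gauss_kernel(s):
    k = np.arange(-int(4 * s), int(4 * s) + 1)
    g = np.exp(-k**2 / (2.0 * s**2))
    return g / g.sum()

def reblur(signal, s):
    return np.convolve(signal, gauss_kernel(s), mode="same")

def ratio_blur_estimate(b, s_a=10.0, s_b=20.0, search=15):
    """Blur radius of a 1-D blurred edge b from the maximum ratio of its
    differences with two re-blurred versions b_a and b_b."""
    num = np.abs(b - reblur(b, s_a))
    den = np.abs(b - reblur(b, s_b)) + 1e-12
    c = len(b) // 2
    r_max = (num[c - search:c + search] / den[c - search:c + search]).max()
    # Near the edge the peak ratio is approximately
    # r = (1/sigma - 1/s_a) / (1/sigma - 1/s_b); solving for sigma:
    return s_a * s_b * (1.0 - r_max) / (s_b - r_max * s_a)

# Edge with amplitude A = 80 and offset B = 40, blurred with sigma = 2.
x = np.arange(-150, 151)
sigma_true = 2.0
edge = 80.0 * (x >= 0) + 40.0
b = reblur(edge, sigma_true)
sigma_est = ratio_blur_estimate(b)
```

Note that the offset B cancels in the differences and the amplitude A cancels in the ratio, illustrating the independence from edge amplitude and position claimed above; the maximum of the ratio indeed occurs at the edge position.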
Preferably, the first and second filters are respective high-pass filters or bandpass filters. Optionally, one or both high-pass filters are based on low-pass filters. Hence, in an embodiment of the blur estimator according to the invention, the first filter comprises a low-pass filter for computing an intermediate image on basis of the input image and subtraction means for computing the first filtered image by subtracting the input image and the intermediate image from each other.
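A minimal sketch of this low-pass-plus-subtraction construction of a high-pass filter (illustrative Python; the kernel size and the test signal are arbitrary):

```python
import math

def gauss_kernel(sigma):
    radius = max(1, int(3 * sigma))
    k = [math.exp(-n * n / (2 * sigma * sigma)) for n in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def low_pass(signal, sigma):
    k = gauss_kernel(sigma)
    r, n = len(k) // 2, len(signal)
    return [sum(signal[min(max(x + i - r, 0), n - 1)] * k[i]
                for i in range(len(k))) for x in range(n)]

def high_pass(signal, sigma):
    # the "first filter": a low-pass filter plus subtraction means
    lp = low_pass(signal, sigma)
    return [a - b for a, b in zip(signal, lp)]

signal = [0.0] * 10 + [1.0] * 10
hp = high_pass(signal, 1.5)
# large response only near the edge, ~0 in the flat regions
print([round(v, 3) for v in hp])
```

Subtracting the low-pass output from the input leaves only the frequencies that the low-pass filter removed, which is exactly the high-pass behavior required of the first filter.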
In order to reduce the effect of noise in the input image, low-pass filtering at the input may be performed as pre-processing. Hence, an embodiment of the blur estimator according to the invention further comprises a further low-pass filter for computing a low-pass filtered input image, whereby the output of the further low-pass filter is connected to the input of the first filter.
In an embodiment of the blur estimator according to the invention, the first filter and the second filter are adjustable. By adjusting the coefficients of the filters, the blur estimator can be configured for the expected range of blur measures. Alternatively, the image is rescaled into a number of images with respective resolutions and these images are analyzed by the blur estimator according to the invention. That means that a so-called multi-scale approach is taken to determine the blur measure of the input image. The expected range of blur measures typically depends on the resolution of the input image and on the type of input image: natural or synthetic, inside or outside, dark or bright, etcetera.
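The multi-scale alternative can be sketched as follows (illustrative only; the averaging-based rescaler and the chosen scale factors are assumptions, not prescribed by the invention):

```python
def downscale(signal, factor):
    # average groups of `factor` samples: a simple rescale to lower resolution
    return [sum(signal[i:i + factor]) / factor
            for i in range(0, len(signal) - factor + 1, factor)]

# analyze the same input at several resolutions; a fixed-kernel estimator
# run at scale s effectively measures blur of roughly s times the radius
# it would measure at full resolution
scales = [1, 2, 4]
signal = [0.0] * 16 + [1.0] * 16
pyramid = {s: downscale(signal, s) for s in scales}
for s in scales:
    print("scale", s, "samples", len(pyramid[s]))
```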
Preferably, the second filtered image is computed by an additional filtering of the first filtered image. To achieve that, in an embodiment of the blur estimator according to the invention, the output of the first filter is connected to the input of the second filter. That means that there is a cascade of filters for computing the first and the second filtered image. At the end of the cascade of filters the second filtered image is provided, while halfway along the cascade the first filtered image is provided.
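Assuming Gaussian low-pass filters, a cascade reproduces a wider blur by applying only the missing blur: blurring with radius σa and then with √(σb² − σa²) is equivalent to blurring once with σb (a consequence of Equation 5). An illustrative check in Python (all values arbitrary):

```python
import math

def gauss_kernel(sigma, r):
    k = [math.exp(-n * n / (2 * sigma * sigma)) for n in range(-r, r + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(sig, k):
    r, n = len(k) // 2, len(sig)
    return [sum(sig[min(max(x + i - r, 0), n - 1)] * k[i]
                for i in range(len(k))) for x in range(n)]

SIGMA_A, SIGMA_B = 2.0, 3.0
SIGMA_C = math.sqrt(SIGMA_B**2 - SIGMA_A**2)  # blur the cascade still has to add

sig = [0.0] * 24 + [1.0] * 24
direct = convolve(sig, gauss_kernel(SIGMA_B, 12))
cascade = convolve(convolve(sig, gauss_kernel(SIGMA_A, 12)),
                   gauss_kernel(SIGMA_C, 12))
err = max(abs(a - c) for a, c in zip(direct, cascade))
print(err)  # small: the cascade matches the single wider low-pass
```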
It is advantageous to apply the blur estimator according to the invention in an image processing apparatus. An embodiment of the image processing apparatus according to the invention comprises a de-blur unit, which is controlled by the blur measure. A de-blur unit is arranged to enhance the image. The aim is to obtain an output image representing the captured scene without blur, or at least an output image with a relatively low amount of blur, i.e. relatively close to that ideal. An example of a de-blur unit is an edge enhancement unit. The amount of de-blurring and/or edge enhancement is based on the computed blur measure.
In another embodiment of the image processing apparatus according to the invention, the processing unit comprises: a depth estimator for assigning depth values to respective regions of the input image on basis of respective blur measures; and a rendering unit for computing an output image on basis of the depth values and the input image.
It is another object of the invention to provide a method of estimating a blur measure of the kind described in the opening paragraph, which can be applied even if the edges in the image are near to each other.
This object of the invention is achieved in that the method comprises: computing a first filtered image on basis of the input image; computing a second filtered image on basis of the input image, the second filter being different from the first filter; computing ratio values between pixel values of the first filtered image and respective pixel values of the second filtered image; and outputting the blur measure by determining a maximum ratio value of the set of ratio values corresponding to the region of the input image.
It is another object of the invention to provide a computer program product of the kind described in the opening paragraph, which can be applied even if the edges in the image are near to each other.
This object of the invention is achieved in that the computer program product, after being loaded, provides said processing means with the capability to carry out: computing a first filtered image on basis of the input image; computing a second filtered image on basis of the input image, the second filter being different from the first filter; computing ratio values between pixel values of the first filtered image and respective pixel values of the second filtered image; and outputting the blur measure by determining a maximum ratio value of the set of ratio values corresponding to the region of the input image.
Modifications of the blur estimator and variations thereof may correspond to modifications and variations thereof of the image processing apparatus, the method and the computer program product, being described.
These and other aspects of the blur estimator, of the image processing apparatus, the method and the computer program product, according to the invention will become apparent from and will be elucidated with respect to the implementations and embodiments described hereinafter and with reference to the accompanying drawings, wherein:
Fig. 1 shows a step edge f(x), a blurred edge b(x) based on the step edge f(x), and two re-blurred versions ba(x), bb(x) of the blurred edge b(x);
Fig. 2 shows the ratio between the re-blurred versions ba(x), bb(x) along the edge;
Fig. 3 schematically shows a first embodiment of the blur estimator according to the invention;
Fig. 4 schematically shows a second embodiment of the blur estimator according to the invention, comprising a low-pass filter at the input;
Fig. 5 schematically shows a third embodiment of the blur estimator according to the invention, whereby the first filter comprises a low-pass filter and subtraction means;
Fig. 6 schematically shows a fourth embodiment of the blur estimator according to the invention, comprising a cascade of low-pass filters; and
Fig. 7 schematically shows an image processing apparatus according to the invention.
Same reference numerals are used to denote similar parts throughout the Figures.
The method of estimating a blur measure according to the invention will be explained on basis of an ideal edge and a focal blur kernel. The edge f(x) is modeled as a step function with amplitude A and offset B. For a discrete signal, the edge f(x) shown in Fig. 1 is specified in Equation 1. The x-axis of Fig. 1 corresponds to position x. The y-axis corresponds to the function value of the edge f(x). Fig. 1 also shows a blurred edge b(x), which is based on the edge f(x). The focal blur kernel used to obtain the blurred edge b(x) from the edge f(x) is unknown. Fig. 1 also shows a first re-blurred version of the blurred edge b(x), called ba(x), and a second re-blurred version, called bb(x). The respective blur kernels with blur radii σa and σb used to compute these re-blurred versions are known a priori.

f(x) = \begin{cases} A + B, & x \ge 0 \\ B, & x < 0 \end{cases}, \quad x \in \mathbb{Z} \qquad (1)
where x is the position. The focal blur kernel is modeled by the normalized Gaussian function as specified in Equation 2:

g(n, \sigma) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left(-\frac{n^2}{2\sigma^2}\right) \qquad (2)
where σ is the unknown blur radius, i.e. the blur measure to be estimated. For the normalized Gaussian function, we have:

\sum_{n=-\infty}^{\infty} g(n, \sigma) = 1 \qquad (3)
Then the blurred edge b(x) will be:

b(x) = \sum_{n \in \mathbb{Z}} f(x - n)\, g(n, \sigma) = \begin{cases} \dfrac{A}{2}\left(1 + \sum_{n=-x}^{x} g(n, \sigma)\right) + B, & x \ge 0 \\ \dfrac{A}{2}\left(1 - \sum_{n=x+1}^{-x-1} g(n, \sigma)\right) + B, & x < 0 \end{cases}, \quad x \in \mathbb{Z} \qquad (4)
As the convolution of two Gaussian functions with blur radii σ1 and σ2 is:

g(n, \sigma_1) * g(n, \sigma_2) = g\!\left(n, \sqrt{\sigma_1^2 + \sigma_2^2}\right) \qquad (5)
re-blurring the blurred edge using Gaussian blur kernels with blur radii σa and σb (σb > σa) results in two re-blurred versions ba(x) and bb(x):

b_a(x) = \begin{cases} \dfrac{A}{2}\left(1 + \sum_{n=-x}^{x} g\!\left(n, \sqrt{\sigma^2 + \sigma_a^2}\right)\right) + B, & x \ge 0 \\ \dfrac{A}{2}\left(1 - \sum_{n=x+1}^{-x-1} g\!\left(n, \sqrt{\sigma^2 + \sigma_a^2}\right)\right) + B, & x < 0 \end{cases}

b_b(x) = \begin{cases} \dfrac{A}{2}\left(1 + \sum_{n=-x}^{x} g\!\left(n, \sqrt{\sigma^2 + \sigma_b^2}\right)\right) + B, & x \ge 0 \\ \dfrac{A}{2}\left(1 - \sum_{n=x+1}^{-x-1} g\!\left(n, \sqrt{\sigma^2 + \sigma_b^2}\right)\right) + B, & x < 0 \end{cases} \qquad (6)
To make the blur estimation independent of the amplitude and offset of edges, the ratio r(x) between the difference of the original blurred edge and the first re-blurred version, and the difference of the two re-blurred versions, is calculated for every position x:

r(x) = \frac{b(x) - b_a(x)}{b_a(x) - b_b(x)} \qquad (7)
where, in practice, absolute differences may be used and a small fraction may be added to the denominator to prevent division by zero. The difference ratio peaks at the edge position x = -1, 0 as shown in Fig. 2, which results in:

r(x)_{max} = r(-1) = r(0) = \frac{\dfrac{1}{\sigma} - \dfrac{1}{\sqrt{\sigma^2 + \sigma_a^2}}}{\dfrac{1}{\sqrt{\sigma^2 + \sigma_a^2}} - \dfrac{1}{\sqrt{\sigma^2 + \sigma_b^2}}} \qquad (8)
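The peak property of Equations 7 and 8 can be reproduced numerically (an illustrative sketch; the edge model and radii are arbitrary, and absolute differences with a small epsilon are used as suggested above):

```python
import math

A, B = 2.0, 0.5
SIGMA = 1.0                   # "unknown" blur radius of the edge
SIGMA_A, SIGMA_B = 3.0, 5.0   # known re-blur radii (SIGMA_B > SIGMA_A)
R = 60                        # kernel support radius

def gauss(sigma):
    raw = [math.exp(-n * n / (2 * sigma * sigma)) for n in range(-R, R + 1)]
    s = sum(raw)
    return [v / s for v in raw]

def blurred_edge(sigma):
    # step edge of Equation 1 convolved with g(n, sigma)
    g = gauss(sigma)
    return {x: sum((A + B if x - n >= 0 else B) * g[n + R]
                   for n in range(-R, R + 1)) for x in range(-20, 21)}

b = blurred_edge(SIGMA)
ba = blurred_edge(math.sqrt(SIGMA**2 + SIGMA_A**2))  # Equation 6
bb = blurred_edge(math.sqrt(SIGMA**2 + SIGMA_B**2))

eps = 1e-12  # guards against division by zero in flat regions
r = {x: abs(b[x] - ba[x]) / (abs(ba[x] - bb[x]) + eps) for x in b}  # Eq. 7
x_peak = max(r, key=r.get)

sa = math.sqrt(SIGMA**2 + SIGMA_A**2)
sb = math.sqrt(SIGMA**2 + SIGMA_B**2)
expected = (1 / SIGMA - 1 / sa) / (1 / sa - 1 / sb)  # Equation 8

print(x_peak, round(r[x_peak], 4), round(expected, 4))
```

The measured peak sits at the edge position and agrees closely with the closed-form value of Equation 8.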
When σa, σb ≫ σ, the approximations specified in Equation 9 can be used to simplify Equation 8 into Equation 10:

\sqrt{\sigma^2 + \sigma_a^2} \approx \sigma_a, \qquad \sqrt{\sigma^2 + \sigma_b^2} \approx \sigma_b \qquad (9)

r(x)_{max} \approx \frac{\sigma_b (\sigma_a - \sigma)}{\sigma (\sigma_b - \sigma_a)} \qquad (10)
Rewriting Equation 10 yields:

\sigma = \frac{\sigma_a \sigma_b}{r(x)_{max} (\sigma_b - \sigma_a) + \sigma_b} \qquad (11)
Equation 11 shows that the blur radius σ, which is related to the blur measure, can be calculated from the maximum ratio r(x)_{max} and the re-blur radii σa and σb, independent of the edge amplitude A and offset B. The identification of the local maximum ratio r(x)_{max} not only estimates the blur radius but also locates the edge position, which implies that the blur estimation does not require any edge-detection pre-processing. This helps to keep the complexity relatively low. For blur estimation in 2-D images, it is preferred to apply 2-D Gaussian blur kernels for the re-blurring. As any cross-section of a 2-D Gaussian function is a 1-D Gaussian function, the proposed blur estimation remains applicable. Using 2-D Gaussian kernels for the estimation avoids detecting the angle of the edge or gradient, which is required in alternative methods according to the prior art.
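Equation 11 thus inverts the measured peak ratio into a blur-radius estimate; an illustrative numerical sketch (all values arbitrary, with σa, σb ≫ σ as required by Equation 9):

```python
import math

A, B = 1.0, 0.0
SIGMA = 1.0                   # blur radius to be estimated
SIGMA_A, SIGMA_B = 6.0, 9.0   # re-blur radii chosen >> SIGMA (Equation 9)
R = 80

def blurred_step(sigma):
    raw = [math.exp(-n * n / (2 * sigma * sigma)) for n in range(-R, R + 1)]
    s = sum(raw)
    g = [v / s for v in raw]
    return {x: sum((A + B if x - n >= 0 else B) * g[n + R]
                   for n in range(-R, R + 1)) for x in range(-30, 31)}

b = blurred_step(SIGMA)
ba = blurred_step(math.sqrt(SIGMA**2 + SIGMA_A**2))
bb = blurred_step(math.sqrt(SIGMA**2 + SIGMA_B**2))

# maximum of the difference ratio of Equation 7
r_max = max(abs(b[x] - ba[x]) / (abs(ba[x] - bb[x]) + 1e-12) for x in b)

# Equation 11: blur radius from the peak ratio and the two re-blur radii
sigma_est = SIGMA_A * SIGMA_B / (r_max * (SIGMA_B - SIGMA_A) + SIGMA_B)
print(round(sigma_est, 3))
```

The estimate is close to the true σ; the small residual error comes from the approximations of Equation 9.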
Alternative, i.e. non-Gaussian, blur kernels yield similar results.
An alternative ratio between the re-blurred versions results in an alternative value for the blur measure. In other words, there are alternatives to compute a ratio between a first part of the spatial spectrum and a second part of the spatial spectrum, whereby the ratio is independent of the amplitude A and offset B. An alternative ratio is specified in Equation 12:

r'(x) = \frac{b(x) - b_a(x)}{b(x) - b_b(x)} \qquad (12)
resulting in an alternative expression for the blur radius, as specified in Equation 13:

\sigma = \frac{\sigma_a \sigma_b \left(r'(x)_{max} - 1\right)}{r'(x)_{max}\, \sigma_a - \sigma_b} \qquad (13)

But, again, it appears that the blur radius, which is related to the local blur measure, can be estimated by determining a ratio between two filtered versions of the input signal, whereby two different filters, each directed to another part of the spatial spectrum of the input image, are used.
Fig. 3 schematically shows a first embodiment of the blur estimator 300 according to the invention. The blur estimator 300 is arranged to compute blur measures Bm for respective regions of an input image Inp. The regions may comprise a relatively low number of pixels compared to the size of the input image Inp. Alternatively, a region may extend to the total size of the input image Inp. The regions may overlap, but typically the regions are adjacent to each other. Preferably, the regions correspond to blocks of pixels.
At the input connector 310 of the blur estimator 300 an input image Inp, i.e. a signal that represents the pixel values, is provided. The blur estimator 300 is arranged to provide at its output connector 312 a number of blur measures Bm for respective regions of the input image Inp.
The blur estimator 300 comprises: a first filter 302 for computing a first filtered image Fill on basis of the input image Inp; a second filter 304 for computing a second filtered image Fil2 on basis of the input image Inp, the second filter 304 being different from the first filter 302; a ratio computing unit 306 for computing ratio values between pixel values of the first filtered image Fill and respective pixel values of the second filtered image Fil2; and a maximum operator 308 for outputting the blur measures Bm by determining the maximum ratio values of the sets of ratio values corresponding to the respective regions of the input image Inp.
The working of the blur estimator 300 according to the invention is as follows. The input image Inp is filtered with a first high-pass filter to achieve a first filtered image Fill, which represents a first portion of the spatial spectrum of the input image Inp. Typically, the filtering is performed by means of a convolution with a two-dimensional kernel. Alternatively, the filtering is performed by subsequent convolutions with multiple one-dimensional kernels. The input image Inp is also filtered with a second high-pass filter to achieve a second filtered image Fil2, which represents a second portion of the spatial spectrum of the input image Inp. Typically, convolution with one or more kernels is the technique for filtering. Instead of high-pass filtering, band-pass filtering may be applied. The pixel values of the first filtered image and the second filtered image correspond to luminance and/or color.
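The overall pipeline can be sketched in pure Python for a 2-D image (illustrative only; separable Gaussian filters realize the re-blurring, the filtered images are formed by subtraction as in the later embodiments, and all sizes and radii are arbitrary):

```python
import math

def gauss_kernel(sigma):
    r = max(1, int(3 * sigma))
    k = [math.exp(-n * n / (2 * sigma * sigma)) for n in range(-r, r + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur2d(img, sigma):
    # separable 2-D Gaussian convolution: filter rows first, then columns
    k = gauss_kernel(sigma)
    r = len(k) // 2
    h, w = len(img), len(img[0])
    clamp = lambda v, lim: min(max(v, 0), lim - 1)
    tmp = [[sum(img[y][clamp(x + i - r, w)] * k[i] for i in range(len(k)))
            for x in range(w)] for y in range(h)]
    return [[sum(tmp[clamp(y + i - r, h)][x] * k[i] for i in range(len(k)))
             for x in range(w)] for y in range(h)]

SIGMA_A, SIGMA_B = 3.0, 5.0  # the two re-blur radii (illustrative)

def blur_measure(img):
    ba = blur2d(img, SIGMA_A)  # first re-blurred version
    bb = blur2d(img, SIGMA_B)  # second re-blurred version
    h, w = len(img), len(img[0])
    # per-pixel ratio of the two filtered images: Fil1 = Inp - ba, Fil2 = ba - bb
    return max(abs(img[y][x] - ba[y][x]) / (abs(ba[y][x] - bb[y][x]) + 1e-12)
               for y in range(h) for x in range(w))

# a vertical step edge, and a focally blurred copy of it
sharp = [[0.0] * 16 + [1.0] * 16 for _ in range(32)]
soft = blur2d(sharp, 1.0)
print(blur_measure(sharp), blur_measure(soft))
```

A sharper image yields a larger maximum ratio; Equation 11 would convert that peak into a smaller blur radius.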
The values of the corresponding pixels of the first filtered image Fill and the second filtered image Fil2 are compared. That means that the ratio between pixel values of the first filtered image Fill and the second filtered image Fil2 having mutually equal spatial coordinates is computed by the ratio computing unit 306. The output of the ratio computing unit 306 is a two-dimensional matrix of ratio values. Typically, the two-dimensional matrix of ratio values has a size equal to the size of the input image Inp.
The output of the ratio computing unit 306, i.e. the two-dimensional matrix of ratio values, is provided to the maximum operator 308, which is arranged to determine the maximum values of predetermined portions of the two-dimensional matrix of ratio values, which correspond to respective predetermined regions of the input image Inp. The different maximum values are related to the blur measures for the respective regions of the input image Inp. The different maximum values may optionally be multiplied by a value, typically a constant, to get the actual blur measures.
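The region-wise reduction performed by the maximum operator can be sketched as follows (illustrative; the block size and matrix values are arbitrary):

```python
def region_maxima(ratio, block):
    # maximum operator: one blur measure per `block` x `block` region
    h, w = len(ratio), len(ratio[0])
    out = []
    for by in range(0, h, block):
        row = []
        for bx in range(0, w, block):
            row.append(max(ratio[y][x]
                           for y in range(by, min(by + block, h))
                           for x in range(bx, min(bx + block, w))))
        out.append(row)
    return out

# toy 4x4 ratio matrix, reduced over 2x2 regions
ratio = [[0.1, 0.2, 0.9, 0.1],
         [0.3, 0.4, 0.2, 0.2],
         [0.0, 0.1, 0.5, 0.6],
         [0.2, 0.1, 0.7, 0.8]]
print(region_maxima(ratio, 2))  # -> [[0.4, 0.9], [0.2, 0.8]]
```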
The first filter 302, the second filter 304, the ratio computing unit 306 and the maximum operator 308 may be implemented using one processor. Normally, these functions are performed under control of a software program product. During execution, normally the software program product is loaded into a memory, like a RAM, and executed from there. The program may be loaded from a background memory, like a ROM, hard disk, or magnetic and/or optical storage, or may be loaded via a network like the Internet. Optionally, an application-specific integrated circuit provides the disclosed functionality.
The coefficients of the first filter 302 and/or of the second filter 304 are preferably not constant. That means that the first filter 302 and/or the second filter 304 are preferably adjustable. The first filter 302 is arranged to extract a first portion of the spatial spectrum of the input image Inp and the second filter 304 a second portion. Depending on the actual characteristics of the input image Inp, a tuning of the coefficients of the first filter 302 and/or the second filter 304 is needed. That means that a matching of the first filter 302 and/or the second filter 304 to the input image Inp is performed. To determine the actual characteristics of the input image Inp, an analysis of the input image has to be performed. Typically, the analysis comprises computations of parameters like contrast and noise in the input image Inp.
Fig. 4 schematically shows a second embodiment of the blur estimator 400 according to the invention, comprising a low-pass filter at the input. The second embodiment of the blur estimator 400 is substantially equal to the first embodiment of the blur estimator 300 as described in connection with Fig. 3. The difference is that the input image Inp is first low-pass filtered by a low-pass filter 402, in order to reduce the effect of noise in the estimation of the blur measure.
Fig. 5 schematically shows a third embodiment of the blur estimator 500 according to the invention. The third embodiment of the blur estimator 500 is substantially equal to the second embodiment of the blur estimator 400 as described in connection with Fig. 4.
The first filter 302 comprises a first low-pass filter 502 for computing a first intermediate image on basis of the input image Inp and first subtraction means 504 for computing the first filtered image Fill by subtracting the low-pass filtered input image Fnlnp and the first intermediate image from each other. The output of the first subtraction means 504 is the output of the first filter 302, which is provided to the ratio-computing unit 306.
The second filter 304 comprises a second low-pass filter 506 for computing a second intermediate image on basis of the input image Inp and second subtraction means 508 for computing the second filtered image Fil2 by subtracting the low-pass filtered input image Fnlnp and the second intermediate image from each other. The output of the second subtraction means 508 is the output of the second filter 304, which is provided to the ratio computing unit 306.
In general, subtracting a low-pass filtered version of an image from the image itself results in a high-pass filtered image.
Fig. 6 schematically shows a fourth embodiment of the blur estimator 600 according to the invention, comprising a cascade of low-pass filters 502 and 506. The fourth embodiment of the blur estimator 600 is substantially equal to the third embodiment of the blur estimator 500 as described in connection with Fig. 5. In the fourth embodiment of the blur estimator 600, the output of the first low-pass filter 502 is connected to the input of the second low-pass filter 506. That means that a first portion of the spatial spectrum of the input image Inp is extracted from the input image Inp by means of the first low-pass filter 502, while the second portion of the spatial spectrum of the input image Inp is extracted from the input image Inp by means of both the first low-pass filter 502 and the second low-pass filter 506. It will be clear that, although the configuration of the various filters in the fourth embodiment of the blur estimator 600 is different from the configuration of the various filters in the third embodiment of the blur estimator 500, the operations are substantially mutually equal.

Fig. 7 schematically shows an image processing apparatus 700 according to the invention, comprising: receiving means 702 for receiving a signal representing input images; a blur estimator 708 as described in connection with any of the Figs. 3-6; an image processing unit 704 for computing the output images on basis of the input images and the blur measures as provided by the blur estimator 708; and a display device 706 for displaying the output images of the image processing unit 704.
The signal may be a broadcast signal received via an antenna or cable but may also be a signal from a storage device like a VCR (Video Cassette Recorder) or Digital Versatile Disk (DVD). The signal is provided at the input connector 710. The image processing apparatus 700 might e.g. be a TV. Alternatively the image processing apparatus 700 does not comprise the optional display device but provides the output images to an apparatus that does comprise a display device 706. Then the image processing apparatus 700 might be e.g. a set top box, a satellite-tuner, a VCR player, a DVD player or recorder. Optionally the image processing apparatus 700 comprises storage means, like a hard-disk or means for storage on removable media, e.g. optical disks. The image processing apparatus 700 might also be a system being applied by a film-studio or broadcaster.
In an embodiment of the image processing apparatus 700 according to the invention, the image processing unit 704 is an edge enhancement unit that is controlled by the blur measures as provided by the blur estimator 708. The various values of estimated blur are used to enhance a first group of regions of the input image Inp more and to enhance a second group of regions of the input image Inp less.
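A sketch of such blur-measure-controlled enhancement (the unsharp-masking control law and all values are assumptions for illustration, not the patent's prescribed de-blur unit):

```python
def enhance(row, measures, block, gain=0.5):
    # unsharp masking whose strength per region follows that region's
    # blur measure (a hypothetical control law, for illustration only)
    w = len(row)
    out = []
    for x in range(w):
        lp = (row[max(x - 1, 0)] + row[x] + row[min(x + 1, w - 1)]) / 3
        amount = gain * measures[x // block]
        out.append(row[x] + amount * (row[x] - lp))
    return out

row = [0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1]
measures = [2.0, 0.0]  # first region judged blurrier, second left untouched
print(enhance(row, measures, 6))
```

The first region's edges are boosted, while the second region passes through unchanged.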
In another embodiment of the image processing apparatus according to the invention, the image processing unit 704 comprises: a depth estimator for assigning depth values to respective regions of the input images on basis of respective blur measures; and a rendering unit for computing the output images on basis of the depth values and the input images.
As explained above, the amount of blur is related to the distance between a particular object in the scene being imaged and the focal plane. The larger that distance, the higher the amount of blur. That means that there is a direct relation between depth, i.e. a distance relative to a particular plane, and the amount of blur in the acquired image. By estimating the blur measure for a number of regions in the image corresponding to different objects in the scene, the corresponding depth values can be determined.
The rendering unit is arranged to generate a sequence of multi-view images on basis of a sequence of input images. The rendering unit is provided with a stream of video images at the input and provides two correlated streams of video images at the output. These two correlated streams of video images are to be provided to a multi-view display device 706 which is arranged to visualize a first series of views on basis of the first one of the correlated streams of video images and to visualize a second series of views on basis of the second one of the correlated streams of video images. If a user, i.e. viewer, observes the first series of views by his left eye and the second series of views by his right eye he notices a 3-D impression. It might be that the first one of the correlated streams of video images corresponds to the sequence of video images as received and that the second one of the correlated streams of video images is rendered on basis of the sequence of video images as received. Typically, the rendering comprises shifting pixel values of the input images whereby the amount of shift is related to the amount of estimated depth.
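The shift-based rendering step can be sketched for a single scan line (illustrative only; the depth-to-shift gain, the painting order and the hole-filling rule are assumptions):

```python
def render_second_view(row, depth_row, gain=0.5):
    # shift each pixel horizontally in proportion to its depth (parallax);
    # pixels are painted in order of increasing depth so larger-shift pixels
    # win occlusions, then remaining holes are filled from the left
    w = len(row)
    out = [None] * w
    for x in sorted(range(w), key=lambda i: depth_row[i]):
        nx = x + int(round(gain * depth_row[x]))
        if 0 <= nx < w:
            out[nx] = row[x]
    last = row[0]
    for x in range(w):
        if out[x] is None:
            out[x] = last
        else:
            last = out[x]
    return out

row = [10, 20, 30, 40, 50, 60]
depth = [0, 0, 4, 4, 0, 0]  # the middle object has the largest depth offset
print(render_second_view(row, depth))  # -> [10, 20, 20, 20, 30, 40]
```

The middle object is displaced to the right relative to the rest of the line, producing the parallax between the two correlated streams.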
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word 'comprising' does not exclude the presence of elements or steps not listed in a claim. The word 'a' or 'an' preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements and by means of a suitably programmed computer. In the claims enumerating several means, several of these means can be embodied by one and the same item of hardware or software. The usage of the words first, second and third, etcetera does not indicate any ordering. These words are to be interpreted as names.


CLAIMS:
1. A blur estimator (300) for estimating a blur measure (Bm) for a region of an input image (Inp), comprising: a first filter (302) for computing a first filtered image (Fill) on basis of the input image (Inp); a second filter (304) for computing a second filtered image (Fil2) on basis of the input image (Inp), the second filter (304) being different from the first filter (302); a ratio computing unit (306) for computing ratio values between pixel values of the first filtered image (Fill) and respective pixel values of the second filtered image (Fil2); and a maximum operator (308) for outputting the blur measure (Bm) by determining a maximum ratio value of the set of ratio values corresponding to the region of the input image (Inp).
2. A blur estimator as claimed in claim 1, wherein the first filter is a high-pass filter.
3. A blur estimator as claimed in any of the claims above, wherein the second filter is a further high-pass filter.
4. A blur estimator as claimed in claim 1, wherein the first filter comprises a low- pass filter for computing an intermediate image on basis of the input image and subtraction means for computing the first filtered image by subtracting the input image and the intermediate image from each other.
5. A blur estimator as claimed in any of the claims above, further comprising a further low-pass filter for computing a low-pass filtered input image, whereby the output of the further low-pass filter is connected to the input of the first filter.
6. A blur estimator as claimed in any of the claims above, whereby the first filter is adjustable.
7. A blur estimator as claimed in any of the claims above, whereby the output of the first filter is connected to the input of the second filter.
8. An image processing apparatus, comprising: means for receiving an input signal representing an input image; a blur estimator as claimed in any of the claims above for estimating a blur measure for a region of the input image; and a processing unit for computing an output image on basis of the input image and the blur measure.
9. An image processing apparatus as claimed in claim 8, whereby the processing unit is a de-blur unit, which is controlled by the blur measure.
10. An image processing apparatus as claimed in claim 8, whereby the processing unit comprises: a depth estimator for assigning depth values to respective regions of the input image on basis of respective blur measures; a rendering unit for computing the output image on basis of the depth values and the input image.
11. A method of estimating a blur measure for a region of an input image, comprising: computing a first filtered image on basis of the input image; computing a second filtered image on basis of the input image, the second filter being different from the first filter; computing ratio values between pixel values of the first filtered image and respective pixel values of the second filtered image; and outputting the blur measure by determining a maximum ratio value of the set of ratio values corresponding to the region of the input image.
12. A computer program product to be loaded by a computer arrangement, comprising instructions to estimate a blur measure for a region of an input image, the computer arrangement comprising processing means and a memory, the computer program product, after being loaded, providing said processing means with the capability to carry out: computing a first filtered image on basis of the input image; computing a second filtered image on basis of the input image, the second filter being different from the first filter; computing ratio values between pixel values of the first filtered image and respective pixel values of the second filtered image; and outputting the blur measure by determining a maximum ratio value of the set of ratio values corresponding to the region of the input image.
PCT/IB2006/054081 2005-11-16 2006-11-03 Blur estimation WO2007057808A2 (en)
