GB2377842A - Automated linear image reconstruction - Google Patents

Automated linear image reconstruction

Info

Publication number
GB2377842A
GB2377842A GB0211468A
Authority
GB
United Kingdom
Prior art keywords
image
components
kernel
psf
dft
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB0211468A
Other versions
GB0211468D0 (en)
GB2377842B (en)
Inventor
Mark David Cahill
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Publication of GB0211468D0 publication Critical patent/GB0211468D0/en
Publication of GB2377842A publication Critical patent/GB2377842A/en
Application granted granted Critical
Publication of GB2377842B publication Critical patent/GB2377842B/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/10Image enhancement or restoration using non-spatial domain filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/73Deblurring; Sharpening
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20048Transform domain processing
    • G06T2207/20056Discrete and fast Fourier transform, [DFT, FFT]

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

A process of correcting a blurred image of a scene involves dividing the discrete Fourier transform coefficients of the blurred image by the discrete Fourier transform coefficients of two point spread functions (kernels). The results are then inverse transformed, and the estimate likely to be the sharpest is taken as the true image.

Description

AUTOMATED LINEAR IMAGE RECONSTRUCTION

This invention relates to a process which improves the clarity of images, such as terrestrial photographs.
Many techniques exist for the clarification of images which have been blurred, for example by poor focus, movement of a photographic camera, or turbulence in the atmosphere. These usually either involve a considerable amount of computer power or make restrictive assumptions about the cause of the blurring, such as that any movement of the camera is uniform and in a straight line.
In order to remove blurring from an image of a real scene, it is necessary to know or to estimate the manner in which a typical point in the real scene would have been rendered in the image. This is described by a "point spread function" or "PSF", also known as a "kernel", which can simply be viewed as the appearance which a single point of light in a real scene would have, in an image which had been blurred in the same manner as the image under consideration. For example, an image in which the camera has been moved might have a kernel which is a single straight line. The invention described here assumes that the point spread function is substantially the same for all points in the scene.

The present patent application claims priority from a previous UK provisional patent application GB 0112397.5, from which it differs as follows. Mathematical statements have been removed from section 4 of Appendix 1 which were erroneous in describing, as examples, processes applied to a Fourier transform which should have been applied to the image itself. Since example 2 of the main body of the description in that previous application describes choosing of an estimate which is likely to be sharpest ("for example, that with the greatest sum of fourth powers of the first difference"), the final part of the paragraph preceding the examples in the main body, commencing "and if necessary", has been removed, being considered redundant. A better function is included as an example in equation 3, and finally, a note has been added to the first paragraph of Appendix 1 concerning prior publication, and a second Appendix has been added, which defines terms well known to students of mathematics and relevant to the claims.
This invention comprises the process of first estimating the kernel of an image by comparing the amplitudes of the components of the Fourier transform of the image with those which would be expected of an image of an ideal scene, as described in Appendix 1; spatially filtering the ratios of these two sets of amplitudes to derive an estimate of the amplitudes of the components of the Fourier transform of the kernel, as described in section 1 of Appendix 1; deducing the phases of the components of the Fourier transform of the kernel, up to a twofold uncertainty in the sign of the phase, by minimising the spatial extent of the kernel subject to the constraint that the amplitudes of the components of its Fourier transform must equal those estimated amplitudes, as described in section 2 of Appendix 1; and then using these two estimates of the kernel to estimate the appearance of the original scene from the blurred image, excluding excessive error from that estimate by restricting the amplification of random noise to a pre-defined level, as described in section 3 of Appendix 1.
Specific examples of the possible use of this process are as follows:
1. A photograph may be rendered digitally, and supplied to a program in a digital computer, which derives the kernel, and derives a filtered arithmetic inverse of the Fourier transform of the kernel, such as to restrict the amplification of random noise, as described in section 3 of Appendix 1. The program then derives an estimate of the appearance of the real scene, by multiplying numerically the components of the Fourier transform of the image by those of the filtered arithmetic inverse of the Fourier transform of the kernel. The result of this may then be inverse Fourier transformed to supply an estimate of the appearance of the real scene.
2. Since the kernel may be asymmetrical, and the process described above and in sections 1 and 2 of Appendix 1 does not distinguish between kernels which are not identical after being rotated by a half turn, a second estimate of the appearance of the real scene may be derived by rotating the kernel by a half turn, prior to dividing the components of the Fourier transform of the image, as described in example 1 above. Of these two, that estimate which is likely to be sharpest, for example, that with the greatest sum of fourth powers of the first difference, as described in section 4 of Appendix 1, can then be chosen as the best estimate. (A sketch of examples 1 and 2 is given after example 6 below.)
3. In the process described in examples 1 and 2 above, the kernel may be filtered to decrease errors in its estimation, by removing components with small values, as described in section 5 of Appendix 1.
4. In the process described in examples 1 to 3 above, the kernel may be filtered to decrease errors in its estimation, by removing components distant from its centre, as described in section 5 of Appendix 1.
5. If the kernel estimated in examples 1 to 4 has a spatial extent larger than that which is expected, then it may be taken that the "activity index", as defined in section 1 of Appendix 1, is too small, and a larger value may be chosen, prior to deriving the kernel again, as described in section 5 of Appendix 1.
6. A computer program may be constructed which, after parameters such as the permitted amplification of noise and expected extent of the kernel have been defined, will automatically recover scenes from digitally rendered blurred images which are supplied to it, by methods described in one or all of examples 1 to 4 above, and display the recovered scenes, or save images of these scenes for later reproduction.
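As an illustration of examples 1 and 2 above, the following is a minimal sketch in Python with numpy. The Wiener-style filtered inverse conj(K)/(|K|^2 + N^2) stands in for the filter of equation 21 of Appendix 1, which is not reproduced in this text; the function names, the noise parameter value, and the assumption that the kernel is supplied with its origin at element (0, 0) are illustrative choices rather than part of the patent.

```python
import numpy as np

def deblur_two_ways(image, kernel, noise_param=0.05):
    # Examples 1 and 2: deconvolve with the estimated kernel and with the same
    # kernel rotated by a half turn, then keep whichever result looks sharper.
    # The filter form below is an assumed Wiener-style choice, not equation 21.
    F = np.fft.fft2(image)
    estimates = []
    for k in (kernel, np.rot90(kernel, 2)):              # kernel and its half-turn rotation
        K = np.fft.fft2(k, s=image.shape)                # kernel assumed anchored at (0, 0)
        V = np.conj(K) / (np.abs(K) ** 2 + noise_param ** 2)   # filtered arithmetic inverse
        estimates.append(np.real(np.fft.ifft2(F * V)))
    def sharpness(img):
        # Example 2's measure: sum of fourth powers of the first difference.
        return np.sum(np.diff(img, axis=0) ** 4) + np.sum(np.diff(img, axis=1) ** 4)
    return max(estimates, key=sharpness)
```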
APPENDIX 1

What follows is a description of the procedures necessary to implement the invention, expressed in a manner which may be understood by a suitably qualified person, who should be familiar with the mathematics of two-dimensional Fourier transforms, and of image processing by digital algorithms. Sections 1 and 2 describe the estimation of the kernel, and may be considered as prior art, being present in UK patent application 0019048.8, and sections 3 and 4 describe the estimation of the original scene. Section 5 elucidates matters referred to in examples 2, 3 and 5 above.
1. Estimating the Amplitude of the Kernel.
Unless stated otherwise, it is assumed for simplicity that the image is greyscale, that is, it is not coloured. Coloured images can usually be considered as composed from three separate images, with the same or different kernels, but where appropriate, comments are made concerning the extension of the algorithm to coloured images.
Assume that an ideal image with arbitrarily great extent and with arbitrarily fine resolution, containing sufficient discernible features, has a Fourier transform (FT) the amplitude of which is largely invariant under rotations, and self-similar under scale transformations;
for all a and k, where |F(k)| is the modulus of the Fourier transform of the image, k is the modulus of the spatial frequency, and g(a) is an undetermined function of the scale parameter, a, then we can deduce that
where K is a constant related to the brightness of the image, and I is a quantity, assumed herein to be of order unity, describing the relative amount of detail at different scales, which will hereinafter be termed the "activity index". In practice, starfields have I of approximately zero, and landscapes and images of people against plain backgrounds appear to have I of around 0.7 and 1.3, respectively.
Thus, multiplying the FT of an image by the arithmetic inverse of the RHS of equation (2) may be expected to produce a field with approximately constant amplitude, so long as the image is not blurred, and systematic deviation from this may be considered to be due to a nontrivial kernel (i.e. one which is not a delta-function). Choice of an activity index differing by a small amount, of the order of 0.3, from that which might be expected, does not greatly mar reconstructed images which result from the processes described below, and K can be estimated from the low frequency components of the FT of the image itself. Its precise value is of no consequence, since it affects only the brightness and contrast of the reconstructed scene, which can be adjusted automatically prior to viewing.
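The displayed equations (1) and (2) are not reproduced in this text. Read with the surrounding description, they amount to assuming that the expected FT amplitude of an ideal scene falls off as a rotationally invariant power law in spatial frequency, with the activity index I as the exponent. The sketch below (Python with numpy) builds such an expected amplitude on a discrete grid; the specific form K.k^(-I) is an interpretation of the text, not a quotation of the patent's equations.

```python
import numpy as np

def expected_amplitude(w, h, activity_index, K=1.0):
    # Assumed expected amplitude of the FT of an ideal scene: K * k**(-I),
    # where k is the modulus of spatial frequency on the periodic grid.
    # The zero-frequency entry is left undefined, since the text discards it.
    i2 = np.minimum(np.arange(w), w - np.arange(w))      # wrapped frequency index in i
    j2 = np.minimum(np.arange(h), h - np.arange(h))      # wrapped frequency index in j
    k = np.sqrt(i2[:, None] ** 2 + j2[None, :] ** 2)
    U = np.full((w, h), np.nan)
    nonzero = k > 0
    U[nonzero] = K * k[nonzero] ** (-activity_index)
    return U
```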
In practice, images of interest are finite and are rendered by discrete pixels, so that their FTs are discrete and periodic. The following procedure is a suitable discrete and periodic version of the above procedure: Let
where
and
Here F(i, j) is the discrete Fourier transform (DFT) of the image, normalised to
f(l, m) = (2.Pi)^(-1).Sum(i, j){F(i, j).exp(2.Pi.i.i.l/w).exp(2.Pi.i.j.m/h)},    (5)
indexed by i and j, the period of i being w, and that of j being h, and f(l, m) being the original image. "Sum(i, j)" denotes the sum over possible values of i and j, B is the mean-squared modulus of F(i, j), i is the square root of -1, and Pi is the usual geometric constant which the present word processor cannot render with the conventional Greek character. The quantity A(i, j), which for want of a better term will hereinafter be termed the "activity" of the image, after smoothing as described below, will be taken to approximate the square of the amplitude of the DFT of the kernel. As noted above, the precise value of C is in fact not relevant, the above choice merely ensuring that the components of U(i, j) are not excessively large or small.
Other functions can be used in place of U(i, j) in equation (4a), if the specific techniques by which the image is digitised are sufficiently well understood.
Note however that the zero-frequency component of A(i, j) is identically zero; all information as to the mean value of the image is lost, but this is appropriate, since while the algorithm is linear, and assumes that the image is a linear representation of the blurred scene, a rendered photograph may not be strictly proportional to the intensity of light falling on the photographic film. The mean brightness of each colour needs to be recovered in the last stage of the procedure. Note also that the DFT treats the image as periodic, and to avoid discontinuities at the boundaries, which would corrupt A(i, j), it is advisable first to subtract the mean value of each colour component from the image, then to shade each colour component smoothly towards zero near the edges. The mean activity for each (i, j), of all colour components, can then be substituted for A(i, j) in equation 3, if the image has more than one colour component.
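A minimal sketch (Python with numpy) of the preparation just described: the mean of a colour component is subtracted and the component is shaded smoothly towards zero near the edges before its DFT is taken. The raised-cosine taper and the 16-pixel border width are illustrative; the text does not prescribe a particular shading function.

```python
import numpy as np

def prepare_for_dft(channel, border=16):
    # Subtract the mean, then taper the edges smoothly to zero so that the
    # periodic DFT sees no discontinuity at the image boundary.
    x = channel.astype(float) - channel.mean()
    def taper(n):
        t = np.ones(n)
        ramp = 0.5 - 0.5 * np.cos(np.pi * np.arange(border) / border)  # 0 at the edge, ~1 inside
        t[:border] = ramp
        t[-border:] = ramp[::-1]
        return t
    return x * taper(x.shape[0])[:, None] * taper(x.shape[1])[None, :]
```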
To smooth this final A (i, j), it is first inverse Fourier transformed, to obtain an estimate of the autocorrelation function of the kernel. Then a window is defined about the centre of the configuration space, larger than the expected extent of the autocorrelation function of the kernel. A function is defined by which to multiply the autocorrelation function, in order to mask it, which is unity near the centre of this region, so that the principal features of the autocorrelation function are retained, but
varying toward zero near the edge of the window. One choice for the function is:
g(i, j) = 1,   r < c
g(i, j) = 0.5 + 0.5.cos(Pi.(r - c)/d),   c < r < c + d
g(i, j) = 0,   r > c + d,    (6)
where c and d are constants and r^2 = i^2 + j^2. In order to smooth the edge of the autocorrelation function after masking (since it originally has zero mean, it has non-zero value near the edge), a quantity proportional to 1 - g(i, j) can be subtracted from the resulting, windowed autocorrelation function, such as to give it zero mean.
This final estimate of the kernel's autocorrelation function can now be discrete Fourier transformed again, in the new, smaller window, to obtain a compact estimate of the square of the amplitude, L (i, j), of the FT of the kernel. Information is still lacking as to the zero-frequency component.
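A sketch (Python with numpy) of this smoothing step: the window of equation 6 is applied about the centre of configuration space, and a multiple of (1 - g) is then subtracted so that the masked autocorrelation keeps zero mean. Placing the centre with fftshift is an assumed convention; the patent does not fix the coordinate layout.

```python
import numpy as np

def mask_autocorrelation(acf, c, d):
    # Window the estimated autocorrelation function with g of equation 6 and
    # restore a zero mean by subtracting a suitable multiple of (1 - g).
    acf = np.fft.fftshift(acf)                       # put the centre in the middle of the array
    h, w = acf.shape
    y, x = np.indices(acf.shape)
    r = np.hypot(y - h // 2, x - w // 2)
    g = np.where(r < c, 1.0,
                 np.where(r < c + d, 0.5 + 0.5 * np.cos(np.pi * (r - c) / d), 0.0))
    masked = acf * g
    one_minus_g = 1.0 - g
    if one_minus_g.sum() > 0:                        # multiple chosen so the result has zero mean
        masked -= one_minus_g * (masked.sum() / one_minus_g.sum())
    return np.fft.ifftshift(masked)
```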
2. Recovering the Phase of the Kernel.
The phase of the kernel may be recovered, if it is, in reality, sufficiently compact, by minimising a periodic equivalent of the fourth moment of its square, subject to the
amplitude of its Fourier transform complying with the values derived above. This is equivalent to minimising the squared modulus of a second difference of its FT, i. e. to requiring smoothness of the complex Fourier transform of the kernel. The second moment seems unsuitable, because it possesses unwelcome local minima in function space, but other measures of smoothness of the complex FT might be used.
A suitable method for this is as follows. First, construct a first guess as to the FT of the kernel, K(i, j, 0, 0), as the square root of the calculated squared modulus, L(i, j), above, giving each component a random phase, subject to the requirement that the complex conjugate of K(i, j, 0, 0) must equal K(-i, -j, 0, 0). It is possible that some values for L(i, j), which is constructed by transforming a filtered autocorrelation function, may be slightly negative. In this case, make the first guess a small positive value, such as the square root of its modulus, and apply a random phase, as above. Similarly, make the first guess of the zero-frequency component K(0, 0, 0, 0) the mean of the moduli of adjacent values. (The Fourier space is considered as periodic; a sketch of this first guess is given after step 8 below.) Now iterate by two nested loops, the inner being signified by the third index of K, m, the outer by the fourth index, n: 1) Derive a fourth derivative, which might be termed a "logarithmic fourth derivative", by;
If i = 0 and j = 0, or if L(i, j) is negative, then the amplitude is considered unknown. In this case,
Otherwise,
Here E is a small number; 10^(-12) seems suitable for most purposes, and
otherwise
In equation 14, T(n) is a weighting which varies from 1 initially, to reach zero before the termination of the outer loop. This governs the relative weighting given to the 'dynamical' term, which requires K(i, j, m, n) to be smooth, and the 'potential' term, which requires the modulus of the FT of the kernel, K(i, j, m, n), to equal its estimated value.
2) Calculate the scalar product of the fourth derivative with itself;
y(0, n) = Sum(i, j){R0(i, j, 0, n)^2 + R1(i, j, 0, n)^2}    (17)
3) Apply this fourth derivative to obtain an estimate for K (i, j, m+1, n) ;
K(i, j, m+1, n) = K(i, j, m, n).exp(a(m, n).[R0(i, j, 0, n) + i.R1(i, j, 0, n)]),    (18) where a(0, n) can be taken to be unity. i is the square root of -1.
4) Evaluate the fourth derivative R0(i, j, 1, n) and R1(i, j, 1, n) of K(i, j, 1, n), as in step 1 above.
5) Calculate the scalar product of the new fourth derivative with the original
6) Regarding y (m, n) as an approximately linear function of a (m, n), derive a (2, n) which will make y (2, n) zero, and repeat step 3, equation 18 to derive the corresponding value of the kernel, K (i, j, 2, n) from the original derivative.
7) Setting K (i, j, 0, n+ 1) = K (i, j, 2, n), repeat from step 1, for increasing indices n.
About 256 iterations of this outer loop are usually sufficient to recover the phase of a kernel derived in a window of 32 by 32 elements.
8) The components of the FT of the recovered kernel can then be taken as
for some large N, such as 256.
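The first guess described at the start of this section might be built as in the sketch below (Python with numpy). The way the random phases are made antisymmetric, and which neighbouring components are averaged for the zero-frequency value, are assumptions made for the sketch; the text leaves those details to the implementer.

```python
import numpy as np

def first_guess(L, seed=0):
    # K(i, j, 0, 0): amplitude sqrt(|L|) (so slightly negative L still yields a
    # small positive value), random phases constrained so that K(-i, -j) is the
    # complex conjugate of K(i, j), and a zero-frequency value taken as the
    # mean of the moduli of adjacent components.
    rng = np.random.default_rng(seed)
    neg = lambda A: np.roll(np.flip(A, (0, 1)), 1, (0, 1))   # A(i, j) -> A(-i, -j) on the periodic grid
    phase = rng.uniform(-np.pi, np.pi, L.shape)
    phase = 0.5 * (phase - neg(phase))                       # antisymmetric phase
    amp = np.sqrt(np.abs(L))
    amp = 0.5 * (amp + neg(amp))                             # symmetric amplitude
    K = amp * np.exp(1j * phase)
    K[0, 0] = np.mean(np.abs([K[1, 0], K[-1, 0], K[0, 1], K[0, -1]]))
    return K
```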
3. Deconvolution.
The preceding algorithm recovers a phase for the kernel, as a function of spatial frequency, and the amplitude of the zero-frequency component, up to a twofold ambiguity in the sign of the phase. Effectively there are two kernels, which in configuration space ("normal" space) are images of each other under a 180 degree rotation.
Deconvolving the image might crudely be described as dividing the FT of the image by the FT of the kernel. However, as is well known, this can result in the excessive amplification of random errors in the original image, due to components of the FT of the kernel which have small amplitude. It is therefore convenient to use a form of Wiener filter to limit the amplification of noise in the recovered image. Such a filter always contains at least one parameter, which controls this limitation, and a significant part of this invention is that this parameter is chosen to control the root-mean-square (RMS) of the amplification of random noise over the entire recovered image, rather than, for example, the expected amplitude of noise at each spatial frequency.
Many forms of filter can be chosen, depending for example on whether the error ("noise") is considered as evenly distributed over spatial frequencies or proportional to the signal. In what follows a simple case is considered, which appears to be effective, in which the amplitude of noise is considered constant over all spatial frequencies.
In this case, a filtered arithmetic inverse of the FT of the kernel may be obtained as
where N is a control parameter, describing the level of noise, and limiting the value of the arithmetic inverse, and * denotes the complex conjugate. K(0, 0) is the real-valued zero-frequency component of the FT of the kernel.
A suitable value for N can be estimated by noting that the mean squared noise in the estimated scene will be amplified by the root-mean-square value of V(i, j), which may be written
Amplification = (Sum(i, j){|V(i, j)|^2}/w/h)^0.5.    (22)
Given this, it is a simple and efficient matter in a modern digital computer to find the value of N which will give the desired level of amplification of noise, for a given kernel, using an iterative algorithm.
The filter in equation 21 was chosen so as to limit greatly the amplification of spatial frequencies for which the amplitude of the FT of the kernel is particularly small. A consequence of this is that a small choice for the permitted amplification of RMS noise can be sufficient to improve the image, since in this case noise can be reduced at spatial frequencies for which there is little information in the image, balanced by the raising of noise levels at spatial frequencies for which more reliable information (higher signal to noise ratio) is available. Values for "Amplification" between 1 and 3 seem suitable.
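The search for N might be carried out as in the following sketch (Python with numpy). The filter form conj(K)/(|K|^2 + N^2) stands in for equation 21, which is not reproduced in this text, and the bisection brackets are illustrative; the only property relied on is that the amplification of equation 22 decreases as N grows.

```python
import numpy as np

def filtered_inverse(K, N):
    # Assumed Wiener-style stand-in for equation 21: conj(K) / (|K|^2 + N^2).
    return np.conj(K) / (np.abs(K) ** 2 + N ** 2)

def choose_noise_parameter(K, target_amplification, lo=1e-8, hi=None, iters=60):
    # Find N so that the RMS value of V(i, j) (equation 22) equals the
    # permitted amplification, by bisection on N.
    w, h = K.shape
    amplification = lambda N: np.sqrt(np.sum(np.abs(filtered_inverse(K, N)) ** 2) / (w * h))
    if hi is None:
        hi = np.max(np.abs(K)) + 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if amplification(mid) > target_amplification:
            lo = mid                                  # amplification too large: increase N
        else:
            hi = mid
    return 0.5 * (lo + hi)
```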
The two estimates of the scene can then be derived as follows;
First, subtract out the mean brightness of the image, and shade the values of pixels near the edges to zero as described above, in section 1. Then Fourier transform the result to obtain F (i, j).
The two estimates of the original scene can be recovered by deriving their Fourier transforms S0(i, j) and S1(i, j) as
The extension of this to coloured images is effectively trivial.
4. Choice of Reconstruction.
The procedures of sections 1 to 3 above produce two estimates of the scene. It is now desirable to choose between them. In order to do this, it is necessary to find some measure of the sharpness of the resulting images. In general, this is a complicated task, but is simplified by noting that the amplitudes of the components of the Fourier transforms of the two reconstructed images are identically equal, and choosing the reconstructed image with the largest sharpness to be that for which the variation is concentrated in the smallest or fewest regions. This image can then be taken as the most accurate, corresponding to the correct phase of the kernel. Other definitions of sharpness are possible.
Finally, each component should be multiplied by a suitable constant value, and have another suitable constant value added, so as to place the values of the recovered image in a suitable range for display or storage, for example between 0 and 255. At this
stage, the original mean value of the components of each colour of the image may be restored.
5. Filtering the Kernel.
Finally, there follows a note on methods for improving the estimate of the kernel.
The methods described above for estimating the kernel depend on broad assumptions about the image, in particular about the invariance of its FT under transformations of scale and under rotations. In practice, images rarely have perfectly regular Fourier Transforms, and this introduces errors into the estimated kernel. These are broadly of two kinds: random errors, which may be considered as "noise" in the kernel itself, and systematic errors, such as extended features in the estimated kernel due to extended structures in the images such as the horizon, or stands of vertical trees or masts.
Two techniques can, on occasion, reduce these errors. Random errors can be reduced by excluding low amplitude components in the kernel, in much the manner of the Wiener filter. If C(i, j) is the kernel (in configuration space, not the FT of the kernel), then the filtered value can be taken as C1(i, j), where
N2 is a constant of the order of the Nth power of the RMS value of C (i, j), and N is an even power to which the kernel value is raised, 4 being effective.
The above method may be particularly effective embedded in the algorithm of section 2 for deducing the phase of the kernel ; the result of the filtering, equation 27, can be
used as a new estimate of the phase of the kernel, prior to running the phase recovery algorithm a second time.
Systematic errors in the kernel are harder to remove, but a suitable method is simply to remove parts of the kernel most distant from its centre. For example, if a kernel is believed to have a spatial radius of 4 pixels, it can be represented within a grid 8 pixels across, and its autocorrelation function will require a region 16 pixels across. One might then derive the kernel in a window 32 pixels across, to avoid errors due to the filter of equation 6, with c in that equation being 16 and d being 32. Once the kernel has been recovered, in order to limit systematic errors in the recovered kernel one can then apply the same filter, equation 6, to the kernel, but with c equal to 4 and d equal to 8.
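The two filters of this section might be sketched as follows (Python with numpy). The displayed equation 27 is not reproduced in this text; the soft suppression C.C^N/(C^N + N2), with N = 4 and N2 of the order of the fourth power of the RMS of C, is an assumed reading of the description above. The second function reuses the window of equation 6 and assumes the kernel array has its centre in the middle of the grid.

```python
import numpy as np

def suppress_small_components(C, power=4, scale=1.0):
    # Soft-suppress low-amplitude kernel values, in the manner described for
    # equation 27 (assumed form: C * C**N / (C**N + N2), N even, here 4).
    N2 = scale * np.sqrt(np.mean(C ** 2)) ** power + 1e-30
    return C * C ** power / (C ** power + N2)

def trim_distant_components(C, c, d):
    # Remove parts of the kernel distant from its centre by reapplying the
    # window of equation 6 with smaller c and d (e.g. c = 4, d = 8 for a
    # kernel believed to span about 4 pixels).
    h, w = C.shape
    y, x = np.indices(C.shape)
    r = np.hypot(y - h // 2, x - w // 2)
    g = np.where(r < c, 1.0,
                 np.where(r < c + d, 0.5 + 0.5 * np.cos(np.pi * (r - c) / d), 0.0))
    return C * g
```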
Finally, it may be possible to estimate a suitable value for the "activity index", I, of equations 4a and 4b. For example, if the spatial extent of the recovered kernel is significantly greater than that which is expected, a measure of which might be that the use, as described above, of the filter given by equation 6 significantly changes the reconstructed kernel, it might be taken that the "activity index", I, is too small, that is that the true scene does not possess a great deal of detail at high spatial frequencies, and so the quantity I might be increased and the entire process of sections 1 and 2 repeated with the increased value of I.
APPENDIX 2

Certain basic concepts familiar to students of mathematics are defined here, because of their importance to the specification of this invention.
1. Discrete Fourier Transform

Under certain reasonable circumstances, a periodic function of one or more discrete variables may be expressed as a sum of functions of that variable. For the purposes of this document, if the brightness of any given pixel in a rectangular image is f(l, m), l being the horizontal index of a pixel, and m being the vertical index, and if the width
and height of the image are respectively w and h pixels, it may be expressed as
f(l, m) = K.Sum(i, j){F(i, j).exp(2.Pi.i.i.l/w).exp(2.Pi.i.j.m/h)},    (1)
where i is the square root of -1, and Sum(i, j){...} represents a sum over values of i between 0 and w-1, and over j between 0 and h-1. K is a constant the precise value of which is not important, but K = (w.h)^(-1) is used in this document. exp() is the conventional exponential function.
Note that discrete Fourier transforms are often defined which use cosines or sines instead of complex exponential functions. While complex exponentials are convenient to use in the process in question, and this document is couched in terms of them, the term discrete Fourier transform is intended to include also those which use sines and cosines explicitly.
Note also the difference between the characters "l" ("el") and "1" ("one"), which should in any event be obvious from context, and the distinction between the character "i", used as an index or co-ordinate of F, and "i" used for the square root of -1.
The complex-valued coefficients F(i, j) are collectively termed the discrete Fourier transform (also referred to in this document as the "DFT"), or more loosely the Fourier transform, of f(l, m), and may be regarded as periodic in i, with period w, and in j, with period h. Each F(i, j), for a given pair of values of i and j, is a "component" of the discrete Fourier transform of the image, and its magnitude M(i, j) and phase Theta(i, j) are given by
F(i, j) = M(i, j).exp(i.Theta(i, j)),    (2)
M (i, j) and Theta (i, j) being real-valued quantities.
Similarly, f(l, m) is termed the inverse discrete Fourier transform (also referred to in this document as the "IDFT"), or simply the inverse Fourier transform, of F(i, j).
The indices i and j are measures of spatial frequency in two orthogonal directions, and the magnitude of spatial frequency can be considered as given by the positive square root of the sum of the squares of i2 and j2, where i2 is the smaller of i and w-i, and j2 is the smaller of j and h-j.
The IDFT of the squared magnitude of the DFT of a function is termed the autocorrelation function of that function.
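As a small illustration of this definition, assuming numpy's FFT conventions (which differ from the normalisation of equation 1 only by an overall constant factor):

```python
import numpy as np

def autocorrelation(f):
    # The IDFT of the squared magnitude of the DFT of f, as defined above.
    F = np.fft.fft2(f)
    return np.real(np.fft.ifft2(np.abs(F) ** 2))
```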
2. Linear Variation

A quantity, A, is said to vary linearly with respect to another, B, if for all values of B, A is equal to the product of B and a constant, plus a second constant, i.e.
A = c. B+d (3) where c and d are constants.
3. Spatial Filtering
If the function F(i, j) in section 1 of this Appendix is derived from f(l, m), multiplied by another, usually more slowly varying or simpler, function of i and j, and the IDFT of the result is calculated, then f(l, m) is said to have been "spatially filtered".
Similarly, if the function f(l, m) in section 1 of this Appendix is derived from F(i, j), multiplied by another, usually more slowly varying or simpler, function of l and m, and the DFT of the result is calculated, then F(i, j) is said to have been "spatially filtered".
In the context of this document, spatial filtering is used to smooth or average rapidly varying functions, to gain an indication of their broad behaviour.
4. Spatial Extent.
The spatial extent of a function of two dimensions which has no clear boundaries may be measured by what is known as a "moment". In effect, the sum is calculated of the products of the values of the function with another function, which second function has larger values at points farther from an arbitrary centre. This sum is then divided by the sum of the values of the first function, the result being the "moment".
As is indicated in Appendix 1, a convenient measure of the spatial extent of the point spread function is a moment of its squared value.
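For illustration, a moment of this kind might be computed as in the sketch below (Python with numpy), using the squared value of the PSF and the squared distance from the array centre as the two functions; the choice of powers here is illustrative, and the measure actually minimised in Appendix 1 is a periodic fourth-moment variant.

```python
import numpy as np

def spatial_extent(psf):
    # Moment measure of spatial extent: sum(psf^2 * r^2) / sum(psf^2),
    # with r the distance from the array centre.
    h, w = psf.shape
    y, x = np.indices(psf.shape)
    r2 = (y - h // 2) ** 2 + (x - w // 2) ** 2
    weight = psf ** 2
    return np.sum(weight * r2) / np.sum(weight)
```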

Claims (4)

  1. A process in which two estimates of the appearance of a scene are derived, from a blurred image of that scene, by dividing the components of the DFT, as hereinbefore defined, of a numerical representation of that image by those of the DFTs of two PSFs, as hereinbefore defined, one of which results by estimating the magnitudes of the components of its DFT to be those resulting from spatially filtering, as hereinbefore defined, the ratios of the squared magnitudes of the components of the DFT of a numerical representation of the same blurred image to expected squared magnitudes, and taking as the phases of the components of the DFT of that PSF, those which minimise a measure of the spatial extent of the resulting estimate of the PSF, the second PSF differing from this first PSF only in that the phases of the components of its DFT differ in sign from those of the first, exceptionally small values of the magnitudes of the components of the DFTs of the PSFs being replaced by larger ones in the aforesaid divisions, in such a manner as to restrict the amplification of random errors in the image in the process of division, the results of those divisions then being inverse discrete Fourier transformed, as hereinbefore defined, to obtain the two estimates of the appearance of the original scene, that estimate of the appearance of the original scene which exhibits the most abrupt variations of brightness being chosen as the best estimate.
  2. A process as described in claim 1 above, in which the PSF is filtered to decrease errors in its estimation, by removing components with small values.
  3. A process as described in any of claims 1 or 2 above, in which the PSF is spatially filtered to decrease errors in its estimation, by removing components distant from its centre.
  4. A process as described in any of claims 1 to 3 above, in which, if the PSF estimated has a spatial extent larger than that which is expected, then the expected squared amplitudes of claim 1 are adjusted, prior to deriving the PSF again.
GB0211468A 2001-05-22 2002-05-20 Automated linear image reconstruction Expired - Fee Related GB2377842B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GBGB0112397.5A GB0112397D0 (en) 2001-05-22 2001-05-22 Automated linear image reconstruction

Publications (3)

Publication Number Publication Date
GB0211468D0 GB0211468D0 (en) 2002-06-26
GB2377842A true GB2377842A (en) 2003-01-22
GB2377842B GB2377842B (en) 2005-07-06

Family

ID=9915050

Family Applications (2)

Application Number Title Priority Date Filing Date
GBGB0112397.5A Ceased GB0112397D0 (en) 2001-05-22 2001-05-22 Automated linear image reconstruction
GB0211468A Expired - Fee Related GB2377842B (en) 2001-05-22 2002-05-20 Automated linear image reconstruction

Family Applications Before (1)

Application Number Title Priority Date Filing Date
GBGB0112397.5A Ceased GB0112397D0 (en) 2001-05-22 2001-05-22 Automated linear image reconstruction

Country Status (1)

Country Link
GB (2) GB0112397D0 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2387728A (en) * 2002-03-02 2003-10-22 Mark David Cahill Rapid linear reconstruction of coloured images
ES2291129A1 (en) * 2006-08-03 2008-02-16 Consejo Superior De Investigaciones Cientificas Rivet manufactured for non-metallic materials

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2367440A (en) * 2000-08-04 2002-04-03 Mark David Cahill Linear image reconstruction

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2367440A (en) * 2000-08-04 2002-04-03 Mark David Cahill Linear image reconstruction

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2387728A (en) * 2002-03-02 2003-10-22 Mark David Cahill Rapid linear reconstruction of coloured images
ES2291129A1 (en) * 2006-08-03 2008-02-16 Consejo Superior De Investigaciones Cientificas Rivet manufactured for non-metallic materials
WO2008020109A1 (en) * 2006-08-03 2008-02-21 Consejo Superior De Investigaciones Científicas Method for restoration of images which are affected by imperfections, device for implementation of this, and the corresponding applications

Also Published As

Publication number Publication date
GB0211468D0 (en) 2002-06-26
GB2377842B (en) 2005-07-06
GB0112397D0 (en) 2001-07-11

Similar Documents

Publication Publication Date Title
US10325358B2 (en) Method and system for image de-blurring
US10672112B2 (en) Method and system for real-time noise removal and image enhancement of high-dynamic range images
US9262808B2 (en) Denoising of images with nonstationary noise
Tiwari et al. Review of motion blur estimation techniques
US6611627B1 (en) Digital image processing method for edge shaping
CN108090886B (en) High dynamic range infrared image display and detail enhancement method
BRPI0808277A2 (en) &#34;COMPUTER IMPLEMENTED METHOD FOR PROCESSING SYNTHETIC OPENING RADAR IMAGES (SAR) AND COMPUTER SYSTEM FOR PROCESSING SYNTHETIC OPENING RADAR IMAGES (SAR)
BRPI0808278A2 (en) &#34;COMPUTER IMPLEMENTED METHOD FOR PROCESSING INTERFEROMETRIC OPEN SYNTHETIC RADAR (SAR) IMAGES AND COMPUTER SYSTEM FOR PROCESSING INTERFEROMETRIC SYNTHETIC OPEN RADAR (SAR) IMAGES&#34;
BRPI0808279A2 (en) &#34;COMPUTER IMPLEMENTED METHOD FOR COMPRESSING SYNTHETIC OPENING RADAR (SAR) IMAGES AND COMPUTER SYSTEM FOR COMPRESSING SYNTHETIC OPENING RADAR (SAR) IMAGES&#34;
BRPI0808280A2 (en) &#34;COMPUTER IMPLEMENTED METHOD FOR UNCOMPRESSING SYNTHETIC OPENING RADAR (SAR) IMAGES AND COMPUTER SYSTEM FOR DECOMPRESSING SYNTHETIC OPENING RADAR (SAR) IMAGES&#34;
BRPI0808283A2 (en) &#34;COMPUTER IMPLEMENTED METHOD FOR RECORDING SYNTHETIC OPENING RADAR (SAR) IMAGES AND COMPUTER SYSTEM FOR RECORDING SYNTHETIC OPENING RADAR (SAR) IMAGES&#34;
CN113962908B (en) Pneumatic optical effect large-visual-field degraded image point-by-point correction restoration method and system
BRPI0808276A2 (en) &#34;COMPUTER IMPLEMENTED METHOD FOR PROCESSING SYNTHETIC OPENING RADAR (SAR) IMAGES AND COMPUTER SYSTEM FOR PROCESSING COMPLEX SYNTHETIC OPENING RADAR (SAR) IMAGES&#34;
Li et al. Blind motion image deblurring using nonconvex higher-order total variation model
GB2377842A (en) Automated linear image reconstruction
Wong et al. Regularization-based modulation transfer function compensation for optical satellite image restoration using joint statistical model in curvelet domain
Wang et al. An airlight estimation method for image dehazing based on gray projection
Nancy et al. Comparative analysis and implementation of image enhancement techniques using MATLAB
Xue et al. SURE-LET image deconvolution using multiple Wiener filters
Muhammad et al. Matlab Program for Sharpening Image due to Lenses Blurring Effect Simulation with Lucy Richardson Deconvolution
GB2367440A (en) Linear image reconstruction
Farzana et al. Bilateral filtering with adaptation to phase coherence and noise
Zhu Measuring spatially varying blur and its application in digital image restoration
Grachev et al. Development of algorithms and software of flat X-ray image processing
WO2020250412A1 (en) Image processing device and image processing method

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20120520