GB2379113A - Image data analysis

Info

Publication number
GB2379113A
GB2379113A
Authority
GB
United Kingdom
Prior art keywords: edge, data, image, gaussian, equation
Prior art date
Legal status
Withdrawn
Application number
GB0116468A
Other versions
GB0116468D0 (en)
Inventor
Clemens Mair
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to GB0116468A
Publication of GB0116468D0
Publication of GB2379113A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20048 Transform domain processing
    • G06T2207/20061 Hough transform


Abstract

A method for analysing a set of data to obtain characteristics defining the data is proposed. Numerical values for the characteristics are obtained by expressing the characteristics in terms of partial derivatives and local differences of the data with respect to the characteristics. The orders of the derivatives and differences are chosen to be as low as possible. The characteristics may include spatial parameters that define the position of a data point relative to an event such as an edge or corner in the data, and a blur parameter. The method can therefore be used for edge detection and blur estimation. It can also form part of a system to estimate object distances from penumbral blur. A method for obtaining the characteristics of non-linear imaging sensors from signal blurring is also described.

Description

Data Analysis

The present invention relates to methods of analysing data, such as image data, to locate events such as peaks, corners or edges in the data and/or to determine the extent of blurring of an image.
Data such as image data can be expressed as a function of image intensity along one or more axes. Various characteristics of the data can be obtained by analysing the data, and these characteristics have many useful applications, including edge detection to locate objects within an image and measuring the extent of blurring or defocus of an image. The estimated characteristics may also have applications in the analysis of other types of data, such as radio signals.
Consequently, many ways of estimating these characteristics from image data have been suggested in the past.
M. C. Chiang and T. E. Boult, "Local blur estimation and super-resolution," Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 821-826, 1997, describes a method of fitting a third-order polynomial to a set of image data and solving for an estimate of the characteristics of the image data using a set of equations obtained from the polynomial and its first and third derivatives. This method has the disadvantage, however, that fitting the third-order polynomial to the data is mathematically complex. The method is thus time consuming, and potentially inaccurate if the polynomial is not accurately fitted to the data.
J. H. Elder and S. W. Zucker, "Local scale control for edge detection and blur estimation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 20, no. 7, pp. 699-716, 1998, discloses a method in which the data is modelled by a Gaussian curve and edges are deemed to exist at the points at which the second derivative of the function is zero. In this method the results obtained depend on the scaling factors in the gradient of the data and are consequently less accurate than is desirable.
From a first aspect, the present invention provides a method of analysing a set of data to obtain characteristics defining the data in which the data is modelled by a Gaussian curve and derivatives of the Gaussian curve are used to solve for the numerical values of the characteristics.
In one embodiment, the Gaussian curve is obtained by convolving the equation of a straight step-edge with a Gaussian blurring kernel.
Alternatively, the Gaussian curve may be obtained by convolving a Gaussian blurring kernel with the equation of a logarithmic step response such that the characteristics of the data may be obtained for a data set representing a logarithmic sensor response.
In a further alternative embodiment, each edge of a corner is modelled as a Gaussian blurred step edge and the model of the corner obtained is differentiated along each edge.
Preferably, the first, second and third derivatives of the Gaussian are used to solve for the characteristics.
In an alternative embodiment, the Gaussian curve is obtained by convolving the equation of a perfect impulse with a Gaussian blurring kernel.
In this case, the Gaussian curve and its first and second derivatives are preferably used to solve for the characteristics.
Preferably, the characteristics found include an indication of the location of sharp transitions in intensity of the data or edges within the data.
Still more preferably the characteristics found include a measure of the extent of blurring of the image data.
In one preferred embodiment of the invention, the measure of the extent of blurring of the image data is used to estimate the depth of an object in the image.
The invention has been defined above in terms of the data being modelled by a Gaussian curve. However, in an alternative embodiment of the invention in which discrete data is analysed, the data could be modelled by a modified Bessel function. Further, it is believed that any function for which the defining parameters can be expressed in terms of the derivatives with respect to the spatial parameters could be used to model the data in order to find the numerical values of the characteristics of the data.
From a further aspect therefore, the present invention provides a method of analysing a set of data to obtain characteristics defining the data in which the data is modelled by a function and the function is solved using the first, second and third derivatives thereof to obtain the characteristics of the data.
By using function derivatives, the solution is as local as possible. Furthermore, by using the first, second and third derivatives of the function, the errors in the solution produced by signal noise are reduced.
Preferably, the signal analysed by the methods of the invention is passed through a Gaussian smoothing filter before or during analysis.
This will allow the inaccuracies in the solution due to signal noise to be further reduced.
Still more preferably, the methods of the invention may be used to analyse a discrete data set and a point in the data set is found to be at an event such as a peak, edge or corner in the data if the distance of the point from the event is less than a predetermined threshold.
Further, tolerances may be introduced into the constraints on the estimated characteristics to allow for signal noise and/or sampling noise.
Still more preferably, the direct component of the signal being sampled is minimised to reduce errors in the estimated characteristics caused by signal bias.
Alternatively, a Gaussian derivative kernel from which the function modelling the data is obtained may be normalised to reduce the errors in the estimated characteristics caused by signal bias.
The estimated characteristics obtained over several signal points may be averaged to improve the accuracy of the results obtained.
From a further aspect, the present invention provides a method of analysing a discrete set of data to obtain characteristics defining the data in which the data is modelled by a Gaussian curve and the first and second derivatives of the Gaussian, together with the difference in values between adjacent points in the data, are used to solve for the characteristics of the data.
From a yet further aspect, the present invention provides a method of analysing a discrete set of data to obtain characteristics defining the data in which the data is modelled by a function obtained by convolving the equation of a double step edge with a Gaussian blurring kernel, and solving to obtain the characteristics of the data using the first to fifth derivatives of the function.
This has the advantage of allowing objects whose width in an image is less than the width of the blurring filter applied to the data to be detected.
From a still further aspect, the present invention provides a method of characterising a non-linear imaging sensor response in which an image containing step edges of known spatial distribution is defocussed before being provided to the non-linear imaging sensor and the output of the non-linear imaging sensor is analysed to obtain the characteristics of the non-linear imaging sensor.
Thus, this method allows the characteristics of a nonlinear image sensor to be inferred solely by changing the blurring of an image caused by a defocussed optical system and analysing the signals obtained by the sensor.
Preferably, the characteristics obtained may be used to convert the output of the non-linear imaging sensor to a linear response type signal so that linear-type vision algorithms may be applied to the signal.
From a further aspect, the present invention provides a system for obtaining the characteristics of a non-linear imaging sensor including a test signal generator, focussing means for blurring the output of the test signal generator, and a signal analysis unit for analysing the output from the non-linear imaging sensor which receives the blurred test signals.
From a still further aspect, the present invention provides a method of estimating the distance of an object in an image from a light source, in which the object is illuminated by a light source which provides an occluding edge at a known distance therefrom, an image of the illuminated object is obtained, the width of the area of penumbral blur in the image is measured and the distance of the object from the light source is then calculated using the measured width and the known distance of the occluding edge from the light source.
Preferably, the width of the area of penumbral blur in the image is obtained by analysing the image data to obtain the characteristics thereof using one of the methods described above in which the data is modelled by a Gaussian curve, and calculating the width as being dependent on the square root of the variance of the Gaussian blurring kernel.
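By way of illustration only (this sketch is not from the patent: it additionally assumes that the effective width W of the light source's emitting aperture is known, and the function name is hypothetical), the similar-triangles geometry behind this depth estimate can be written as:

```python
def depth_from_penumbra(w, d, W):
    """Estimate the distance D of an object from the light source.
    w: measured penumbra width on the object,
    d: known distance of the occluding edge from the source,
    W: assumed effective width of the source (not stated in the patent).
    By similar triangles w = W * (D - d) / d, hence:"""
    return d * (1.0 + w / W)
```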
From a further aspect, the present invention provides software for implementing any of the methods described above.
Preferred embodiments of the invention will now be described by way of example only and with reference to the accompanying drawings, in which:

Figure 1 is an image of a straight blurred edge;
Figure 2 is the intensity function for the image shown in Figure 1;
Figure 3 is an edge image of the image of Figure 1;
Figure 4 shows a one-dimensional step edge h(x) and Gaussian blurring kernel g(x);
Figure 5 shows schematically the provision of noise smoothing by Gaussian filtering;
Figure 6 shows the penumbral blur of an occluding edge which is not parallel to the shadow plane of a light source;
Figure 7 shows a simulated blurred Gaussian step edge;
Figures 8a to 8e show the results of the analysis of the image of Figure 7;
Figure 9 is a plot of the blur parameter t estimates along the edge pixels of Figure 8c;
Figure 10 is a photograph of a woman;
Figure 11 shows an edge image obtained for Figure 10;
Figure 12 shows an improved edge image for Figure 10;
Figure 13 shows another improved edge image for Figure 10;
Figure 14 shows a section of an image of a contact wire;
Figure 15 shows the intensity profile along a horizontal scan line half way down Figure 14;
Figure 16 shows the gradient profile along the scanline of Figure 15 after smoothing;
Figure 17 shows the results of the analysis of Figure 14;
Figure 18 shows an ideal step line;
Figure 19 shows schematically an imaging process using a lens and a finite aperture;
Figure 20 shows an apparatus for measuring depth from penumbral blur;
Figure 21 is an image of a blurred corner;
Figure 22 is an intensity function of the image of Figure 21;
Figure 23 is an edge image of the corner of Figure 21;
Figure 24 shows a Gaussian function across one edge of the corner;
Figure 25 shows a Gaussian function across the other edge of the corner; and
Figure 26 shows the corner of Figure 21 and the edge base coordinate systems thereof.
Figure 1 shows a grey-level image consisting of individual pixels arranged on a square grid. The spatial coordinates along the columns and rows of the grid are denoted by x and y respectively. The size of the image of Figure 1 is 50 by 50 pixels. The image has a lower intensity value h0 = 0.2, an edge height Δh = 1.3, a higher intensity value h0 + Δh = 1.5, and a variance of the Gaussian blurring kernel of t = 20.
The grey level values of the individual pixels can be described by the intensity function a(x, y) as shown in Figure 2. The intensity a(x, y) in this case is the quantity measured by a monochrome area scan camera.
Given the intensity function a(x, y) for an image, the present invention provides a means of identifying edge pixels, i.e. pixels which lie on the centre line of a transition from the dark to the bright area. For the image of Figure 1, the edge image obtained will be as shown in Figure 3, with the edge pixels in white and the non-edge pixels in black.
The edge pixels in the image are detected by modelling the image data as a Gaussian blurred one-dimensional step edge. In general terms, Gaussian edge characterisation considers a blurred edge as the result of an ideal straight step edge which is low-pass filtered with a Gaussian kernel. The aim of Gaussian edge characterisation is to compute four parameters at every image location (x, y) given only the intensity values a(x, y) of an image. The four parameters computed are:
1. the shortest (perpendicular) distance |s| of a pixel at position (x, y) from an edge in the image;
2. the edge blurring t, i.e. the variance of the Gaussian blurring kernel;
3. the edge height Δh; and
4. the lower grey level value h0.
To derive a Gaussian blurred one-dimensional step edge, an ideal, i.e. unblurred, step edge at position x0 in a one-dimensional signal is considered. The one-dimensional step edge to which a one-dimensional Gaussian blurring kernel g(x) has been applied is shown in Figure 4. The equation of the ideal step edge is given by

$$h(x) = h_0 + \Delta h \, u_c(x - x_0) \quad (1)$$

where Δh = h1 - h0, uc denotes the one-dimensional continuous step function, and h0 and h1 are the constant intensity values for x < x0 and x > x0 respectively.

A spatial variable s is defined which starts from x0, i.e. s = x - x0. Substituting s into equation 1, the equation becomes

$$h(s) = h_0 + \Delta h \, u_c(s) \quad (2)$$

The equation of a one-dimensional Gaussian blurring kernel g(s, t) is given by

$$g(s, t) = \frac{1}{\sqrt{2\pi t}} \exp\!\left(-\frac{s^2}{2t}\right) \quad (3)$$

The blur parameter t is equal to the variance of the Gaussian kernel and is always positive. The equation of the blurred step edge a is found by convolving h and g (i.e. equations 2 and 3) to give

$$a(s, t) = h(s) * g(s, t) \quad (4a)$$

$$a(s, t) = h_0 + \Delta h \, (u_c * g)(s, t) \quad (4b)$$

Convolution with the step function uc is equivalent to integration and so equation 4b can alternatively be written as

$$a(s, t) = h_0 + \Delta h \, \Phi(s, t) \quad (5)$$

where Φ(s, t) denotes the cumulative density function of the Gaussian distribution.
In order to recover each of the four edge parameters listed above unambiguously, four equations are required.
In order to minimise the effects of noise distorting the data obtained, the four equations are obtained from equation 5 and the first three derivatives thereof with respect to s. For precise estimates of the edge parameters, accurate estimates of the image derivatives are needed. Differentiation enhances signal noise and so ideally the signal noise is smoothed prior to obtaining derivative estimations.
To smooth the signal noise, the signal is passed through a Gaussian smoothing filter. This will remove image noise of a smaller scale than the signal of interest.
The Gaussian filtering of the image, however, means that the scale parameter tn of the filter is added to the original scale parameter te of the edge to yield the new edge scale

$$t = t_e + t_n \quad (6)$$
Thus, the original edge blur te of the image data can be calculated simply by subtracting the known value of tn from the value t computed by the edge characterisation.
Apart from providing noise suppression, the initial Gaussian filtering of the image signal a is also advantageous as all unblurred step edges adopt the blur parameter tn such that even edges which were unblurred in the original data can then be analysed.
When analysing the signal, the Gaussian smoothing and differentiation may be carried out in a single step.
The reason for this is that convolution and differentiation commute such that the signal can be convolved with the n-th order derivative of the Gaussian filter kernel to obtain the n-th order derivative of the data after smoothing.
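As a minimal sketch of this single-step smoothing and differentiation (assuming SciPy; the helper name gaussian_derivatives is illustrative, not from the patent):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def gaussian_derivatives(signal, tn):
    """Return the first three derivatives of `signal` after smoothing
    with a Gaussian of variance tn, computed in one step by convolving
    with derivative-of-Gaussian kernels."""
    sigma = np.sqrt(tn)  # SciPy expects a standard deviation; t is a variance
    a1 = gaussian_filter1d(signal, sigma, order=1)  # a'
    a2 = gaussian_filter1d(signal, sigma, order=2)  # a''
    a3 = gaussian_filter1d(signal, sigma, order=3)  # a'''
    return a1, a2, a3
```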
Equation 4a above denotes the response of a linear imaging device to a blurred edge. Differentiating a once with respect to s yields

$$a'(s, t) = \Delta h \, (\delta * g)(s, t)$$

where δ(s) denotes the one-dimensional Dirac function. Thus

$$a'(s, t) = \Delta h \, g(s, t) \quad (7)$$

The second derivative a'' is

$$a''(s, t) = \Delta h \, g'(s, t) = -\frac{s}{t} \, \Delta h \, g(s, t) \quad (8)$$

Combining equations 7 and 8, the step height Δh can be eliminated:

$$\frac{a''}{a'} = -\frac{s}{t} \quad (9)$$

Differentiating equation 9 once again yields

$$a''' \, t + a'' \, s = -a' \quad (10)$$

Equations 9 and 10 may be combined into the system of linear equations

$$A \, p = b \quad (11)$$

with

$$A = \begin{pmatrix} a''' & a'' \\ a'' & a' \end{pmatrix}, \qquad b = \begin{pmatrix} -a' \\ 0 \end{pmatrix}$$

and the parameter vector p = (t s)T. Solving equation 11 for s and t gives

$$t = \frac{(a')^2}{(a'')^2 - a' \, a'''} \quad (12)$$

$$s = \frac{-a' \, a''}{(a'')^2 - a' \, a'''} \quad (13)$$

Once s and t are known, equation 7 allows Δh to be calculated, i.e.

$$\Delta h = \frac{a'}{g(s, t)} = a' \, \sqrt{2\pi t} \, e^{s^2 / (2t)} \quad (14)$$

Finally, the offset value h0 can be obtained from equation 5.
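A minimal sketch of equations 12 to 14 at a single signal point (the function name characterise_edge_1d is hypothetical, and the equations are as reconstructed above):

```python
import numpy as np

def characterise_edge_1d(a1, a2, a3, eps=1e-12):
    """Recover (s, t, dh) from the derivatives a1 = a', a2 = a'',
    a3 = a''' of a Gaussian blurred step edge (equations 12 to 14)."""
    denom = a2**2 - a1 * a3       # positive for a valid edge (t-constraint)
    t = a1**2 / (denom + eps)     # equation 12: blur (variance of the kernel)
    s = -a1 * a2 / (denom + eps)  # equation 13: perpendicular edge distance
    dh = a1 * np.sqrt(2 * np.pi * t) * np.exp(s**2 / (2 * t))  # equation 14
    return s, t, dh
```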
As discussed above, an edge in the image is deemed to be located at any point where s=0.
In a discrete data set, it is more appropriate to consider a point an edge point if it lies within a certain distance from the edge. If s is scaled such that the sampling points are a unit distance apart, a reasonable threshold for s is |s| < 0.5. If the edge happens to lie exactly between two neighbouring sampling points, both points would be marked as edge points. If a single edge response is desirable, an edge tracking strategy searching the path of minimal s-values, or binary thinning of the edge image, can be used to eliminate double responses.
In the presence of signal and sampling noise, the constraints on the edge parameter estimates can be relaxed in order to allow a certain amount of noise on the parameter estimates. Noise tolerant versions of the t-constraint and the s-threshold can be formulated as

$$t > -tol_t \qquad \text{and} \qquad |s| < 0.5 + tol_s \quad (15)$$

respectively. The tolerance parameters tol_t and tol_s can be set, for example, to allow for a 10% error on the parameter estimates, i.e. tol_t = 0.1t and tol_s = 0.05, respectively. Note that non-zero tolerance values also help to detect non-Gaussian edges which do not comply with the edge model of equation 5. Moreover, they can improve the connectivity at edge junctions.
If tol_s is set too high, it is likely that multiple edge pixels are marked at a single edge location. In this case an edge tracking algorithm which extracts the path of minimum s values, or binary thinning, can help to solve the problem.
Having found the path of minimal s values along an edge, it is also possible to apply hysteresis thresholding to the s values, as proposed by J. Canny, "A computational approach to edge detection," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-8, no. 6, pp. 679-698, 1986. This is equivalent to having a variable rather than a fixed tol_s along the edge.
The s-image also seems to be the appropriate domain to derive shape descriptions for individual objects in the image. This can be achieved by fitting geometric shape models to the s data, edge tracking or deformable contours. Similarly, the parameter t gives a useful measure of the blurring or defocus of the image and can be used, for example, to estimate the depth of an object in an image, as will be described further below. The t value can also be helpful in distinguishing between object edges and the edges of shadows. As Figure 6 shows, the penumbral blur along a shadow edge is not constant if the object edge creating the shadow is not parallel to the light emitting plane or the plane on which the shadow is cast. The exit windows of many artificial light sources provide examples of occluding edges which are non-parallel to the shadow plane.
If it is assumed that the blur value t of an object point is independent of the viewpoint, t can also assist in the point and edge matching process between several views of the same scene. The t constancy assumption of course requires the t-values at the different viewpoints to be normalised by the optical transfer functions and sensor resolutions of the imaging systems.
Further, if it is assumed that the image brightness remains constant over several frames, the edge height Δh and offset value h0 can be used, just like the t value, for image feature matching between different viewpoints.
For a Gaussian blurred edge, the square root of the blur parameter t equals the distance between the edge points and the inflection points of the edge, since |s| = √t at the inflection points. Therefore one possible way of estimating blur is to locate the inflection points in addition to the edge points.
An estimate for edge blur is provided by the local estimate of the edge parameter t at any point in the edge neighbourhood. Note however that by definition t must be positive, i.e.

$$t > 0 \quad (16)$$

Equation 16 shall be called the t-constraint. It can be used to identify locations where no reliable estimation of the edge parameters s, t and Δh can be made. In terms of equation 11, the t-constraint means

$$\det A < 0 \quad (17)$$

i.e. A must have one positive and one negative eigenvalue. Explicitly the condition may be written as

$$a' \, a''' - (a'')^2 < 0 \quad (18)$$

If Gaussian filtering is applied to the signal prior to estimating the edge parameters, the t-constraint can obviously be applied to the original edge blur parameter te.
Edge detection using constraint 16 will detect all edges regardless of their height, blur or offset. In practice, however, one may only be interested in edges for which the values of Δh, t and h0 lie within a certain range. Such edge selection criteria can help to distinguish between relevant and irrelevant edges.
Many common edge detection methods use the first signal derivative a' for edge selection. They compare a' at every edge point with pre-defined global thresholds as part of a simple or hysteresis thresholding process.
However, a' is a compound measure which depends on Δh, t and s. Thus a' is not necessarily a measure of the edge height Δh. The main advantage of considering a' rather than Δh lies in the fact that the former can be estimated much more reliably in the presence of noise than the latter.
Although the methods described above assume that the signals being analysed are continuous and that convolution is carried out in the continuous domain, it will often be more practical for signals to be sampled so that a discrete convolution can be carried out. In order for the discrete convolution to be an acceptable replacement for its continuous counterpart, the sampling rate must be sufficiently high. Assuming the constant sampling interval equals the unit distance, a high sampling rate is equivalent to large values of the blur parameters te and tn. For ideal results, it is believed that tn should be at least 2.5 times as large as te.
In the continuous domain, the Gaussian derivative kernels g(k)(s), k > 0, enclose a zero area, i.e.

$$\int_{-\infty}^{\infty} g^{(k)}(s) \, ds = 0 \quad (20)$$

Sampled Gaussian derivative kernels gs(k)(n), k > 0, however, may not satisfy the zero area property of equation 20, i.e.

$$b_g^{(k)} = \sum_{n} g_s^{(k)}(n) \neq 0 \quad (21)$$

Thus sampling may also introduce an error associated with a non-normalised kernel gs(k). A non-zero bg(k) may significantly alter the result of a convolution operation with a sampled signal a(n) containing a large signal offset. In order to illustrate this, let a(n) be divided into its d.c. and a.c. components

$$a(n) = a_0 + a_v(n) \quad (22)$$

Convolution with gs(k) yields

$$(a * g_s^{(k)})(n) = a_0 \, b_g^{(k)} + (a_v * g_s^{(k)})(n) \quad (23)$$

Thus for a0 ≠ 0 and bg(k) ≠ 0, a bias term a0 bg(k) is effectively being added to avs(k).

If the bias is of the same order of magnitude as avs(k), it can severely affect the result of some image analysis algorithms such as edge detection. Assuming edges are detected by locating zero crossings of the second derivative, the bias will cause a shift of the detected edge. Even worse, if the term a0 bg(k) is larger in magnitude than avs(k), no edge may be detected at all. In a similar manner the bias a0 bg(k) may cause estimates of the distance magnitude |s| to become minimal close to the edge, yet without going below the critical value of 0.5.
There are two ways of minimising the effect of the bias term. The first is to ensure that a0 has as small a magnitude as possible. A second approach is to normalise gs(k) so that bg(k) = 0. This can be achieved by subtracting an appropriate portion of bg(k) from every sample. The individual portions can be made proportional to the absolute values of the individual samples, for instance, i.e.

$$\tilde{g}_s^{(k)}(n) = g_s^{(k)}(n) - b_g^{(k)} \, \frac{|g_s^{(k)}(n)|}{\sum_m |g_s^{(k)}(m)|} \quad (24)$$
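A two-line sketch of this normalisation rule (equation 24 as reconstructed above; the helper name is hypothetical):

```python
import numpy as np

def normalise_kernel(g):
    """Remove the residual area b = sum(g) of a sampled Gaussian derivative
    kernel by distributing -b over the samples in proportion to their
    absolute values (equation 24)."""
    b = g.sum()
    return g - b * np.abs(g) / np.abs(g).sum()
```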
Averaging the edge parameter estimates over several signal points can help to obtain more accurate results. Of all the edge parameters, averaging is easiest for t and Δh, because these two parameters are supposed to be constant in an edge neighbourhood.
The most straightforward averaging approach is to determine the median over a small signal region P.
Median filtering has the main advantages of simplicity and robustness in the case of non-uniform noise.
Alternatively, an optimal solution for t in the Least Squares (LS) sense can be obtained by rewriting equation 12 as

$$A_t \, t = B_t \quad (25)$$

with

$$A_t = (a'')^2 - a' \, a''', \qquad B_t = (a')^2$$

Assuming t is only estimated at a finite number of discrete points, the aim is to minimise the error

$$E = \sum_{P} \left( A_t \, t - B_t \right)^2$$

which yields the best estimate t as

$$\hat{t} = \frac{\sum_{P} A_t \, B_t}{\sum_{P} A_t^2} \quad (26)$$
Note however that the coefficients At and Bt are in general biased, not independent and of non-uniform variance over the region P. These undesirable properties may impair the efficiency of the LS solution of equation 26.
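For illustration, the LS estimate of equation 26 over a region P is nearly a one-liner (assuming the reconstructed coefficients At and Bt above; the function name is hypothetical):

```python
import numpy as np

def ls_blur_estimate(a1, a2, a3):
    """Least-squares estimate of t over a region, from arrays of the
    first three derivatives (equation 26)."""
    At = a2**2 - a1 * a3  # coefficient A_t at every point
    Bt = a1**2            # coefficient B_t at every point
    return (At * Bt).sum() / (At**2).sum()
```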
In the same way as has been shown for t, a LS solution for Δh can be derived from equation 14. The final result is of the same form as equation 26,

$$\hat{\Delta h} = \frac{\sum_{P} A_{\Delta h} \, B_{\Delta h}}{\sum_{P} A_{\Delta h}^2} \quad (27)$$

where A_Δh and B_Δh are the coefficients obtained by rewriting equation 14 in the linear form A_Δh Δh = B_Δh. What has been said about the statistical properties of the coefficients At and Bt applies in a similar manner to A_Δh and B_Δh, too.
The methods described above relate to the analysis of one-dimensional signals. In order to extract features from two-dimensional images, the edge characterisation of one-dimensional signals presented so far needs to be extended to two dimensions.
In general, a two-dimensional (2D) Gaussian blurred step edge a(x, y) is the result of the 2D convolution

$$a(x, y) = h(x, y, \Delta h, h_0) * g(x, y, t)$$

of an ideal 2D step edge h(x, y, Δh, h0)

$$h(x, y, \Delta h, h_0) = h_0 + \Delta h \, u_c(u x + v y) \quad (28)$$

with the 2D Gaussian kernel g(x, y, t)

$$g(x, y, t) = \frac{1}{2\pi t} \exp\!\left(-\frac{x^2 + y^2}{2t}\right)$$

The coefficients u and v, with u² + v² = 1, determine the edge direction.

Due to the separability of the 2D Gaussian kernel, the signal along any scanline across a 2D Gaussian blurred edge is a one-dimensional Gaussian regardless of the direction of the scanline. Thus the one-dimensional edge characterisation method can be used to determine the directional edge parameters sx, sy and tx, ty along the orthogonal x and y coordinate axes. The edge blur parameter t and the distance s perpendicular to the edge can be calculated as

$$\frac{1}{t} = \frac{1}{t_x} + \frac{1}{t_y} \quad (29)$$

$$\frac{1}{s^2} = \frac{1}{s_x^2} + \frac{1}{s_y^2} \quad (30)$$
Unfortunately this approach runs into difficulties if the edge happens to be aligned with the x or y axis. In this case all the directional derivatives in the corresponding direction are close to zero. Thus in the presence of noise or numerical inaccuracies, the directional parameter estimates along the axis are very unreliable because of the non-linear nature of equations 12 and 13.
Directional singularities can be avoided if the one-dimensional edge characterisation method is applied perpendicular to the edge. Since for the edge model of equation 28 the direction u perpendicular to the edge coincides with the maximal gradient direction, the first three derivatives of a(x, y) perpendicular to the edge are given by

$$a_u = \sqrt{a_x^2 + a_y^2} \quad (31)$$

$$a_{uu} = \frac{a_x^2 \, a_{xx} + 2 a_x a_y \, a_{xy} + a_y^2 \, a_{yy}}{a_x^2 + a_y^2} \quad (32)$$

$$a_{uuu} = \frac{a_x^3 \, a_{xxx} + 3 a_x^2 a_y \, a_{xxy} + 3 a_x a_y^2 \, a_{xyy} + a_y^3 \, a_{yyy}}{(a_x^2 + a_y^2)^{3/2}} \quad (33)$$

where subscripts denote partial derivatives with respect to the subscripted variable.
If the image a(x, y) is only available at discrete points in space and the sampling points are arranged in a square grid, the maximal perpendicular distance of a sampling point from the true edge location depends on the orientation of the edge. Therefore the distance threshold for edge detection needs to be adjusted accordingly.
Without loss of generality it can again be assumed that the sampling intervals along the x and y axes define the unit distance. If the edge is aligned with either the rows or the columns of the grid, the maximal distance is evidently the same as in the one-dimensional case, i.e. the threshold is again |s| < 0.5. For a diagonal edge, however, the maximal distance becomes 1/√2. For an edge in a general direction, the threshold is given by

$$|s| < \frac{|u| + |v|}{2} \quad (34)$$
The one-dimensional noise averaging methods described above can easily be extended to two dimensions.
Additionally it is possible in 2D to carry out a geometric averaging of the t parameter for long straight edges. As explained above, the square root of t can be measured as the distance between the edge points and the inflection points. While the edge location can be determined accurately by fitting a line through the points that fulfil condition 34, the position of the inflection points can be recovered by fitting two lines through the points satisfying the analogous inflection point condition

$$\left| \, |s| - \sqrt{t} \, \right| < \frac{|u| + |v|}{2} \quad (35)$$

Evidently there is one line of inflection points on either side of the edge. In order to cope with the noise on the parameter estimates, a robust line fitting method such as the Hough transform is required.
Alternatively a 2D signal offers the opportunity to average the parameter values along a contour of constant s. Since the parameter estimates are typically more reliable close to the edge, it is preferable to carry out the averaging over the detected edge pixels. For simple shapes such as straight lines, again the Hough transform can be used to group edge pixels into contours.
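As a sketch of this grouping step (assuming scikit-image's Hough transform; the function name and the one-pixel line tolerance are illustrative choices, not from the patent):

```python
import numpy as np
from skimage.transform import hough_line, hough_line_peaks

def average_t_along_edge(edge_mask, t_map):
    """Group edge pixels into the dominant straight contour with the
    Hough transform and return the median t along that line."""
    h, angles, dists = hough_line(edge_mask)
    _, peak_angles, peak_dists = hough_line_peaks(h, angles, dists, num_peaks=1)
    y, x = np.nonzero(edge_mask)
    # Perpendicular distance of every edge pixel from the dominant line.
    d = x * np.cos(peak_angles[0]) + y * np.sin(peak_angles[0]) - peak_dists[0]
    on_line = np.abs(d) < 1.0
    return np.median(t_map[y[on_line], x[on_line]])
```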
The following describes one possible implementation of an edge detector based on Gaussian edge characterisation. The sole inputs of the algorithm are the image of an edge a(x, y, te) and the variance of the Gaussian noise suppression filter tn. Since the degree of smoothing in the x and y directions may vary for some of the intermediate images, the notation a(x, y, tx, ty) is introduced for an image a(x, y) which is smoothed with tx and ty in the x and y directions, respectively.
1. Smooth the image a(x, y, te) in one dimension with a 1D Gaussian filter of variance tn:
(a) Compute the 1D Gaussian kernel g1 of variance tn.
(b) Convolve the image a(x, y, te) with g1 along the x-axis in order to obtain the image a(x, y, te + tn, te), which is smoothed along the x-axis.
(c) Convolve the image a(x, y, te) with g1 along the y-axis in order to obtain the image a(x, y, te, te + tn), which is smoothed along the y-axis.
2. Smooth the images obtained in step 1 in the second dimension with the 1D Gaussian filter of variance tn and calculate the partial spatial derivatives along the x and y axes:
(a) Compute the first three spatial derivatives g'1, g''1, g'''1 of g1.
(b) Convolve a(x, y, te, te + tn) with g'1, g''1, g'''1 along the x-axis in order to obtain the first three spatial derivatives ax(x, y, te + tn), axx(x, y, te + tn), axxx(x, y, te + tn) along the x-axis of the 2D smoothed image.
(c) Convolve a(x, y, te + tn, te) with g'1, g''1, g'''1 along the y-axis in order to obtain the first three spatial derivatives ay(x, y, te + tn), ayy(x, y, te + tn), ayyy(x, y, te + tn) along the y-axis of the 2D smoothed image.
3. Compute the first three spatial derivatives au(x, y, te + tn), auu(x, y, te + tn), auuu(x, y, te + tn) in the gradient direction using equations 31 to 33.
4. Consider in the further computations only pixels whose gradient value is above a certain gradient threshold Ta, in order to eliminate pixels dominated by noise.
5. Compute local estimates of the total blur parameter t = te + tn and the perpendicular edge distance s using relationships 12 and 13, respectively.
6. Eliminate from further consideration all pixels whose original blurring value te = t - tn is smaller than a certain threshold tolte, which is ideally 0.
7. Mark all pixels whose s value meets condition 34 as edge pixels. All other pixels are non-edge pixels.
For the image of figure 1 this algorithm produces the edge image of figure 3.
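A compact Python sketch of steps 1 to 7 (illustrative only: it assumes SciPy's separable Gaussian filtering, folds steps 1 and 2 into derivative-of-Gaussian convolutions, and uses hypothetical threshold names):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detect_edges(img, tn, grad_frac=0.1, tol_s=0.05, eps=1e-12):
    """Gaussian edge characterisation of a 2D image; tn is the variance
    of the Gaussian noise suppression filter."""
    sigma = np.sqrt(tn)
    # Steps 1-2: smoothed partial derivatives (order is per axis: (y, x)).
    d = {k: gaussian_filter(img.astype(float), sigma, order=o)
         for k, o in [("x", (0, 1)), ("xx", (0, 2)), ("xxx", (0, 3)),
                      ("y", (1, 0)), ("yy", (2, 0)), ("yyy", (3, 0)),
                      ("xy", (1, 1)), ("xxy", (1, 2)), ("xyy", (2, 1))]}
    # Step 3: derivatives in the gradient direction (equations 31 to 33).
    g2 = d["x"]**2 + d["y"]**2
    au = np.sqrt(g2)
    auu = (d["x"]**2*d["xx"] + 2*d["x"]*d["y"]*d["xy"] + d["y"]**2*d["yy"]) / (g2 + eps)
    auuu = (d["x"]**3*d["xxx"] + 3*d["x"]**2*d["y"]*d["xxy"]
            + 3*d["x"]*d["y"]**2*d["xyy"] + d["y"]**3*d["yyy"]) / (g2**1.5 + eps)
    # Step 4: discard pixels dominated by noise.
    mask = au > grad_frac * au.max()
    # Step 5: local estimates of t and s (equations 12 and 13).
    denom = auu**2 - au * auuu
    t = au**2 / (denom + eps)
    s = -au * auu / (denom + eps)
    # Step 6: the original blur te = t - tn must not be negative.
    mask &= (t - tn) >= 0
    # Step 7: mark pixels within the s-threshold as edge pixels.
    return mask & (np.abs(s) < 0.5 + tol_s)
```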
In the following example, three separate 2D images were analysed using the method described above. These images were a simulated Gaussian blurred step edge, a photo of a woman and a real image of a railway overhead contact wire.
Figure 7 shows a simulated Gaussian blurred step edge with additive Gaussian white noise. The results in figure 8 have been obtained by Gaussian edge characterisation. Figure 8a shows the detected edge pixels after applying the t-constraint and the 2D s-thresholding condition. Figures 8b and 8c employ additional simple global gradient thresholding and Δh-thresholding, respectively, in order to eliminate edge responses due to noise. Figures 8d and 8e depict the edge pixels together with the inversion pixels detected by condition 35 after simple gradient and Δh-thresholding, respectively. The detection of the inversion pixels turns out to be much more sensitive to noise than the detection of the edge pixels.
Figure 9 is a plot of the t-estimates along the edge pixels of figure 8c. The estimates fluctuate around the true value t = te + tn = 9. In order to obtain an averaged t-estimate for the whole edge, a modified Hough transform was applied to the compound image 8e, using weighted median filtering over the t values of the pixels associated with the individual bins of the Hough transform. The averaged t estimate obtained for the left inversion line, the edge line and the right inversion line are 8.64, 9.37 and 9.05, respectively.
Since the Hough transform also yields the line equations of the edge and inversion lines, a second way of recovering the edge blur parameter t is to calculate the distances between the edge and the inversion lines.
With a distance resolution of 0.1 pixel, the corresponding values obtained for the left and right inversion line were 3.0 and 3.1, respectively. This is consistent with the expectation that the distance should equal √t. The main disadvantage of this second method is the fact that a very high distance resolution in the parameter space of the Hough transform is required in order to obtain sufficiently precise estimates of the edge blur.
The image of the woman was chosen as a general real world test image for Gaussian edge characterisation because it contains straight and partially blurred edges in the background while the foreground shows curved image details of various sizes (Figure 10).
Figure 11 presents the result of edge detection using Gaussian edge characterisation and enforcing the t and s constraints with tolerances of tol_t = 0.1t and tol_s = 0.05, respectively. Moreover, a gradient threshold of 10% of the maximal image gradient has been applied. Even though the edge detection performs well on large image structure, it obviously runs into difficulties when dealing with image details which are of a similar size to the smoothing kernel. This may be expressed in scale space terms by saying that the image structure must be of a larger scale than the filter scale tn in order for Gaussian edge characterisation to work well.
Equivalently, the difficulties with small image details may be blamed on an insufficient sampling rate.
The edge detection performance for small image structure can be improved by relaxing the Gaussian edge model and increasing tolt. Figure 12 uses the same parameter settings as figure 11 except that this time tolt = 2t.
If Δh-thresholding rather than gradient-thresholding is applied, the maximal t tolerance is limited to tol_t = t, because t must be greater than zero in order to reconstruct Δh from equation 14. Therefore the minimum size of the image structure detected by Δh-thresholding is slightly larger than for gradient-thresholding with tol_t = 2t, as can be seen in figure 13. Nevertheless one can notice the attempt of Δh-thresholding to recover more blurred edges, even though the larger variance of the Δh estimates compared to the variance of the gradient estimates takes its toll.
Finally, the Gaussian edge characterisation was applied to images of a railway overhead contact wire (CW) as produced by an Overhead Line Geometry Measurement System (OLGMS). To this end a back-illuminated image of a CW was taken on a test rig for the OLGMS. Afterwards the image was corrected for fixed pattern noise (FPN), linearised and median filtered. A section of the CW image is shown in figure 14.
Figure 15 depicts the intensity profile along a horizontal scanline half way down figure 14. In order to reduce the signal noise, Gaussian smoothing with a two-dimensional kernel of width tn = 6 pixels² is carried out. The gradient profile along the scanline of figure 15 after the Gaussian smoothing is shown in figure 16.
The extracted edge pixels and inversion pixels using a simple gradient-threshold are shown in figure 17. Again using the modified Hough transform, the weighted median of the t values of the pixels that compose the left edge is calculated as 9.05 pixels². Using a distance resolution of 0.1 pixels, the distances of the left and the right inversion line from the edge are estimated as 3.0 and 3.1 pixels. Thus the positions of the inversion lines are consistent with the estimated t value. The estimated t value for the right edge is 9.44 pixels² and the distances of the left and the right inversion lines are both 3.0 pixels. The accuracy of these results can be improved if a longer segment of CW is considered.
The results are also likely to benefit from an increased distance resolution of the Hough transform.
Nevertheless the results allow the conclusion that the original blurring of the CW edges due to defocus was approximately te = 3 pixels².
Various alternative methods of analysing a signal to obtain the characteristics thereof according to the preferred embodiments of the invention are set out below.
Instead of using spatial image derivatives, which can be calculated only approximately for discrete signals, it seems sensible to consider directly the differences between neighbouring samples. At positions s + 1 and s, equation 9 reads

$$a''(s+1) \, t = -(s+1) \, a'(s+1), \qquad a''(s) \, t = -s \, a'(s)$$

which again can be written as a system of linear equations

$$A \, p = b$$

where

$$A = \begin{pmatrix} a''(s+1) & a'(s+1) \\ a''(s) & a'(s) \end{pmatrix}, \qquad b = \begin{pmatrix} -a'(s+1) \\ 0 \end{pmatrix}$$

and p is the parameter vector (t s)T. Solving for s and t yields

$$t = \frac{a'(s) \, a'(s+1)}{a''(s) \, a'(s+1) - a'(s) \, a''(s+1)} \quad (36)$$

$$s = \frac{-a''(s) \, a'(s+1)}{a''(s) \, a'(s+1) - a'(s) \, a''(s+1)} \quad (37)$$
Equations 36 and 37 need only two rather than three spatial derivatives. Thus they are advantageous for one-dimensional discrete signals. However, for 2D discrete signals the values of the derivatives at position s + 1 can in general only be obtained by interpolation, which again introduces some approximation error.
The methods presented so far have only analysed changes of a with respect to s. A source of information which seems to have been ignored is the change of a with respect to the scale parameter t. For instance, let a be smoothed with two different smoothing scales t1 and t2. Labelling the corresponding signal derivatives with subscripts 1 and 2 respectively, equation 9 may be written as

$$a_1'' \, (t_e + t_1) = -s \, a_1', \qquad a_2'' \, (t_e + t_2) = -s \, a_2'$$

which can be combined into a system of equations

$$A \, p = b$$

where

$$A = \begin{pmatrix} a_1'' & a_1' \\ a_2'' & a_2' \end{pmatrix}, \qquad b = \begin{pmatrix} -t_1 \, a_1'' \\ -t_2 \, a_2'' \end{pmatrix}$$

and p is the parameter vector (te s)T. Hence

$$t_e = \frac{t_2 \, a_1' \, a_2'' - t_1 \, a_1'' \, a_2'}{a_1'' \, a_2' - a_1' \, a_2''}, \qquad s = \frac{(t_1 - t_2) \, a_1'' \, a_2''}{a_1'' \, a_2' - a_1' \, a_2''}$$
Equations 36 and 37 seem to be superior to equations 12 and 13, respectively, because they employ only two spatial derivatives. Nevertheless, using scale differences and differentials actually decreases the robustness of the results. This can be seen by differentiating equation 9 with respect to t:

$$\frac{\partial a''}{\partial t} \, t + a'' = -s \, \frac{\partial a'}{\partial t}$$

Exchanging the order of differentiation in the first term and taking into account the diffusion equation of the Gaussian scale space

$$\frac{\partial a}{\partial t} = \frac{1}{2} \, \frac{\partial^2 a}{\partial s^2}$$

one obtains

$$\frac{1}{2} \, a'''' \, t + a'' = -\frac{s}{2} \, a'''$$

Thus differentiating equation 9 with respect to t is actually equivalent to differentiating it twice with respect to s. Yet higher signal derivatives obviously yield less robust results in the presence of noise.
<Desc/Clms Page number 32>
The embodiments described above relate to the analysis of a linear signal. The following considers the analysis of logarithmic response signals.
It is also possible to derive the edge parameters directly from the response b of a logarithmic image sensor without prior image linearisation. The one-dimensional logarithmic sensor signal b is related to the irradiance function i by

$$b = c_1 \ln i + c_2 \quad (38)$$

In analogy to equation 5, the model of the Gaussian blurred edge in the irradiance function i may be written as

$$i(s, t) = h_0 + \Delta h \, \Phi(s, t)$$

Differentiating equation 38 twice gives

$$b' = c_1 \, \frac{i'}{i}, \qquad b'' = c_1 \, \frac{i'' \, i - (i')^2}{i^2}$$

Eliminating i yields

$$\frac{b''}{b'} = \frac{i''}{i'} - \frac{i'}{i}$$

Taking into account that, in analogy to equation 9, i''/i' = -s/t, one can obtain the equation

$$\left( b'' + \frac{(b')^2}{c_1} \right) t + b' \, s = 0 \quad (39)$$

Differentiating equation 39 once more gives

$$\left( b''' + \frac{2 \, b' \, b''}{c_1} \right) t + b'' \, s = -b' \quad (39a)$$

Equations 39 and 39a can again be combined into a system

$$A \, p = b \quad (40)$$

with

$$A = \begin{pmatrix} b''' + 2 b' b'' / c_1 & b'' \\ b'' + (b')^2 / c_1 & b' \end{pmatrix}, \qquad b = \begin{pmatrix} -b' \\ 0 \end{pmatrix}$$

and the parameter vector p = (t s)T. From equation 40, s and t can be determined as

$$t = \frac{(b')^2}{(b'')^2 - b' \, b''' - (b')^2 \, b'' / c_1} \quad (41)$$

$$s = \frac{-b' \, \left( b'' + (b')^2 / c_1 \right)}{(b'')^2 - b' \, b''' - (b')^2 \, b'' / c_1} \quad (42)$$
Note that the logarithmic case of equations 41 and 42 approaches the linear case of equations 12 and 13 as c1 grows towards infinity.
Equations 41 and 42 show that it is theoretically possible to calculate the edge parameters s and t directly from a logarithmic sensor response b if the camera parameter c1 is known. In practice, however, image noise makes such an approach difficult. Gaussian filtering cannot be applied to the signal b any more because the semi-group property of the Gaussian scale space is violated by the logarithmic sensor response.
Hence b needs to be linearised prior to Gaussian noise filtering.
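A sketch of equations 41 and 42 (as reconstructed above, assuming the camera parameter c1 is known; the function name is hypothetical):

```python
def characterise_edge_log(b1, b2, b3, c1, eps=1e-12):
    """Recover (s, t) directly from the derivatives b1 = b', b2 = b'',
    b3 = b''' of a logarithmic sensor response (equations 41 and 42)."""
    denom = b2**2 - b1 * b3 - b1**2 * b2 / c1
    t = b1**2 / (denom + eps)
    s = -b1 * (b2 + b1**2 / c1) / (denom + eps)
    return s, t
```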
The methods of solving a data set to find the characteristics thereof can also be extended to functions other than step functions.
Re-examining the basic equations of Gaussian edge characterisation, equation (4a) can actually be interpreted as filtering the blur kernel g(s, t) with the step edge h(s, Δh, h0). In this sense the first derivative calculation of equation (4a) can be regarded as inverse filtering, eliminating the effect of the integration of g due to the convolution with h. Thus the result a' is a scaled version of the blur kernel g. The recovery of the edge parameters s and t is actually based on special properties of the blur kernel.
Thus, Gaussian edge characterisation can be extended to other signals as long as the inverse filtering operation is adjusted accordingly. If the inverse filtering operations are confined to spatial differentiations, the signals that can be analysed are all indefinite integrals of the Dirac impulse δ. The ideal step edge is the special case of the first order indefinite integral of δ.
More formally, the p-th indefinite integral of the Dirac impulse δ at x0, scaled by c0, shall be referred to as an event hp of order p, amplitude c0 and position x0. This may be expressed as

$$h_p(x) = c_0 \, \frac{(x - x_0)^{p-1}}{(p-1)!} \, u_c(x - x_0) + \sum_{k=1}^{p} c_k \, \frac{(x - x_0)^{p-k}}{(p-k)!} \quad (43)$$

Introducing the event related spatial variable s = x - x0, equation 43 can be simplified to

$$h_p(s) = c_0 \, \frac{s^{p-1}}{(p-1)!} \, u_c(s) + \sum_{k=1}^{p} c_k \, \frac{s^{p-k}}{(p-k)!} \quad (44)$$

For p = 0, 1, 2, hp is explicitly

$$h_0(s) = c_0 \, \delta(s), \qquad h_1(s) = c_0 \, u_c(s) + c_1, \qquad h_2(s) = c_0 \, s \, u_c(s) + c_1 \, s + c_2 \quad (45)$$

The first order event h1 corresponds to the step edge of equation (2). Second order events h2 comprise a variety of signal shapes depending on the values of the parameters c0, c1 and c2. For c0 = -2c1 the model of a symmetric roof edge is obtained.
Since the inverse filtering operation for a Gaussian blurred p-th order event ap is the p-th spatial derivative ap(p), the parameters s, t and c0 of hp can be stated directly using the results of equations (12) to (14), with ap(p), ap(p+1) and ap(p+2) taking the roles of a', a'' and a''', i.e.

$$t = \frac{(a_p^{(p)})^2}{(a_p^{(p+1)})^2 - a_p^{(p)} \, a_p^{(p+2)}}, \qquad s = \frac{-a_p^{(p)} \, a_p^{(p+1)}}{(a_p^{(p+1)})^2 - a_p^{(p)} \, a_p^{(p+2)}}, \qquad c_0 = \frac{a_p^{(p)}}{g(s, t)}$$

The coefficients c1 to cp may be computed from lower order derivatives of a.
Recovering the parameters of high order events is obviously difficult in the presence of signal noise because high order spatial derivatives need to be estimated. Nevertheless low order event characterisation does work successfully in a variety of situations. Thus, it is clear from the above that the method of the invention is applicable to the detection of various events in an image, of which steps and impulses are just two examples.
Where the width of the noise smoothing filter applied to a data set is greater than the distance between two neighbouring events, Gaussian event characterisation encounters difficulties due to event interference. To ease this problem, the method of Gaussian edge characterisation described above can be extended to double edges or lines.
By analogy with equation (1), an ideal step line (or double edge) as shown in Figure 18 between positions x0 and x1 shall be defined by

$$h(x, x_0, x_1, \Delta h_0, \Delta h_1, h_0) = \Delta h_0 \, u_c(x - x_0) + \Delta h_1 \, u_c(x - x_1) + h_0 \quad (46)$$

where Δh0 and Δh1 are the edge heights at x0 and x1, respectively. The parameters h0, h1 and h2 are the constant intensity values for x < x0, x0 < x < x1 and x > x1, respectively (figure 18). Blurring h with a Gaussian kernel yields the image a

$$a(x, x_0, x_1, \Delta h_0, \Delta h_1, h_0, t) = \Delta h_0 \, \Phi(x - x_0, t) + \Delta h_1 \, \Phi(x - x_1, t) + h_0 \quad (47)$$

The analysis can be simplified by introducing the edge related spatial variables s0 and s1 (figure 18)

$$s_0 = x - x_0, \qquad s_1 = x - x_1$$

Thus equation 47 becomes

$$a = \Delta h_0 \, \Phi(s_0, t) + \Delta h_1 \, \Phi(s_1, t) + h_0 \quad (48)$$
Comparing equation 5 and equation 48, it is evident that the line model has two additional parameters related to the position and height of the second edge.
Differentiating the signal a twice with respect to the spatial variable x yields

$$a' = \Delta h_0 \, g(s_0, t) + \Delta h_1 \, g(s_1, t) \quad (49)$$

and

$$a'' = -\frac{s_0}{t} \, \Delta h_0 \, g(s_0, t) - \frac{s_1}{t} \, \Delta h_1 \, g(s_1, t) \quad (50)$$

Combining equations 49 and 50 gives

$$a'' = -\frac{s_0}{t} \, a' + \frac{w}{t} \, \Delta h_1 \, g(s_1, t) \quad (51)$$

where w denotes the line width

$$w = x_1 - x_0 = s_0 - s_1$$

For w = 0, equation 51 reduces to equation 9 for a Gaussian step edge. Differentiating equation 51 once more yields

$$a''' = -\frac{1}{t} \, a' - \frac{s_0}{t} \, a'' - \frac{w \, s_1}{t^2} \, \Delta h_1 \, g(s_1, t)$$

which can be combined with equation 51 to

$$a''' = -\frac{1}{t} \, a' - \frac{s_0 + s_1}{t} \, a'' - \frac{s_0 \, s_1}{t^2} \, a' \quad (52)$$

At this point it is advantageous to introduce two new variables

$$p = s_0 + s_1, \qquad q = s_0 \, s_1$$

Note that p is twice the distance from the centre point of the line. The variable q is negative inside the line, positive outside the line and has a minimum at the centre of the line.

With these definitions equation 52 may be expressed as

$$t^2 \, a^{(3)} + p \, t \, a^{(2)} + (q + t) \, a^{(1)} = 0 \quad (53)$$

where the k-th spatial derivative is denoted by the superscript k in round brackets. Unfortunately, equation 53 contains non-linear terms in the unknowns t and p. In order to obtain an equation in one variable only, two more differentiations are needed, i.e.

$$t^2 \, a^{(4)} + p \, t \, a^{(3)} + (q + 3t) \, a^{(2)} + p \, a^{(1)} = 0 \quad (54)$$

$$t^2 \, a^{(5)} + p \, t \, a^{(4)} + (q + 5t) \, a^{(3)} + 2p \, a^{(2)} + 2 \, a^{(1)} = 0 \quad (55)$$

Combining equations 53 to 55 results in a third order polynomial in t only. Yet it is difficult in general to identify which of the roots of the polynomial is the actual solution. For the special case of a step edge (w = 0), the three roots of the polynomial turn out to coincide.
With six image derivatives, all line parameters can easily be calculated uniquely. Yet, the usefulness of such a solution is very limited for real signals which contain signal noise.
The situation is quite different if some of the line parameters are actually known a priori. In the case of zero initial line blur, t actually equals the smoothing filter variance. With t known, equations 53 and 54 become linear and all edge parameters can be estimated with four spatial derivatives. For example, for an OLGMS, signal blurring due to defocus may be small. Hence, if the width of the contact wire (CW) is small, using the line characterisation of this section with the assumption te = 0 may yield better results than simple edge characterisation.
If in addition to t, the line width w is given too, the line parameters can be recovered from three image derivatives. This is the same number required for edge characterisation. In the case of CW images, w depends obviously on the distance of the CW from the camera and thus no precise value can be assigned to w. However for applications other than an OLGMS, both the initial line blur and the line width may be given.
One possible solution to the difficulties of applying Gaussian edge characterisation, which assumes continuous signals, to spatially discrete signals is discussed above. However it may be preferable to use a discrete equivalent to Gaussian edge characterisation in the discrete domain thereby completely avoiding the issue of sampling.
In the following explanation signals in the spatial domain are referred to by lower case letters while their z-transforms are denoted by the corresponding capital letters. Moreover, differentiation with respect to the frequency variable z is indicated by primes. The backwards and central difference operations with respect to discrete spatial variables are symbolised by ∇− and ∇, respectively.
T. Lindeberg, Scale-Space Theory in Computer Vision, Dordrecht: Kluwer Academic Publishers, 1994, has introduced the normalised modified Bessel function b(i, t) of integer order i as the discrete equivalent of the Gaussian filter kernel g(x, t) for the one-dimensional (1D) linear scale space. The parameters i and x denote the spatial variables of the 1D signals in the discrete and continuous domain, respectively. The parameter t is in either case the variance of the blurring kernel. The z-transform B(z, t) of b(i, t) is given by

$$B(z, t) = \exp\!\left( \frac{t}{2} \, (z + z^{-1} - 2) \right) \quad (56)$$

Differentiating equation 56 with respect to z yields the following recurrence relationship for the modified Bessel function

$$b(i - 1, t) - b(i + 1, t) = \frac{2i}{t} \, b(i, t) \quad (57)$$
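For reference, the kernel b(i, t) equals e^(-t) I_i(t), which SciPy exposes directly as the exponentially scaled modified Bessel function (a sketch; the helper name is hypothetical):

```python
import numpy as np
from scipy.special import ive  # ive(n, t) = iv(n, t) * exp(-t) for t > 0

def discrete_gaussian_kernel(t, radius):
    """Sample b(i, t) = exp(-t) * I_i(t) for i = -radius .. radius."""
    i = np.arange(-radius, radius + 1)
    return ive(np.abs(i), t)
```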
Let h(i) be an un-blurred discrete signal and a(i) its blurred counterpart. According to discrete scale space theory, blurring h with b is equivalent to convolving the two functions. Thus in the z-domain the following relationship holds

$$A(z, t) = B(z, t) \, H(z) \quad (58)$$

By analogy with the continuous domain, a zero order event h0 shall be defined by a discrete Dirac impulse of amplitude c0, i.e.

$$h_0(i) = c_0 \, \delta(i)$$

where i is the event related discrete spatial variable. The z-transform of h0 is obviously

$$H_0(z) = c_0$$
Thus the z-transform A0 of the blurred zero order event is given by

$$A_0(z, t) = c_0 \, B(z, t) \quad (59)$$

Higher order discrete events hp(i, c0) can be defined as any function with a rational z-transform, i.e.

$$H_p(z) = \frac{P(z)}{Q(z)} \, c_0$$

where P and Q are two polynomials in z. From equation 58 it follows that the blurred higher order event Ap may be written as

$$A_p(z, t) = \frac{P(z)}{Q(z)} \, c_0 \, B(z, t)$$

This means that the blurred zero order event A0(z, t) can always be obtained from Ap(z, t) by inverse filtering, i.e.

$$A_0(z, t) = \frac{Q(z)}{P(z)} \, A_p(z, t)$$
The simplest example of a higher order event is the discrete step edge, with the z-transform

$$H_1(z) = \frac{c_0}{1 - z^{-1}}$$

Therefore the initial inverse filtering operation required for step edge characterisation is the backwards difference operation ∇−.
Once the appropriate event-dependent inverse filtering step has been carried out, the signal equals a blurred zero order event a0. Differentiating equation 59 with respect to z and taking into account the recurrence relationship 57, one obtains

$$A_0'(z, t) = \frac{t}{2} \, (1 - z^{-2}) \, A_0(z, t) \quad (1d)$$

whose inverse z-transform is

$$i \, a_0(i) = \frac{t}{2} \, \left( a_0(i - 1) - a_0(i + 1) \right)$$

In order to simplify the notation, a function d shall be defined

$$d(i) = \frac{a_0(i - 1) - a_0(i + 1)}{2}$$

Hence

$$\frac{d(i)}{a_0(i)} = \frac{i}{t} \quad (2d)$$

The backwards difference of equation 2d yields

$$\nabla_- \left( \frac{d(i)}{a_0(i)} \right) = \frac{1}{t} \quad (3d)$$

Inserting the resulting value of t back into equation 2d gives i. Once i and t are known, the inverse of equation 59 allows c0 to be calculated, i.e.

$$c_0 = \frac{a_0(i)}{b(i, t)}$$
There is a similarity between the discrete Bessel event characterisation of this section and the continuous Gaussian edge characterisation. By recognising that edge signals can be transformed into zero order events by simple differentiation in the continuous domain and differencing in the discrete domain, equation 2d can be considered as the discrete equivalent to equation 9. Moreover, in both the discrete and the continuous domain, three spatial differencing operations are required for edge characterisation.
In an alternative embodiment of the invention, a system for characterising a non-linear imaging system response using a test image containing blurred step edges is provided. Signal blurring is achieved by signal defocussing.
A non-linear imaging sensor converts a spatial input signal i into an output signal o. In general i and o are two different types of signals, e.g. light irradiance and voltage. The invertible, but non-linear, function f that relates o to i is called the sensor response characteristic. In general f depends on N constant parameters cn, n = 1..N.
A fundamental assumption in most computer vision algorithms is that the image intensity is a linear function of the object radiance. This assumption obviously does not hold for images taken by non-linear imaging sensors. It has been shown that in the case of a logarithmic device, edge detection using second order signal derivatives, for example, may suffer from spatial errors of up to several pixels. Thus calculating a linear sensor response l prior to linear image analysis is paramount for accurate geometrical image measurements.
The example used throughout is that of a Complementary Metal-Oxide-Silicon (CMOS) imaging device with logarithmic response f. CMOS sensors are being developed as a cheap alternative to the more established Charge-Coupled Device (CCD) sensors. Due to the logarithmic response f of some CMOS sensors, they can offer a dynamic range of 120 dB or more. Such a huge dynamic range is of great interest to many machine vision systems which have to cope with large variations in object radiance. In view of the fast evolving popularity of CMOS imaging devices and the rapid increase in computer vision systems in many areas of everyday life, techniques for determining f in order to calculate l may be of great commercial interest.
The logarithmic sensor response f may be written as

$$o(x) = f(i(x)) = c_1 \ln i(x) + c_2 \quad (59a)$$

where the independent variable x denotes a spatial dimension of the signal. Thus for two input signals i1 and i2 the difference of the corresponding output signals o1 and o2 becomes

$$o_1(x) - o_2(x) = c_1 \, \ln \frac{i_1(x)}{i_2(x)}$$

Using this equation, c1 may be found as described in greater detail below.
The linear sensor response l can be defined by

$$l = k_1 \, i + k_2 \quad (59b)$$

where k1 and k2 are two constants. Calculating l obviously requires knowledge of f. For the special case of equation 59a, the simplest way to calculate l is by setting k1 = 1, k2 = 0. Then combining (59a) and (59b) yields

$$l = \exp\!\left( \frac{o - c_2}{c_1} \right)$$
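A sketch of this linearisation step (assuming the logarithmic model of equation 59a with known parameters c1 and c2; illustrative only):

```python
import numpy as np

def linearise(o, c1, c2):
    """Convert a logarithmic sensor output o back to the linear
    response l = exp((o - c2) / c1) of equations 59a/59b."""
    return np.exp((o - c2) / c1)
```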
The simplest known way to determine the parameters cn of f is to apply M different input signals to the sensor at positions xm, m = 1..M, and to measure the corresponding output signals. From the data pairs (i(xm), o(xm)) the parameters cn can be estimated, provided M is sufficiently large. The disadvantage of this approach is the need to measure the absolute values of i at xm accurately.
It would be much more practical if the absolute information required to determine cn lay in some relative measure between the various signal points i(xm). The present invention therefore uses a test signal containing step edges of arbitrary height and exploits signal defocusing to give i(xm) certain known properties. These properties can then be used to determine cn from o(xm).
Step edges are favourable image features because they are easy to create in spacial signals.
Defocusing is a unique means of imposing known properties on to i for two reasons: 1. Most imaging systems already contain a focusing system by default.
2. The focusing system is usually the only available component between the test image and the imaging sensor to modulate the test signal.
There are various ways to infer the sensor parameters cn from o(x) depending on the assumptions made about the defocusing kernel h(x). Useful assumptions are for instance:
1. spatial symmetry, i.e. h(x) = h(-x)
2. central maximum, i.e. h(x) < h(0) for all x ≠ 0
3. Gaussian shape, i.e. h(x) = (2πt)^(-1/2) exp(-x²/(2t)) for some scalar value of t.
Thus, test signals generated by a test signal generator are passed through a focusing system to provide blurred step edges to the non-linear imaging sensor. The signals generated by the non-linear imaging sensor are received by a signal analysis unit. In this way, the invention provides a pure software solution which allows signals from a non-linear imaging sensor to be converted back to a linear-response-type signal so that known vision algorithms can be used more accurately with the signals.
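A minimal numerical sketch of this pipeline, again under the assumed two-parameter model o = c1 ln(c2 i); the particular property exploited here — a symmetric blur kernel makes the blurred step satisfy i(x0+d) + i(x0-d) = i_lo + i_hi about the edge centre — is one concrete instance of the "known properties" idea, not necessarily the derivation used by the invention:

import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import brentq

x = np.arange(-100, 101)
i = 10.0 + 90.0 * (x >= 0)                  # ideal irradiance step edge
i = gaussian_filter1d(i, sigma=4.0)         # defocusing by the lens

c1, c2 = 20.0, 0.5                          # "unknown" sensor parameters
o = c1 * np.log(c2 * i)                     # logarithmic sensor output

# the symmetric blur forces i(x0+d) + i(x0-d) = i_lo + i_hi about the edge
# centre x0 (here x0 = -0.5, between indices 99 and 100); mapping outputs
# back through exp(o/c1) and demanding this sum property gives one equation
# in c1 alone -- no absolute irradiance values are needed
o_lo, o_hi = o[0], o[-1]
o_p, o_m = o[100 + 3], o[99 - 3]

def residual(c1_hat):
    return (np.exp(o_p / c1_hat) + np.exp(o_m / c1_hat)
            - np.exp(o_lo / c1_hat) - np.exp(o_hi / c1_hat))

c1_est = brentq(residual, 5.0, 60.0)        # close to the true c1 = 20
linear = np.exp(o / c1_est)                 # proportional to i, i.e. linearised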
In a further embodiment of the invention, a method of measuring the depth of an object in an image from the extent of blur or defocus of the object is provided.
The idea of depth-from-defocusing techniques is to recover the distance between an object in the 3D world and a camera lens by analyzing the blurring of the image of the object on the sensor plane of the camera.
Various methods have been proposed over the last decade.
The Depth-from-Defocus (DFD) paradigm tries to determine the object distance directly from the amount of blurring on the sensor plane. Hence it needs a blurring model that relates the image blurring to the object distance.
The great advantage of DFD over Depth-from-Focus (DFF) techniques is the fact that it can work with as few as two or even one image. In the case of single-image DFD the scene does not need to be static.
The present invention relates to an improvement to the DFD paradigm.
DFD makes use of the fact that, unlike a pin-hole camera, a camera with a finite aperture and a lens can image only points at one particular distance sharply on the imaging sensor. This becomes clear when comparing the paths of light in the two camera types using
geometric optics.
Assume a point source of light in the 3D world in front of the camera, i. e. the object space of the camera lens.
It emits rays of light in all directions of the object space. In the case of an ideal pinhole camera only the light ray which passes exactly through the pinhole may enter the camera. This light ray produces the image of the light source on the sensor plane. Conversely, the irradiance at any point in the sensor plane is solely determined by the radiance of the corresponding point in the object space. This means that no blurring occurs in the image plane regardless of the position of the 3D point and the sensor in the object space and image space, respectively. The same fact may also be expressed by saying that the whole object space is in focus or the depth of focus is infinite. Consequently the DFD technique cannot work with a pinhole camera.
The technique is, however, applicable to cameras with finite apertures and lenses. The idea of a lens is to focus all the light rays which are emitted from a single point source in the object space and pass the aperture into one point in the image space. Therefore more of the emitted light is transferred into the image space.
However the price to pay for this advantage is the loss of infinite depth of focus. As becomes clear from figure 19, for a given distance z1 between the lens and the sensor plane, only the points of the object space at a certain distance zo away from the lens are perceived as points in the sensor plane. A point at any other distance is focused at a distance zi ≠ z1 in the image space. Hence, if the aperture is circular, its image on the sensor plane is a circle of radius r. The relationship between the object space and image space distances is determined by the focal length f of the
lens, i.e. 1/zo + 1/zi = 1/f (equation 61).
Furthermore from figure 19 it may be seen that
where A is the diameter of the lens aperture. Thus zi may be expressed as
Equations 61 and 62 can be combined in order to eliminate zi.
Therefore zo can be calculated as
with F = f/A being the F-number of the lens. If the object distance zo is greater than the focused distance, equation 63 becomes
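A hedged reconstruction of these relationships from the standard thin-lens geometry of figure 19 (the sign convention and the correspondence to equation numbers 62 to 65 are assumptions):

\[ \frac{2r}{A} = \frac{\lvert z_1 - z_i \rvert}{z_i} \tag{62} \]

\[ z_o = \frac{f\, z_1}{z_1 - f \pm 2 r F}, \qquad F = \frac{f}{A} \tag{63, 65} \]

where the minus sign applies when the object lies farther than the focused distance and the plus sign when it lies nearer.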
Note that if the camera parameters f, F and z1 are kept constant, zo is a function of the blur circle radius r only. Thus equation 63 can be used to calculate the depth of a point in the object space by simply measuring the blurring of its image in the sensor plane.
The idea of depth measurements from penumbral blur is illustrated in figure 20. It shows a diffused light source which illuminates an object plane. Light is emitted only through the aperture of the light source which is of size A. Some of the light is blocked by an occluding edge which is placed a distance v in front of the light source. The distance between the edge and the object plane is denoted as u.
Due to the finite extent of the light source, there is no sharp transition between the fully illuminated and the completely occluded area of the object plane. This effect is called penumbral blurring. As indicated in figure 20, the area of penumbral blur is defined by the two extreme light rays emitted by the light source which can just pass the occluding edge.
From figure 20 it can be seen that the distance u of the object plane is given as u = (v/A)w (equation 66).
If v and A are known and w can be measured, equation 66 can be used to determine u.
The occluding edge can easily be made part of the light source. It may be one edge of the exit window of the light source, for instance. In this case A and v are fixed and solely determined by the light source geometry.
Thus the ratio v/A can be measured during a calibration process.
Penumbral blur can be modelled by Gaussian blurring. Thus the width w of the blurring area is proportional to the square root of the variance t_o of the Gaussian blurring kernel of the radiance function of the object plane, i.e. w ∝ √t_o (equation 67).
The variance t_o can be measured by taking an image of the object plane with a linear projective imaging device such as a CCD camera (figure 20). Then the variance t_i of the Gaussian blurring kernel measured in the image of the area of penumbral blur is again proportional to t_o, i.e. t_i ∝ t_o (equation 68).
Combining equations 66 to 68 yields a relation of the form u = c√t_i, where the constant c collects the light source geometry and the two proportionality factors. The image blurring t_i can be measured with Gaussian Edge
Characterisation. Thus, the method described above provides one useful application of the Gaussian Edge Characterisation methods of the invention which has many potential practical uses. For example, a light source with an occluding edge and an imaging device could be provided on a production line in a factory to verify that the height or depth of products passing under the light source was within an acceptable range.
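As a sketch of how t_i might be recovered from an imaged edge in practice — the discretisation, the sample point and all numerical values below are illustrative assumptions — the derivative ratios of a Gaussian-blurred edge determine both the blur variance and the sub-pixel edge position:

import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.arange(-50.0, 51.0)
a = gaussian_filter1d(np.where(x >= 0, 1.0, 0.0), sigma=3.0)   # blurred edge, t = 9

a1 = np.gradient(a, x)          # first derivative: a Gaussian profile
a2 = np.gradient(a1, x)
a3 = np.gradient(a2, x)

# for a' Gaussian: a''/a' = -(x - x0)/t and a'''/a' = ((x - x0)^2 - t)/t^2
k = 52                          # any pixel near the edge, here x = +2
r2, r3 = a2[k] / a1[k], a3[k] / a1[k]
t_est = 1.0 / (r2**2 - r3)      # blur variance, close to 9
x0_est = x[k] + r2 * t_est      # sub-pixel edge position, close to -0.5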
In a further embodiment, the method of the invention can be applied to corners in images. Figure 21 shows the gray-level image of a corner, which is the region around the point of intersection of two straight edges. It consists of individual pixels p which are arranged on a square grid. The spatial coordinates along the columns and rows of the grid are denoted by x and y, respectively.
The gray-level values of the individual pixels can be described by the intensity function a(x, y), which is illustrated in Figure 22. a(x, y) is the quantity measured by a monochrome area scan camera.
Given the intensity function a(x, y), the method of the invention allows the identification of edge pixels, i.e. pixels which lie at the centre of the transition from the dark to the bright image gray level. This process is called edge detection at corners or simply corner detection. For the particular case of Figure 21, the process should produce an edge image as shown in Figure 23, where edge pixels and non-edge pixels are marked in white and black colour, respectively.
In summary, Gaussian Corner Characterisation includes the following three steps:
1. Differentiation along the two edges of the corner. The result of the differentiation along the first edge is a function a_y1 which is a one-dimensional Gaussian function of the spatial variable x2 across the second edge (Figure 24). Conversely, differentiation along the second edge yields a function a_y2 which is the one-dimensional Gaussian function of the spatial variable x1 across the first edge (Figure 25).
2. Gaussian Edge Characterisation of a_y1 and a_y2 along x2 and x1, respectively, yields local estimates of the corner parameters, i.e. x1, x2 and the image blur.
3. Combining the results of Gaussian Edge Characterisation for a_y1 and a_y2.
Note that this algorithm can easily be extended to characterise junctions where more than two edges meet.
In this case differentiation along each of the edges is required.
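A minimal numerical sketch of the three steps for the special case of a right-angle corner aligned with the pixel grid (β = 90°); general orientations require the directional derivatives introduced below, and all values here are synthetic assumptions:

import numpy as np
from scipy.ndimage import gaussian_filter

yy, xx = np.mgrid[-30:31, -30:31].astype(float)
a = gaussian_filter(np.where((xx >= 0) & (yy >= 0), 1.0, 0.0), sigma=2.0)

# step 1: differentiation along the first edge (here: along y) yields a
# one-dimensional Gaussian profile across the second edge
a_y = np.gradient(a, axis=0)

# step 2: Gaussian Edge Characterisation of that profile in one column
col = a_y[:, 45]                      # fixed x, well inside the bright side
d1 = np.gradient(col)
d2 = np.gradient(d1)
k = 32                                # sample near the edge, y = +2
r1, r2 = d1[k] / col[k], d2[k] / col[k]
t_est = 1.0 / (r1**2 - r2)            # blur variance, close to sigma**2 = 4
y0_est = yy[k, 0] + r1 * t_est        # sub-pixel position of this edge

# step 3: repeat with np.gradient(a, axis=1) to characterise the other edge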
To implement Gaussian Corner Characterisation, two edge-related coordinate systems (x1, y1) and (x2, y2) are defined, both of which have their origin at the vertex of the corner (Figure 26). xi and yi denote the coordinate axes across and along the i-th edge of the corner, respectively. For simplicity it is assumed here that the orientations of the two edges are known. Thus the directional derivatives a_yi and a_xi along and across the i-th edge at every point (x, y) can easily be calculated.
The angle between y1 and y2 is denoted as β. If β is taken to be positive in the anti-clockwise direction, the two edge-based coordinate systems are related by
Without loss of generality it is also assumed in this explanation that 0 < β < π. Thus an unblurred corner h(x1, y1) of height Δh and offset h0 may be written as
where Uc denotes the one-dimensional continuous step function and
Gaussian Corner Characterisation again assumes a Gaussian blurring kernel g(x1, y1, t) = (1/(2πt)) exp(-(x1² + y1²)/(2t)), where t is the variance of the filter kernel. The blurred corner a(x1, y1, t) is obtained by convolving h(x1, y1) and g(x1, y1, t) (equation 3e).
The principal aim of Gaussian Corner Characterisation is to determine the unknown parameters x1, x2, t.
To this end equation 3e is differentiated with respect to the spatial variables x1 and y1, which yields
Thus
However, from 1e the following is obtained
Combining 2e, 4e and 5e gives
Differentiating with respect to x1 yields
Therefore
which is a linear relationship between x1 and t. A final differentiation of 6e with respect to x1 gives the desired results
Combining 6e and 7e leads to the following expression for x1.
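A consistent form for these relationships — the notation is an assumption — follows from the fact, stated above, that a_y2 is a one-dimensional Gaussian in x1:

\[ \frac{\partial_{x_1} a_{y_2}}{a_{y_2}} = -\frac{x_1}{t}, \qquad \frac{\partial_{x_1}^{2} a_{y_2}}{a_{y_2}} = \frac{x_1^{2} - t}{t^{2}} \]

\[ t = \left[\left(\frac{\partial_{x_1} a_{y_2}}{a_{y_2}}\right)^{2} - \frac{\partial_{x_1}^{2} a_{y_2}}{a_{y_2}}\right]^{-1}, \qquad x_1 = -\,t\, \frac{\partial_{x_1} a_{y_2}}{a_{y_2}} \]

with the dual expressions for x2 obtained by exchanging the roles of the two edges.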
This derivation was based on the (x1, y1) coordinate system. However, if the (x2, y2) coordinate system is used instead, the following dual relationships can be derived
Combining equations 7e, 8e, 10e and 11e, the parameters x1, x2, t can be determined from image derivatives only.
In order to obtain the edge image of figure 23, all points for which the condition |x1| ≤ th_x1 and |x2| ≤ th_x2 holds are marked, where th_x1 and th_x2 are two suitable thresholds. If a(x1, y1, t) is sampled on a square grid, th_x1 and th_x2 should be made dependent on the orientation of the edges relative to the sampling grid. For example, in the easiest case of an edge that is aligned with the direction of the sampling rows or columns the correct threshold is half a sampling interval.
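As a sketch of this marking rule — the per-pixel estimates and thresholds below are placeholder assumptions rather than outputs of the full method:

import numpy as np

rng = np.random.default_rng(0)
x1_hat = rng.uniform(-3.0, 3.0, size=(64, 64))   # placeholder distance estimates
x2_hat = rng.uniform(-3.0, 3.0, size=(64, 64))

th_x1 = th_x2 = 0.5                              # half a sampling interval
edge_image = (np.abs(x1_hat) <= th_x1) & (np.abs(x2_hat) <= th_x2)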
In general neither of the two edge-based coordinate systems (x1, y1) and (x2, y2) lines up with the actual image coordinate system (x, y). The latter may be defined by the pixel rows and columns, for instance.
The relationship between the coordinate system of the i-th edge and the image coordinate system is fixed by the rotation angle αi. It may be expressed as
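A natural form for this relation — the sign convention is an assumption — is the plane rotation

\[ \begin{pmatrix} x_i \\ y_i \end{pmatrix} = \begin{pmatrix} \cos\alpha_i & \sin\alpha_i \\ -\sin\alpha_i & \cos\alpha_i \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} \]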
The rotation angles αi can be determined from image derivatives along the image coordinate system. To this end only relationships which include image derivatives along and across the edges are considered.
Differentiating equations 6e and 9e with respect to y1 and y2, respectively, yields
which are two examples of such relationships. Given the derivatives along the axes of the image coordinate system, these equations can be solved for the unknowns α1 and α2. It will be appreciated that the embodiments of the inventions described above are illustrative only and that the scope of the inventions is intended to be limited only by the statements of invention.

Claims (23)

CLAIMS
1. A method for analysing a set of data to obtain characteristics defining the data in which a combination of partial derivatives and local differences, which are with respect to the characteristics and of minimal order, is used to find numerical values for the characteristics.
2. A method according to claim 1, in which the data is modelled by a Gaussian curve obtained by convolving a Gaussian blurring kernel with the equation of a straight step-edge.
3. A method according to claim 1, in which the data is modelled by a Gaussian curve obtained by convolving a Gaussian blurring kernel with the equation of a logarithmic step response such that the characteristics of the data may be obtained for a data set representing a logarithmic sensor response.
4. A method according to claim 1, in which the data is modelled by a corner consisting of several Gaussian blurred step edges and the model of the corner obtained is differentiated along each edge.
5. A method according to claim 2 or claim 3 or claim 4, in which the first, second and third spatial derivatives of the Gaussian are used to solve for the characteristics.
6. A method according to claim 1, in which the data is modelled by a Gaussian curve obtained by convolving a Gaussian blurring kernel with the equation of a perfect impulse.
7. A method according to claim 6, in which the Gaussian curve and its first and second spatial derivatives are preferably used to solve for the characteristics.
8. A method according to claim 1, in which the characteristics found include an indication of the location of an event such as a peak, edge or corner in the data.
9. A method according to claim 1, in which the characteristics found include a measure of the extent of blurring of the image data.
10. A method according to claim 9, in which the measure of the extent of blurring of the image data is used to estimate the depth of an object in the image.
11. A method according to claim 1, in which discrete data is modelled in terms of a modified Bessel function.
12. A method according to claim 1, in which the data is modelled in terms of a function for which the defining parameters can be expressed in terms of partial derivatives and local differences with respect to the spatial parameters in order to find the numerical values of the characteristics of the data.
13. A method according to claim 1, in which the data is modelled by a function and the function is solved using the first, second and third derivatives thereof to obtain the characteristics of the data.
14. A method according to claim 8, in which a point in the data set is found to be at an event such as a peak, edge or corner in the data if the distance of the point from the event is less than a predetermined threshold.
15. A method according to claim 1, in which tolerances are introduced into the constraints on the estimated characteristics to allow for signal noise and/or sampling noise.
16. A method according to claim 1, in which the estimated characteristics obtained at several signal points are averaged to improve the accuracy of the results obtained.
17. A method according to claim 1, in which the data is modelled by a function obtained by convolving the equation of a double step edge with a Gaussian blurring kernel, and numerical values for the characteristics of the data are obtained from a combination of partial derivatives and local differences up to order five.
18. A method according to claim 2, in which the first and second derivatives, together with the difference in values between discrete points in the data, are used to solve for the characteristics of the data.
19. A method of characterising a non-linear imaging sensor response in which an image containing step edges of known spatial distribution is defocussed before being provided to the non-linear imaging sensor and the output of the non-linear imaging sensor is analysed to obtain the characteristics of the non-linear imaging sensor.
20. A system for obtaining the characteristics of a non-linear imaging sensor including a test signal generator, focussing means for blurring the output of the test signal generator and a signal analysis unit for analysing the output from the non-linear imaging sensor which receives the blurred test signals.
21. A method of estimating the distance of an object in an image from a light source, in which the object is illuminated by a light source which provides an occluding edge at a known distance therefrom, an image of the illuminated object is obtained, the width of the area of penumbral blur in the image is measured and the distance of the object from the light source is then calculated using the measured width and the known distance of the occluding edge from the light source.
22. A method according to claim 21, in which the width of the area of penumbral blur in the image is obtained by analysing the image data to obtain the characteristics thereof using one of the methods described above in which the data is modelled by a Gaussian curve, and calculating the width as being dependent on the square root of the variance of the Gaussian blurring kernel.
23. A computer program for implementing any of the methods described above.
GB0116468A 2001-07-05 2001-07-05 Image data analysis Withdrawn GB2379113A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0116468A GB2379113A (en) 2001-07-05 2001-07-05 Image data analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0116468A GB2379113A (en) 2001-07-05 2001-07-05 Image data analysis

Publications (2)

Publication Number Publication Date
GB0116468D0 GB0116468D0 (en) 2001-08-29
GB2379113A true GB2379113A (en) 2003-02-26

Family

ID=9917989

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0116468A Withdrawn GB2379113A (en) 2001-07-05 2001-07-05 Image data analysis

Country Status (1)

Country Link
GB (1) GB2379113A (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115586485B (en) * 2022-09-30 2024-04-09 北京市腾河科技有限公司 Signal step value extraction method and system, electronic equipment and storage medium


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5479535A (en) * 1991-03-26 1995-12-26 Kabushiki Kaisha Toshiba Method of extracting features of image
US5463697A (en) * 1992-08-04 1995-10-31 Aisin Seiki Kabushiki Kaisha Apparatus for detecting an edge of an image
WO2001052188A2 (en) * 2000-01-12 2001-07-19 Koninklijke Philips Electronics N.V. Method and apparatus for edge detection

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Local scale control for edge detection and blur estimation", J. H. Elder et al., IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 20, No. 7, pages 699-716, 1998 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011042738A2 (en) 2009-10-07 2011-04-14 Cambridge Enterprise Limited Image data processing systems
US8938109B2 (en) 2009-10-07 2015-01-20 Cambridge Enterprise Limited Image data processing systems for estimating the thickness of human/animal tissue structures
US8624986B2 (en) 2011-03-31 2014-01-07 Sony Corporation Motion robust depth estimation using convolution and wavelet transforms

Also Published As

Publication number Publication date
GB0116468D0 (en) 2001-08-29

Similar Documents

Publication Publication Date Title
Rufli et al. Automatic detection of checkerboards on blurred and distorted images
Placht et al. Rochade: Robust checkerboard advanced detection for camera calibration
JP5542889B2 (en) Image processing device
CA2326816C (en) Face recognition from video images
US9025862B2 (en) Range image pixel matching method
KR20150117646A (en) Method and apparatus for image enhancement and edge verification using at least one additional image
KR20110111362A (en) Digital processing method and system for determination of optical flow
Nieto et al. Real-time vanishing point estimation in road sequences using adaptive steerable filter banks
US10628925B2 (en) Method for determining a point spread function of an imaging system
CN113324478A (en) Center extraction method of line structured light and three-dimensional measurement method of forge piece
US20200258300A1 (en) Method and apparatus for generating a 3d reconstruction of an object
CN112313541A (en) Apparatus and method
Babbar et al. Comparative study of image matching algorithms
CN110959099B (en) System, method and marker for determining the position of a movable object in space
CN113822942A (en) Method for measuring object size by monocular camera based on two-dimensional code
Choudhuri et al. Crop stem width estimation in highly cluttered field environment
GB2379113A (en) Image data analysis
Reich et al. A Real-Time Edge-Preserving Denoising Filter.
Senel Gradient estimation using wide support operators
Dryanovski et al. Real-time pose estimation with RGB-D camera
Chandrakar et al. Study and comparison of various image edge detection techniques
JP3275252B2 (en) Three-dimensional information input method and three-dimensional information input device using the same
JP5887974B2 (en) Similar image region search device, similar image region search method, and similar image region search program
Chidambaram Edge Extraction of Color and Range Images
JP2002312787A (en) Image processor, image processing method, recording medium and program

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)