WO2005059834A1 - Estimation of orientation of image features - Google Patents

Estimation of orientation of image features

Info

Publication number
WO2005059834A1
Authority
WO
WIPO (PCT)
Prior art keywords
filter
steerable
orientation
filters
responses
Prior art date
Application number
PCT/GB2004/005247
Other languages
French (fr)
Inventor
John Michael Brady
Veit Ulrich Boris Schenk
Original Assignee
Isis Innovation Limited
Priority date
Filing date
Publication date
Application filed by Isis Innovation Limited filed Critical Isis Innovation Limited
Publication of WO2005059834A1 publication Critical patent/WO2005059834A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20048Transform domain processing
    • G06T2207/20056Discrete and fast Fourier transform, [DFT, FFT]

Definitions

  • the present invention relates generally to the analysis of digital images, typically by a computer system.
  • the present invention relates to the estimation of orientation of image features in a digital image.
  • Digital images may be acquired by a wide variety of devices and systems, including CCD and other digital cameras and image capture devices sensitive to various regions of the electromagnetic spectrum. Whereas many imaging devices capture a two-dimensional still image, other systems and devices capture a three-dimensional still image. Examples are a magnetic resonance imaging (MRI) system or a computed tomography (CT) system. Also, whereas many imaging devices and systems capture a still image, other devices and systems produce a motion image, that is a series of still images over time. Examples are a digital video camera or a contrast-enhanced MRI system.
  • MRI magnetic resonance imaging
  • CT computed tomography
  • Two-dimensional and three-dimensional spatial images consist of a set of values I(r) in respect of points at position r over space, commonly known as pixels.
  • a pixel is often referred to as a voxel which is an abbreviation of "volume pixel".
  • Motion images consisting of a series of two-dimensional or three-dimensional still images may be considered as three-dimensional or four-dimensional images, respectively, consisting of values I(r, t) in respect of points at position r and at time t.
  • the present invention is generally applicable to any such digital images which may be two-dimensional, three-dimensional or four-dimensional.
  • an orientation may be associated with the image features at each point of an image.
  • a direction may be defined, for example by the unit normal, the unit tangent or the unit horizontal or vertical.
  • the possible orientations correspond to a circle, a sphere or a hypersphere of the appropriate dimensions.
  • the estimation of orientation is of key importance in image analysis for supporting a wide range of applications, including but not exhaustively the segmentation of a shape from its background, the recognition or classification of a shape, the tracking of a shape in motion or the quantification of growth or shrinkage of a shape.
  • the resolution of the sampling over space reduces the accuracy of the estimation, thereby increasing the error and uncertainty in the estimated orientation.
  • a related issue is that for image features having texture, the orientation of the image feature varies with the scale at which the orientation of the image is analysed. For example, a line which is straight when viewed at a large scale, but curved when viewed at a small scale will have a single orientation when analysed at a coarse resolution, but will have a continuously varying orientation when analysed at a fine resolution.
  • a feature is locally one-dimensional if the image changes (at the scale at which it is analysed) in one direction only and has no change in the perpendicular direction(s).
  • real images include a variety of different features which are not one-dimensional.
  • Such complex features which have more than one dimension are varied and include corners, junctions, crossing image features and other more complicated features.
  • the orientation of the image feature at the point in question may be defined as the orientation of maximal change (at the scale of the analysis), this definition of orientation does not by itself overcome the problem of providing a proper analysis of such complex features.
  • Known techniques for analysing a digital image to estimate the orientation of image features involve filtering the image with a plurality of filters which are each oriented in different orientations.
  • the filters are oriented in the respective orientations and produce respective filter responses oriented in those orientations.
  • the orientation of an image feature is estimated from the filter responses.
  • Such known methods exploit the fact that a response depends on the relative orientation of the filter and the feature. For example, if we define orientation to be the normal of the one-dimensional feature (as opposed to the tangent), then the one-dimensional feature produces a large response when filtered by a filter oriented in the same direction as the one-dimensional feature and a small (or zero) response when filtered by a filter oriented in a perpendicular direction. This idea is used to estimate the orientation of the image feature at the point in question.
  • a method of analysing a digital image comprising, in respect of each point in a target region of the image: filtering the image with a steerable filter to produce a plurality of filter responses derived by the same filter characteristic oriented in different orientations; estimating the orientation of an image feature from the filter responses of the steerable filter; and calculating an error measure between (a) the theoretical responses of the steerable filter to a one-dimensional feature oriented in the estimated orientation, and (b) the actual responses of the steerable filter.
  • Such a method provides a number of advantages over the known techniques summarized above.
  • it allows precise estimation of orientation of the image feature at the point in question irrespective of the actual orientation, in particular using a weighting which is accurate for all orientations.
  • the error measure of the present invention provides a reliable and robust measure of the uncertainty in the estimated orientation by indicating precisely the extent to which the image feature is one-dimensional and hence the extent to which it is meaningful to assign an estimated orientation to the feature.
  • Such an uncertainty measure is important, because the estimated orientation is only valid if the image feature is one-dimensional, whereas at a complex feature or for a noisy response, the estimation of orientation should not be trusted.
  • a steerable filter is a filter bank which comprises a set of basis filters at different orientations.
  • Each basis filter has the same filter characteristic relative to its respective orientation.
  • the basis filters may be thought of as rotated copies of each other.
  • the basis filters also have the property that a filter with the same filter characteristic in any arbitrary orientation may be synthesized by a linear combination of the basis filters. Consequently, a steerable filter has the property that the output filter responses are capable of synthesizing a filter response in any arbitrary orientation with the same filter characteristic relative to that orientation by taking a linear combination of the output filter responses.
  • the process of taking a linear combination of the filter responses to synthesize a filter response in a particular orientation is known as "steering" the filter to that orientation.
  • the properties of a steerable filter mean that the estimation of the orientation of an image feature from the filter responses is exact, irrespective of the actual orientation of the image feature. This follows from the fact that the filter may be steered to any orientation so the filter characteristic in every orientation is known.
  • the properties of the steerable filter result in the error measure calculated in accordance with the present invention being a meaningful measure of the uncertainty in the estimated orientation.
  • the theoretical responses of the steerable filter to a one-dimensional feature are predictable.
  • the theoretical responses of the steerable filter to a perfectly one-dimensional feature when it is steered to all possible orientations correspond exactly to the angular component of the filter characteristic in the frequency domain.
  • the actual responses of the steerable filter will be the theoretical responses, so the error measure will be zero.
  • the actual responses of the steerable filter will vary from the theoretical responses and the error measure will represent the extent to which the feature is not one-dimensional.
  • the error measure calculated in accordance with the present invention is a measure of the error between the theoretical responses and the actual responses, and so is an exact analytical measure of the extent to which the feature is one-dimensional.
  • the error measure is sensitive to uncertainty in the estimated orientation resulting from the image feature at the point in question being a complex feature of the type described above, for example a corner or a junction.
  • the error measure is also sensitive to uncertainty due to noise.
  • the error measure may therefore be used to attribute a confidence to the interpretation that a feature whose existence and orientation has been detected is in fact locally one-dimensional, rather than locally two-dimensional.
  • the error measure is independent of the type of the underlying feature.
  • the present invention is capable of dealing with any type of feature.
  • the present invention does not require an explicit model to be constructed for any particular type of image feature, because the error measure will identify any type of feature which is not locally one-dimensional.
  • the error measure is independent of the orientation of the feature at the point in question. This is due to the use of a steerable filter as discussed above.
  • the error measure is independent of the resolution of the steerable filter. This allows the steerable filter to be designed to analyse the image at any scale, depending on what scale is appropriate for the image features of interest.
  • the present invention provides the advantage that the method may be performed in respect of each of a plurality of steerable filters, each having a different resolution.
  • Such implementation of steerable filters in a multi-resolution framework allows the orientation of image features to be simultaneously estimated at plural scales.
  • a further advantage of the combination of invariance with scale and with the type of feature is that the error measure may be calculated at all locations across a feature, not just its edges. This is particularly important when the features are broad relative to the resolution of the image data.
  • orientation can be estimated at a plurality of different scales, as is appropriate for image features having textures, as described above. Furthermore, an error measure is derived for the estimation of orientation at each scale.
  • A plurality of different types of error measure may be used. Examples of suitable error measures are given in the detailed description of an embodiment of the invention below.
  • the step of calculating an error measure includes normalising said actual responses of the steerable filter before calculation of said error measure.
  • the error measure may be normalised across the image irrespective of the magnitude of the filter responses of the steerable filter.
  • the present invention is independent of the shape of the filter provided that it is a steerable filter. Accordingly, the present invention may be implemented with a steerable filter of any shape. This allows the selection of a filter with an appropriate shape for the image features of interest. Specific examples are given in the detailed description of a preferred method given below.
  • a preferred form of steerable filter is a local energy filter comprising two steerable sub-filters in quadrature.
  • Such a local energy filter is in itself known.
  • the image is first filtered separately by the two sub-filters and secondly the local energy filter response may be calculated as the amplitude of the responses of the two sub-filters in quadrature, that is as the square root of the sum of the squares of the responses of the two sub-filters.
  • the step of filtering the image is performed in the frequency domain. This simplifies the implementation of the steerable filter and allows the steerable filter to be implemented with a high computational efficiency.
  • the steerable filter has a polar-separable filter characteristic in the frequency domain. This also allows the steerable filter to be implemented with a high computational efficiency.
  • the estimation of orientation uses a vector-sum technique in which the vector-sum of the responses of the steerable filter is calculated and the orientation of the vector-sum is taken as the estimated orientation.
  • a vector-sum technique is known in itself.
  • the vector sum method is a mathematical method which gives the precise analytical solution for perfectly locally linear features, thereby giving the exact orientation subject to any noise present.
  • any other technique for estimating the orientation of the image feature from the filter responses could be applied.
  • the target region of the image to which the method of the present invention is applied is the entire image.
  • the present invention could equally be applied to a target region which constitutes merely a portion of the entire image.
  • the present invention may be implemented by a computer program executable on a computer system, such as a conventional PC. On execution of the computer program, the computer performs the method. Therefore, in accordance with further aspects of the present invention there is provided a computer program, a storage medium storing the computer program in a form readable by a computer system, or a computer system loaded with the computer program.
  • the method could equally be implemented in hardware or a combination of hardware and software as appropriate for the specific application.
  • the error measure may be used in a variety of different ways. The error measure may simply be provided as a data set accompanying the data set representing the estimated orientations. Alternatively, the error measure and/or the estimated orientations may be further processed.
  • the error measure may be thresholded to indicate areas of high uncertainty.
  • the present invention has a wide range of applications. In general, it may be applied in any situation where it is desired to estimate the orientation of image features in a digital image.
  • the present invention may be applied to diverse technologies such as image matching, stereo vision and object tracking. It may be applied to feature classifiers of the type which distinguish between features having different dimensions. It may be applied in a "blob detector", which is an instance of a feature classifier used to detect locally round structures (“blobs”) typically in medical applications, for example calcifications in mammograms or structures such as tumours in three-dimensional and four-dimensional images from systems such as MRI systems, CT systems and PET systems.
  • Fig. 1 is a flowchart of the method
  • Fig. 2 is a flowchart showing step 1 of the method in more detail
  • Fig. 3 is a flowchart showing step 3 of the method in more detail
  • Fig. 4 shows a first synthetic image
  • Figs. 5 and 6 show, respectively, two types of error measure obtained for the first image of Fig. 4 using the preferred method
  • Fig. 7 shows the thresholding of the error measure of Fig. 5;
  • Fig. 8 shows the thresholded error measure overlaid on the first image of Fig. 4;
  • Fig. 9 shows a second synthetic image
  • Fig. 10 shows an error measure obtained for the second image of Fig. 9 using the preferred method
  • Fig. 11 shows the error measure of Fig. 10 after thresholding, overlaid on the second image of Fig. 9.
  • the hereinafter described embodiment is described with reference to a two-dimensional image.
  • the present invention is equally applicable to three-dimensional images representing points in three spatial dimensions and to three-dimensional and four-dimensional images representing points in two and three spatial dimensions, respectively, and in time as an additional dimension.
  • the hereinafter described embodiment may be generalised to such three-dimensional and four-dimensional images.
  • The preferred method is shown overall in Fig. 1.
  • the method is conveniently implemented by a computer program executed on a computer system, for example a conventional PC.
  • the examples described below used the software known as "Matlab” (trade mark), but almost any modern programming language could alternatively be used.
  • the method is performed to analyse a digital image I.
  • the image I consists of a value in respect of each point of the image I.
  • the steps of the method are performed in respect of each point of the image, although in principle the method could be performed for a target region which is a portion of the image.
  • the image I is filtered by a steerable filter. This produces a plurality of filter responses Rn, each representing the response for a filter characteristic oriented in a respective orientation.
  • the responses Rn each consist of the value of the response in respect of each point of the image I.
  • step 2 the estimated orientation θ of an image feature at each point of the image I is estimated from the filter responses Rn.
  • an error measure E is calculated in respect of each point of the image I.
  • the error measure E represents the error between (a) the theoretical responses of the steerable filter to a one-dimensional feature oriented in the estimated orientation θ estimated in step 2, and (b) the actual responses of the steerable filter.
  • step 4 which is optional, post-processing is performed.
  • Step 1 uses a steerable filter.
  • Steerable filters are in themselves known. For example, techniques for designing and using steerable filters which may be applied to the present invention are disclosed in W. T. Freeman and E. H. Adelson, "The design and use of steerable filters", IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 13, no. 9, pp. 891-906, September 1991. In general, a steerable filter comprises a set of basis filters each oriented in a different orientation. Therefore, a steerable filter may be thought of as a filter bank. Each basis filter has the same filter characteristic relative to its respective orientation. The basis filters have the property that a filter with the same filter characteristics as the basis filters but in any arbitrary direction may be synthesised by an appropriate linear combination of the basis filters.
  • the basis filters are not unique in the sense that the same steerable filter could be represented by more than one set of basis filters.
  • the output filter responses are similarly capable of synthesising a filter response in any arbitrary orientation by taking a linear combination of the output filter responses.
  • Steerable filters have the advantage that the filter characteristic is known exactly in all directions, which property is used to advantage in the present invention.
  • the steerable filter could be of any type and have any filter characteristic.
  • the steerable filter is a local energy filter.
  • Local energy filters are known in themselves, for example from M. C. Morrone and R. A. Owens, "Feature Detection From Local Energy", Pattern Recognition Letters, 1987, Vol. 6, pp. 303-313, the disclosure of which may be applied to the present invention.
  • a local energy filter comprises two sub-filters in quadrature, i.e. the sub-filters are 90° out of phase. One of the sub-filters is odd-symmetric and the other of the sub-filters is even-symmetric.
  • the amplitude of the responses of the two sub-filters in quadrature represents a response which is the local energy of the image being filtered.
  • the two sub-filters are each steerable so that the local energy filter as a whole is steerable.
  • the filter characteristic of the steerable filter in the preferred method, or to be more specific the filter characteristic of the basis filters of the steerable filter.
  • the filter characteristics of the two sub-filters and hence of the steerable filter as a whole are polar-separable in the frequency domain.
  • the radial and angular components of the filter characteristics in the frequency domain are as follows.
  • the radial component could in principle have any shape because the method is independent of the exact shape of the radial component, provided that the two sub-filters form a quadrature pair.
  • the shape of the radial components of both sub-filters is the square of a cosine on a logarithmic scale.
  • the shape is normalised to vary the radial position of the peak and to place the half-power points at the desired band width. This provides variation of the frequency components of the image to which the filter is responsive, thereby varying the scale at which the detection of orientation is performed.
  • the scale is varied in accordance with the scale of the desired image features which are to be analysed.
  • Such a log-cosine shape is preferred because of its good reconstruction properties.
  • any other shape is possible, for example a log-Gabor which is popular in image analysis filters although having worse reconstruction properties than the log-cosine shape due to being highly non-orthogonal.
  • the angular component could in principle have any shape provided that the condition of providing a steerable filter is met.
  • the shape of the angular component of the odd-symmetric sub-filter is the cube of a cosine (cos³) and the shape of the angular component of the even-symmetric sub-filter is the modulus of the cube of a cosine (|cos³|).
  • the sub-filters fulfil the requirements of being steerable.
  • the present invention uses the fact that the theoretical response of the steerable filter steered to all orientations to a one-dimensional feature oriented in a given orientation has the same shape as the angular component of the filter characteristic in the frequency domain.
  • This shape is the modulus of the cube of a cosine (|cos³|).
  • Step 1 of filtering the image I comprises a number of steps which are shown in Fig. 2 and will now be described in detail.
  • the image I is filtered in the frequency domain.
  • step 11 the original image I is Fourier transformed to give the transform of the image F(I) in the frequency domain.
  • the steerable filter is a local energy filter, at a subsequent stage (in fact in step 15 as described below) it is necessary to calculate the local energy response as the amplitude of the responses of the two sub-filters.
  • the responses of the basis filters of the odd-symmetric sub-filter need to be aligned with the responses of the basis filters of the even-symmetric sub-filter.
  • the intermediate results of the basis filters of the even-symmetric sub-filter are steered to the orientations of the basis filters of the odd-symmetric sub-filter, that is at orientations of 0, π/4, π/2 and 3π/4, respectively.
  • the sub-filters are steerable, this is achieved simply by taking the appropriate linear combinations of the intermediate results of the basis filters of the even-symmetric sub-filter.
  • step 13 steers the Ne even-symmetric intermediate results into No even-symmetric intermediate results.
  • step 14 each of the No even-symmetric results produced by step 13 and the No odd-symmetric intermediate results are inverse-Fourier transformed. This generates No odd-symmetric response maps On in respect of the basis filters of the odd-symmetric sub-filter and No even-symmetric response maps En in respect of the basis filters of the even-symmetric sub-filter.
  • Each of the response maps On and En consists of the response of the respective basis filter of the respective sub-filter, oriented in a respective orientation, in respect of each of the points of the image I.
  • step 15 the local energy responses in respect of each orientation of the basis filters of the sub-filters are calculated from the odd-symmetric response and the even-symmetric response.
  • step 15 produces No response maps Rn for the respective orientations.
  • a local energy filter is advantageous, because local energy is independent of the type of the image feature at the point in question.
  • the response may be thought of as the odd-symmetric sub-filter detecting the odd symmetric component of a feature, such as an edge, and the even-symmetric sub-filter detecting the even-symmetric component of a feature, such as a line.
  • a local energy filter can reliably detect any type of image feature.
  • the use of a local energy filter has particular advantage when used in the present invention, because the calculated error measure allows one to distinguish any type of one-dimensional feature from any type of feature having two or more dimensions. That being said, the use of a local energy filter is not essential and in principle, the present invention could be applied with any other type of steerable filter.
  • step 1 of filtering the image I would be performed in basically the same manner as described in detail with reference to Fig. 2, except that step 12 would be performed for only the basis filters of the steerable filter being used and steps 13 and 15 would be omitted.
  • step 2 the estimated orientation θ of an image feature at each point of the image I is estimated from the filter responses Rn of the steerable filter.
  • the estimation is performed using a vector-sum technique.
  • a vector-sum technique is known in itself, for example from the textbook B. Jahne, "Digital Image Processing", Springer, the portions of which concerning a vector-sum technique for estimating orientation may be applied to the present invention. In such a technique, in respect of each point of the image I, a vector-sum of the responses Rn of the steerable filter is performed and the estimated orientation of the image feature is taken as the orientation of the vector-sum. In practice, this may be implemented by performing a calculation in accordance with equation (2): θ = ½ arg( Σn Rn exp(i·2θn) ), where θn are the respective orientations.
  • equation (2) the summation represents the vector-sum.
  • the index n represents the respective orientations.
  • the exponential term represents the response expressed as a unit vector because of the use of imaginary numbers.
  • the inverse tangent has the effect of taking the orientation of the vector-sum.
  • the two constants with values ½ and 2 in equation (2) merit specific mention.
  • the constant with value 2 inside the exponential is used to double the angle of the orientation of the respective basis filters. This is necessary in order to have an unbiased distribution of filter angles around the full circle, since the basis filters used are distributed evenly over a semi-circle.
  • the constant with value ½ is used to halve the result of the inverse tangent in order to shift the resulting values into the range of -π/2 to π/2 rather than the range -π to π. The reason for this is that the orientation for one-dimensional features in two-dimensional images is periodic with π, rather than 2π. For edges, the sign can be used to extend this range to 2π.
  • Such a vector-sum technique can be shown to produce the exact local orientation in the case of a one-dimensional feature.
  • the minimum number of orientations required to do this is three.
  • the orientations used are the orientations of the No basis filters of the odd-symmetric sub-filter, that is four orientations.
  • the responses of the steerable filter could be steered to any other set of orientations sufficient to perform the vector-sum.
  • step 3 there is calculated the error measure E between (a) the theoretical responses of the steerable filter to a one-dimensional feature oriented in the estimated orientation θ and (b) the actual responses of the steerable filter.
  • Step 3 is performed using the theoretical and actual responses in respect of each of the orientations of the response maps Rn.
  • the error measure E is calculated in respect of each point of the image I.
  • the estimated orientation θ is estimated in step 2 on the underlying assumption that the image feature at the point in question is basically one-dimensional.
  • the error measure E provides a measure of the extent to which the actual response is not one-dimensional, because it is a measure of the extent to which the actual response differs from the theoretical response to a one-dimensional feature.
  • the error measure provides a measure of the uncertainty in the estimated orientation θ.
  • the error measure E is exact and reliable as a result of the fact that the theoretical responses of the steerable filter at all possible orientations to a one-dimensional feature at all possible orientations are entirely predictable.
  • the theoretical response to a one-dimensional feature has the same shape as the angular component of the filter characteristic of the steerable filter in the frequency domain. In the preferred method, this is the modulus of the cube of a cosine (|cos³|).
  • Step 3 is performed in respect of each point in the image I, using the response Rn for each orientation.
  • Step 3 involves two pre-processing steps 31 and 32.
  • In step 31, the theoretical response (|cos³|) is phase-shifted to align it with the estimated orientation θ.
  • In step 32, the actual responses Rn of the steerable filter are normalised so that they can be compared meaningfully to the theoretical response.
  • The maximum amplitude of the theoretical response (|cos³|) is of course 1, so the actual responses can be normalised in two stages as follows.
  • The first stage is to normalise the actual responses Rn by the maximum sub-band coefficient amplitude, so that the maximum value is 1 and can be properly compared to the maximum amplitude of the theoretical response (|cos³|).
  • the second stage is to normalise by the actual amplitude of the theoretical response (|cos³|) at the fixed orientations.
  • the explanation for this is that the sub-band coefficients are computed at fixed orientations, whereas the local estimated orientation θ may be at any orientation, not necessarily aligned with one of the fixed orientations of the responses Rn. Since the maximum amplitude of the theoretical response (|cos³|) does not in general fall on one of the fixed orientations, the responses are further rescaled by the amplitude of the theoretical response sampled at those orientations, so that the actual and theoretical responses are directly comparable.
  • R_Normalised = R / afFinalNormTerm
  • step 33 the error measure E is calculated from the theoretical responses derived from pre-processing step 31 and the normalised actual responses obtained from pre-processing step 32. In general, it is possible to calculate any error measure which represents the error between the theoretical and actual responses (a consolidated sketch of steps 31 to 33 is given at the end of this list).
  • the first possible type of error measure Es is the sum of the squares of the differences between (a) the theoretical responses and (b) the actual responses at each of the orientations of the responses Rn.
  • the error measure Es may be calculated using equation (3): Es = ½ Σn (Tn − Rn,normalised)².
  • the term Rn,normalised is the actual response Rn after normalisation in pre-processing step 32.
  • It corresponds to the term R_Normalised in the pseudo code set out above.
  • Tn is the theoretical response obtained in pre-processing step 31. It corresponds to the term afCos3NormTerm in the pseudo code set out above.
  • the constant ½ in equation (3) is present merely to adjust the range of possible values of the error measure Es.
  • the summation may be divided by the number No of the responses Rn to obtain a measure in the range from 0 to 2, with 0 corresponding to a perfect fit and 2 corresponding to maximum disagreement.
  • the maximum disagreement is two rather than one, because the filter response can vary in the range from +1 to -1.
  • the summation may be further divided by 2 in order to normalise the measure to be in the range from 0 to 1.
  • the first type of error measure Es works very well for finding general areas of features which are one-dimensional when using a local energy filter. It is accurate to the level of a single pixel when using a simple odd-symmetric filter or an even-symmetric filter instead of a local energy filter.
  • the second type of error measure ER is a measure of the error between the vector-sum of the theoretical responses and the vector-sum of the actual responses.
  • the measure may be the ratio of the lengths of the two vector-sums.
  • Such an error measure ER may be calculated as follows. Firstly, the vector-sum VR of the actual responses is calculated in accordance with equation (4): VR = Σn Rn exp(i·2θn).
  • equation (4) is the inner sum of equation (2).
  • Secondly, the vector-sum VT of the theoretical responses is calculated in accordance with equation (5): VT = Σn Tn exp(i·2θn).
  • the error measure is equal to the ratio of the vector-sums calculated by equations (4) and (5), that is ER = |VR| / |VT|.
  • the second type of error measure ER is in the range from 0 to 1, 0 corresponding to maximum disagreement and 1 corresponding to a perfect fit.
  • The second error measure ER has the disadvantage that it does not identify an error in the case that the two vector-sums are of the same length, but pointing in different directions. That being said, in practice it gives good results.
  • the third type of error measure EP is a measure of the sum over orientations of the error between the theoretical response in one orientation and the actual response in the same orientation. This may be considered as the projection of the No vectors represented by the actual responses onto the No vectors represented by the theoretical responses.
  • the third type of error measure EP may be calculated by the combination of the techniques used to calculate the first and second types of error measure Es and ER.
  • the actual response is projected as a component vector onto the theoretical response as a component vector and a component error measure is calculated as the magnitude of the error vector between the actual and theoretical responses (the square root of the sum of the squares of the differences between the x, y and z components of the two component vectors).
  • the component error measures for each orientation are summed over the respective orientations to derive the overall error measure EP.
  • step 3 produces an error measure E in respect of each point in the image I representing the uncertainty in the estimated orientation θ derived in step 2.
  • the estimated orientation θ and the error measure E may be used in a plurality of ways.
  • the estimated orientations may be subjected to post-processing as follows, although this is optional.
  • step 4 the error measure E is thresholded on the basis of the magnitude of the response R of the steerable filter. That is to say, a suitable threshold level is identified for the response R and the error measure E at each point in the image is classified according to whether or not the response R at that point is above or below the threshold. This has the effect of excluding from consideration areas of the image where a high value of the error measure E is caused by noise. In other words, only areas with a high local energy response R are considered for further processing.
  • A further option for post-processing in step 4 is to threshold or classify the estimated orientation θ on the basis of the magnitude of the error measure E.
  • the post-processing in step 4 may also include conventional clean-up operations, for example filtering of the estimated orientation θ to remove spurious results.
  • Fig. 4 is a synthetic image including several one-dimensional features with a variety of types of junctions therebetween.
  • the preferred method as described above was performed to analyse the image shown in Fig. 4, in particular to derive the estimated orientation θ at each point in the image I and also to derive the first and second types of error measure Es and ER.
  • Fig. 5 shows the first type of error measure Es.
  • Low values indicate a high degree of certainty in the estimated orientation.
  • Fig. 5 shows how the error measure Es indicates high certainty in the estimated orientation of one-dimensional features at positions separated from junctions, but uncertainty near junctions between the one-dimensional image features. This is due to the fact that such junctions are locally not one-dimensional.
  • Fig. 6 shows the second type of error measure ER.
  • High values of the error measure ER represent a high degree of certainty.
  • Fig. 6 shows how the error measure ER indicates high certainty in the estimated orientation of one-dimensional features at positions separated from junctions, but uncertainty near junctions between the one-dimensional image features.
  • Fig. 7 shows the error measure Es of Fig. 5 thresholded with fixed thresholds on the basis of the magnitude of the response R of the steerable filter. Areas of high local energy (R is large) are shown in red in the original version of Fig. 7. Areas with a high error measure Es are shown overlaid on the areas of high local energy in green in the original version of Fig. 7. It can be seen that the green areas indicating high uncertainty overlap the red areas indicating a high local energy only in the regions of junctions. To illustrate this further, Fig. 8 shows the regions where the thresholded error measure Es and response R overlap, that is regions with a local energy above the threshold and an error measure Es above the threshold indicating a high degree of uncertainty.
  • Fig. 8 clearly shows how the error measure indicates that there is uncertainty in the estimated orientation near junctions between the one-dimensional features. This illustrates how the error measure of the present invention is sensitive to uncertainty due to the presence of features which are locally not one-dimensional. In general, junctions may be identified as the intersection of the thresholded error measure and local energy.
  • Fig. 9 is a synthetic image consisting of an edge extending vertically down the image with a noisy pixel at the centre.
  • the preferred method as described above was performed to analyse the image shown in Fig. 9, in particular to derive the estimated orientation θ at each point in the image I and also to derive the second type of error measure ER.
  • Fig. 10 shows the error measure ER.
  • Fig. 11 shows the regions where the error measure ER after thresholding and the local energy after thresholding overlap, overlaid on the original image I. These regions are shown in red in the original version of Fig. 11.
  • In Fig. 11, only the pixel in the centre of the image I in Fig. 9 is such a region. This indicates a high degree of uncertainty at the position of the noisy pixel in the centre of the image I, which illustrates how the error measure of the present invention is sensitive to uncertainty in the estimated orientation θ caused by noise.
  • the preferred method described above analyses the image to detect orientation at a single scale, although that scale may be freely selected.
  • the orientation may be analysed at a plurality of different scales. This is done by performing the method described above in respect of a plurality of steerable filters, each having a different resolution.
  • the plurality of steerable filters may each have the same filter characteristic except with a different radial component in the frequency domain.
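As referenced above in the discussion of step 33, the following Python/NumPy sketch gathers steps 31 to 33 into one function. The exact normalisation constants and the precise forms of equations (3) to (5) are reconstructed from their textual description above and should be read as assumptions rather than as the patent's literal implementation.

```python
import numpy as np

def orientation_error_measures(responses, thetas, theta_hat):
    """Compare the actual responses Rn with the theoretical responses Tn of a
    perfectly one-dimensional feature at the estimated orientation theta_hat.

    responses : array (N, H, W) of response maps Rn at the fixed orientations
    thetas    : array (N,) of the fixed basis orientations in radians
    theta_hat : array (H, W) of estimated orientations (e.g. from equation (2))
    Returns (E_s, E_R): the sum-of-squared-differences measure scaled to [0, 1]
    (0 = perfect fit) and the vector-length-ratio measure (1 = perfect fit).
    """
    eps = 1e-12
    R = np.asarray(responses, dtype=float)
    th = np.asarray(thetas, dtype=float).reshape(-1, 1, 1)
    N = R.shape[0]

    # Step 31: theoretical |cos^3| response, phase-shifted to the estimated orientation.
    T = np.abs(np.cos(th - theta_hat)) ** 3

    # Step 32 (assumed form of the two-stage normalisation): first scale the
    # per-pixel peak of Rn to 1, then rescale to the per-pixel peak of the
    # theoretical response sampled at the fixed orientations, which is
    # generally below 1.
    R_norm = R / (np.max(R, axis=0, keepdims=True) + eps)
    R_norm = R_norm * np.max(T, axis=0, keepdims=True)

    # Step 33, first measure: half the sum of squared differences (equation (3)),
    # divided by N and by 2 so the result lies in the range [0, 1].
    E_s = 0.5 * np.sum((T - R_norm) ** 2, axis=0) / (2.0 * N)

    # Step 33, second measure: ratio of the lengths of the vector-sums of the
    # actual and theoretical responses (equations (4) and (5)).
    phase = np.exp(2j * th)
    V_R = np.sum(R_norm * phase, axis=0)
    V_T = np.sum(T * phase, axis=0)
    E_R = np.abs(V_R) / (np.abs(V_T) + eps)

    return E_s, E_R
```

In step-4 style post-processing, one would typically retain only those pixels whose local energy response exceeds a chosen threshold before interpreting Es or ER, so that high error values caused purely by noise in low-energy regions are excluded.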

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

Analysis of a digital image to estimate the orientation of image features uses a local-energy, steerable filter to produce a plurality of filter responses oriented in different orientations (step 1). The orientation of image features is estimated from the filter responses using a vector-sum technique (step 2). An error measure between (a) the theoretical responses of the steerable filter to a one-dimensional feature oriented in the estimated orientation, and (b) the actual responses of the steerable filter is calculated (step 3). The error measure provides an exact measure of the uncertainty in the estimated orientation which is sensitive both to features which are not one-dimensional and to noise. The error measure is independent of the orientation of the image feature, the scale and type of the feature and the exact shape of the filters used.

Description

Estimation of Orientation of Image Features

The present invention relates generally to the analysis of digital images, typically by a computer system. In particular, the present invention relates to the estimation of orientation of image features in a digital image. Digital images may be acquired by a wide variety of devices and systems, including CCD and other digital cameras and image capture devices sensitive to various regions of the electromagnetic spectrum. Whereas many imaging devices capture a two-dimensional still image, other systems and devices capture a three-dimensional still image. Examples are a magnetic resonance imaging (MRI) system or a computed tomography (CT) system. Also, whereas many imaging devices and systems capture a still image, other devices and systems produce a motion image, that is a series of still images over time. Examples are a digital video camera or a contrast-enhanced MRI system.
Two-dimensional and three-dimensional spatial images consist of a set of values I(r) in respect of points at position r over space, commonly known as pixels. For a three-dimensional image, a pixel is often referred to as a voxel which is an abbreviation of "volume pixel". Motion images consisting of a series of two-dimensional or three-dimensional still images may be considered as three-dimensional or four-dimensional images, respectively, consisting of values I(r, t) in respect of points at position r and at time t. The present invention is generally applicable to any such digital images which may be two-dimensional, three-dimensional or four-dimensional.
A fundamental consequence of the fact that an image has a dimension greater than one is that an orientation may be associated with the image features at each point of an image. For example, if the image contains a curve, then at each point along the curve a direction may be defined, for example by the unit normal, the unit tangent or the unit horizontal or vertical. In general, depending on the dimension of the image, the possible orientations correspond to a circle, a sphere or a hypersphere of the appropriate dimensions. In practice, the estimation of orientation is of key importance in image analysis for supporting a wide range of applications, including but not exhaustively the segmentation of a shape from its background, the recognition or classification of a shape, the tracking of a shape in motion or the quantification of growth or shrinkage of a shape. However, when estimating orientation, there are a number of fundamental difficulties which need to be addressed, as follows.
Firstly, most images contain substantial amounts of noise. This causes errors and uncertainty in the estimated orientation.
Secondly, as images are sampled, the resolution of the sampling over space (or space and time) reduces the accuracy of the estimation, thereby increasing the error and uncertainty in the estimated orientation.
Thirdly, a related issue is that for image features having texture, the orientation of the image feature varies with the scale at which the orientation of the image is analysed. For example, a line which is straight when viewed at a large scale, but curved when viewed at a small scale will have a single orientation when analysed at a coarse resolution, but will have a continuously varying orientation when analysed at a fine resolution.
Fourthly, it is necessary to take account of a wide range of types of image features. A feature is locally one-dimensional if the image changes (at the scale at which it is analysed) in one direction only and has no change in the perpendicular direction(s). However, real images include a variety of different features which are not one-dimensional. Such complex features which have more than one dimension are varied and include corners, junctions, crossing image features and other more complicated features. In such cases, there is no universally accepted definition of orientation for such complex features. Even if the orientation of the image feature at the point in question may be defined as the orientation of maximal change (at the scale of the analysis), this definition of orientation does not by itself overcome the problem of providing a proper analysis of such complex features.
Known techniques for analysing a digital image to estimate the orientation of image features involve filtering the image with a plurality of filters which are each oriented in different orientations. The filters are oriented in the respective orientations and produce respective filter responses oriented in those orientations. Subsequently, the orientation of an image feature is estimated from the filter responses. Such known methods exploit the fact that a response depends on the relative orientation of the filter and the feature. For example, if we define orientation to be the normal of the one-dimensional feature (as opposed to the tangent), then the one-dimensional feature produces a large response when filtered by a filter oriented in the same direction as the one-dimensional feature and a small (or zero) response when filtered by a filter oriented in a perpendicular direction. This idea is used to estimate the orientation of the image feature at the point in question.
However, such known methods suffer from the following problems. The first problem arises from the inherent, underlying assumption that the feature is locally one-dimensional, whereas in fact orientation cannot be properly expressed for complex features of the types discussed above, simply because there is no universally accepted definition of orientation for such complex features. This means that in practice it is necessary to ascribe an uncertainty measure which expresses the confidence in the estimated orientation.
Overwhelmingly, existing techniques for estimating orientation assume that the point of interest corresponds to a one-dimensional feature, for example by detecting an edge, that is a step change, or a line, that is a change (positive-going or negative-going) and then an opposite change, typically over a three-by-three window. However, in many applications, this is far too restrictive. For example, this type of technique is insufficient for complex features of the type described above or to obtain the overall orientation of large features at locations away from the edge of the feature, such as at the centre of a broad line.
A second problem results from the filters being oriented along a number of specific orientations. In practice, image features may have any orientation, not only aligned with one of the filters but also at any orientation in between. Therefore, there arises the problem of how to weight the orientations in between. In known techniques, the weighting of intermediate orientations is often guessed or approximated. This can lead to difficulties in obtaining the correct weighting and error and uncertainty in the estimated orientation. In addition, this can create difficulty in ascribing a sensible and reliable uncertainty measure to the estimated orientation.
According to the present invention, there is provided a method of analysing a digital image, the method comprising, in respect of each point in a target region of the image: filtering the image with a steerable filter to produce a plurality of filter responses derived by the same filter characteristic oriented in different orientations; estimating the orientation of an image feature from the filter responses of the steerable filter; and calculating an error measure between (a) the theoretical responses of the steerable filter to a one-dimensional feature oriented in the estimated orientation, and (b) the actual responses of the steerable filter.
Such a method provides a number of advantages over the known techniques summarized above. In particular, it allows precise estimation of orientation of the image feature at the point in question irrespective of the actual orientation, using a weighting which is accurate for all orientations. In addition, the error measure of the present invention provides a reliable and robust measure of the uncertainty in the estimated orientation by indicating precisely the extent to which the image feature is one-dimensional and hence the extent to which it is meaningful to assign an estimated orientation to the feature. Such an uncertainty measure is important, because the estimated orientation is only valid if the image feature is one-dimensional, whereas at a complex feature or for a noisy response, the estimation of orientation should not be trusted. These points will now be explained in more detail. The advantages are achieved by exploiting the properties of a particular class of filter, namely a steerable filter. A steerable filter is a filter bank which comprises a set of basis filters at different orientations. Each basis filter has the same filter characteristic relative to its respective orientation. In other words, the basis filters may be thought of as rotated copies of each other. The basis filters also have the property that a filter with the same filter characteristic in any arbitrary orientation may be synthesized by a linear combination of the basis filters. Consequently, a steerable filter has the property that the output filter responses are capable of synthesizing a filter response in any arbitrary orientation with the same filter characteristic relative to that orientation by taking a linear combination of the output filter responses. The process of taking a linear combination of the filter responses to synthesize a filter response in a particular orientation is known as "steering" the filter to that orientation.
The properties of a steerable filter mean that the estimation of the orientation of an image feature from the filter responses is exact, irrespective of the actual orientation of the image feature. This follows from the fact that the filter may be steered to any orientation so the filter characteristic in every orientation is known.
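By way of illustration, the following Python/NumPy sketch demonstrates the steering property on the simplest steerable filter, a pair of first-derivative-of-Gaussian basis filters of the kind described by Freeman and Adelson. This is not the cos³ basis of the preferred embodiment, and the kernel size and sigma are arbitrary illustrative values.

```python
import numpy as np
from scipy.ndimage import convolve

def g1_basis_kernels(size=9, sigma=1.5):
    """First-derivative-of-Gaussian basis pair: the textbook example of a
    steerable filter, used here only to illustrate the steering property."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    gx = -x / sigma**2 * g          # basis filter oriented at 0
    gy = -y / sigma**2 * g          # basis filter oriented at 90 degrees
    return gx, gy

def steered_response(image, theta, size=9, sigma=1.5):
    """Synthesize the response of the filter oriented at `theta` as a linear
    combination of the two basis-filter responses (the "steering" step)."""
    gx, gy = g1_basis_kernels(size, sigma)
    rx = convolve(image, gx, mode="nearest")
    ry = convolve(image, gy, mode="nearest")
    return np.cos(theta) * rx + np.sin(theta) * ry
```

The same linear-combination principle applies to the basis responses of the cos³ and |cos³| sub-filters of the preferred embodiment, except that more basis orientations and different interpolation weights are required.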
Furthermore, the properties of the steerable filter result in the error measure calculated in accordance with the present invention being a meaningful measure of the uncertainty in the estimated orientation. This follows from the observation that the theoretical responses of the steerable filter to a one-dimensional feature are predictable. In particular, the theoretical responses of the steerable filter to a perfectly one-dimensional feature when it is steered to all possible orientations correspond exactly to the angular component of the filter characteristic in the frequency domain. In other words, for a perfectly one-dimensional feature, the actual responses of the steerable filter will be the theoretical responses, so the error measure will be zero. In contrast, for a feature which is not one-dimensional, the actual responses of the steerable filter will vary from the theoretical responses and the error measure will represent the extent to which the feature is not one-dimensional.
The error measure calculated in accordance with the present invention is a measure of the error between the theoretical responses and the actual responses, and so is an exact analytical measure of the extent to which the feature is one-dimensional. The error measure is sensitive to uncertainty in the estimated orientation resulting from the image feature at the point in question being a complex feature of the type described above, for example a corner or a junction. The error measure is also sensitive to uncertainty due to noise. The error measure may therefore be used to attribute a confidence to the interpretation that a feature whose existence and orientation has been detected is in fact locally one-dimensional, rather than locally two-dimensional.
The error measure is independent of the type of the underlying feature. The present invention is capable of dealing with any type of feature. Thus, the present invention does not require an explicit model to be constructed for any particular type of image feature, because the error measure will identify any type of feature which is not locally one-dimensional.
The error measure is independent of the orientation of the feature at the point in question. This is due to the use of a steerable filter as discussed above.
The error measure is independent of the resolution of the steerable filter. This allows the steerable filter to be designed to analyse the image at any scale, depending on what scale is appropriate for the image features of interest.
Similarly, the present invention provides the advantage that the method may be performed in respect of each of a plurality of steerable filters, each having a different resolution. Such implementation of steerable filters in a multi-resolution framework allows the orientation of image features to be simultaneously estimated at plural scales.
A further advantage of the combination of invariance with scale and with the type of feature is that the error measure may be calculated at all locations across a feature, not just its edges. This is particularly important when the features are broad relative to the resolution of the image data.
This allows the orientation to be estimated at a plurality of different scales, as is appropriate for image features having textures, as described above. Furthermore, an error measure is derived for the estimation of orientation at each scale.
A plurality of different types of error measure may be used. Examples of suitable error measures are given in the detailed description of an embodiment of the invention below.
Preferably, the step of calculating an error measure includes normalising said actual responses of the steerable filter before calculation of said error measure.
By normalising the actual responses of the steerable filter, the error measure may be normalised across the image irrespective of the magnitude of the filter responses of the steerable filter.
In general, the present invention is independent of the shape of the filter provided that it is a steerable filter. Accordingly, the present invention may be implemented with a steerable filter of any shape. This allows the selection of a filter with an appropriate shape for the image features of interest. Specific examples are given in the detailed description of a preferred method given below. A preferred form of steerable filter is a local energy filter comprising two steerable sub-filters in quadrature.
Such a local energy filter is in itself known. To filter an image with a local energy filter, firstly the image is filtered separately by the two sub-filters and secondly the local energy filter response is calculated as the amplitude of the responses of the two sub-filters in quadrature, that is as the square root of the sum of the squares of the responses of the two sub-filters.
Preferably, the step of filtering the image is performed in the frequency domain. This simplifies the implementation of the steerable filter and allows the steerable filter to be implemented with a high computational efficiency. Preferably, the steerable filter has a polar-separable filter characteristic in the frequency domain. This also allows the steerable filter to be implemented with a high computational efficiency.
Advantageously, the estimation of orientation uses a vector-sum technique in which the vector-sum of the responses of the steerable filter is calculated and the orientation of the vector-sum is taken as the estimated orientation. Such a vector-sum technique is known in itself. When applied to the present invention in combination with the use of a steerable filter, it has the advantage of providing an accurate estimation of orientation. This is because the vector-sum method gives the precise analytical solution for perfectly locally linear features, thereby giving the exact orientation subject to any noise present. However, in principle, any other technique for estimating the orientation of the image feature from the filter responses could be applied.
In most applications, the target region of the image to which the method of the present invention is applied is the entire image. However, in principle, the present invention could equally be applied to a target region which constitutes merely a portion of the entire image.
Conveniently, the present invention may be implemented by a computer program executable on a computer system, such as a conventional PC. On execution of the computer program, the computer performs the method. Therefore, in accordance with further aspects of the present invention there is provided a computer program, a storage medium storing the computer program in a form readable by a computer system, or a computer system loaded with the computer program. However, in principle, the method could equally be implemented in hardware or a combination of hardware and software as appropriate for the specific application.
The error measure may be used in a variety of different ways. The error measure may simply be provided as a data set accompanying the data set representing the estimated orientations. Alternatively, the error measure and/or the estimated orientations may be further processed. For example, the error measure may be thresholded to indicate areas of high uncertainty.
The present invention has a wide range of applications. In general, it may be applied in any situation where it is desired to estimate the orientation of image features in a digital image. For example, the present invention may be applied to diverse technologies such as image matching, stereo vision and object tracking. It may be applied to feature classifiers of the type which distinguish between features having different dimensions. It may be applied in a "blob detector", which is an instance of a feature classifier used to detect locally round structures ("blobs"), typically in medical applications, for example calcifications in mammograms or structures such as tumours in three-dimensional and four-dimensional images from systems such as MRI systems, CT systems and PET systems.
A detailed description of a preferred method in accordance with the present invention will now be given by way of non-limitative example with reference to the accompanying drawings, in which:
Fig. 1 is a flowchart of the method;
Fig. 2 is a flowchart showing step 1 of the method in more detail;
Fig. 3 is a flowchart showing step 3 of the method in more detail;
Fig. 4 shows a first synthetic image;
Figs. 5 and 6 show, respectively, two types of error measure obtained for the first image of Fig. 4 using the preferred method;
Fig. 7 shows the thresholding of the error measure of Fig. 5;
Fig. 8 shows the thresholded error measure overlaid on the first image of Fig. 4;
Fig. 9 shows a second synthetic image;
Fig. 10 shows an error measure obtained for the second image of Fig. 9 using the preferred method; and
Fig. 11 shows the error measure of Fig. 10 after thresholding, overlaid on the second image of Fig. 9.
For the sake of simplicity, the hereinafter described embodiment is described with reference to a two-dimensional image. However, the present invention is equally applicable to three-dimensional images representing points in three spatial dimensions and to three-dimensional and four-dimensional images representing points in two and three spatial dimensions, respectively, and in time as an additional dimension. The hereinafter described embodiment may be generalised to such three-dimensional and four-dimensional images.
The preferred method is shown overall in Fig. 1. The method is conveniently implemented by a computer program executed on a computer system, for example a conventional PC. The examples described below used the software known as "Matlab" (trade mark), but almost any modern programming language could alternatively be used.
Firstly, the overall nature of the steps of the method shown in Fig. 1 will be described. The method is performed to analyse a digital image I. The image I consists of a value in respect of each point of the image I. The steps of the method are performed in respect of each point of the image, although in principle the method could be performed for a target region which is a portion of the image.
In step 1, the image I is filtered by a steerable filter. This produces a plurality of filter responses R_n, each representing the response for a filter characteristic oriented in a respective orientation. The responses R_n each consist of the value of the response in respect of each point of the image I.
In step 2, the estimated orientation θ of an image feature at each point of the image I is estimated from the filter responses R_n.
In step 3, an error measure E is calculated in respect of each point of the image I. The error measure E represents the error between (a) the theoretical responses of the steerable filter to a one-dimensional feature oriented in the estimated orientation θ estimated in step 2, and (b) the actual responses of the steerable filter.
In step 4, which is optional, post-processing is performed.
Next, the individual steps of the method shown in Fig. 1 will be described in detail.
Step 1 uses a steerable filter. Steerable filters are in themselves known. For example, techniques for designing and using steerable filters which may be applied to the present invention are disclosed in W. T. Freeman and E. H. Adelson, "The design and use of steerable filters", IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 13, no. 9, pp. 891-906, September 1991.
In general, a steerable filter comprises a set of basis filters each oriented in a different orientation. Therefore, a steerable filter may be thought of as a filter bank. Each basis filter has the same filter characteristic relative to its respective orientation. The basis filters have the property that a filter with the same filter characteristic as the basis filters but oriented in any arbitrary direction may be synthesised by an appropriate linear combination of the basis filters. Thus, the basis filters are not unique, in the sense that the same steerable filter could be represented by more than one set of basis filters. Similarly, a filter response in any arbitrary orientation may be synthesised by taking a linear combination of the output filter responses. Steerable filters have the advantage that the filter characteristic is known exactly in all directions, which property is used to advantage in the present invention.
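By way of illustration only, and not as part of the original disclosure, the following Python/NumPy sketch shows one standard way of obtaining steering weights for basis filters whose angular profile is cos³, by enforcing the steerability constraint on its angular harmonics in the manner of Freeman and Adelson; the function name and the use of a small least-squares solve are assumptions made for the sketch, and the even |cos³| sub-filter described below would require its own (different) set of harmonics.
import numpy as np

def steering_weights(theta, basis_angles, harmonics=(1, 3)):
    # Solve for weights k_j such that, for every angular harmonic m present in the
    # cos^3 profile (cos^3 x = (3 cos x + cos 3x)/4, so m = 1 and m = 3),
    # sum_j k_j * exp(i*m*theta_j) = exp(i*m*theta).  This is the steerability
    # constraint described by Freeman and Adelson.
    theta_j = np.asarray(basis_angles, dtype=float)
    rows, rhs = [], []
    for m in harmonics:
        rows.append(np.cos(m * theta_j)); rhs.append(np.cos(m * theta))
        rows.append(np.sin(m * theta_j)); rhs.append(np.sin(m * theta))
    k, *_ = np.linalg.lstsq(np.vstack(rows), np.asarray(rhs), rcond=None)
    return k  # one weight per basis filter

# Example: steer four cos^3 basis filters (at 0, pi/4, pi/2, 3*pi/4) to 0.3 rad;
# the steered response is then sum_j k[j] * (response of basis filter j).
k = steering_weights(0.3, [n * np.pi / 4 for n in range(4)])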
In general, the steerable filter could be of any type and have any filter characteristic. However, in the preferred method, the steerable filter is a local energy filter. Local energy filters are known in themselves, for example from M. C. Morrone and R. A. Owens, "Feature Detection From Local Energy", Pattern Recognition Letters, 1987, Vol. 6, pp. 303-313, the disclosure of which may be applied to the present invention. A local energy filter comprises two sub-filters in quadrature, i.e. the sub-filters are 90° out of phase. One of the sub-filters is odd-symmetric and the other of the sub-filters is even-symmetric. Accordingly, the amplitude of the responses of the two sub-filters in quadrature represents a response which is the local energy of the image being filtered. In accordance with the present invention, the two sub-filters are each steerable so that the local energy filter as a whole is steerable.
There will now be described the filter characteristic of the steerable filter in the preferred method, or to be more specific the filter characteristic of the basis filters of the steerable filter. To increase the computational efficiency, the filter characteristics of the two sub-filters and hence of the steerable filter as a whole are polar-separable in the frequency domain. The radial and angular components of the filter characteristics in the frequency domain are as follows.
The radial component could in principle have any shape because the method is independent of the exact shape of the radial component, provided that the two sub-filters form a quadrature pair. In the preferred method, the shape of the radial components of both sub-filters is the square of a cosine on a logarithmic scale. The shape is normalised to vary the radial position of the peak and to place the half-power points at the desired bandwidth. This provides variation of the frequency components of the image to which the filter is responsive, thereby varying the scale at which the detection of orientation is performed. The scale is varied in accordance with the scale of the desired image features which are to be analysed.
Such a log-cosine shape is preferred because of its good reconstruction properties. However, any other shape is possible, for example a log-Gabor, which is popular in image analysis although it has worse reconstruction properties than the log-cosine shape because it is highly non-orthogonal.
The angular component could in principle have any shape provided that the condition of providing a steerable filter is met. In the preferred method, the shape of the angular component of the odd-symmetric sub-filter is the cube of a cosine (cos³) and the shape of the angular component of the even-symmetric sub-filter is the modulus of the cube of a cosine (|cos³|). As a result, the sub-filters fulfil the requirements of being steerable.
In particular, the odd-symmetric sub-filter consists of a number N0 = 4 of cos³ basis filters oriented over a semi-circle at orientations of 0, π/4, π/2 and 3π/4, respectively. The even-symmetric sub-filter consists of a number Ne = 5 of |cos³| basis filters oriented over a semi-circle at orientations of 0, π/5, 2π/5, 3π/5 and 4π/5, respectively.
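As a hedged illustration of how such a polar-separable basis filter might be sampled on the DFT grid, the following Python/NumPy sketch combines a log-cosine-squared radial profile with a cos³ (or |cos³|) angular profile; the exact radial normalisation and half-power placement, the factor of i applied to the odd filter, and the function name are assumptions made for the sketch rather than details taken from the text.
import numpy as np

def basis_filter_fft(shape, f0, bandwidth_octaves, theta_j, odd=True):
    # Frequency response of one polar-separable basis filter sampled on the DFT
    # grid of an image of the given shape.  Radial part: squared cosine on a
    # log-frequency axis peaking at f0 cycles/pixel, with half-power points roughly
    # bandwidth_octaves/2 either side of the peak (assumed convention).
    # Angular part: cos^3 for the odd filter, |cos^3| for the even filter.
    rows, cols = shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    r = np.hypot(fx, fy)
    phi = np.arctan2(fy, fx)

    r_safe = np.where(r > 0, r, f0)              # avoid log of zero at DC
    logr = np.log2(r_safe / f0)
    radial = np.cos(np.pi * logr / (2.0 * bandwidth_octaves)) ** 2
    radial[np.abs(logr) > bandwidth_octaves] = 0.0
    radial[r == 0] = 0.0                         # no DC response

    ang = np.cos(phi - theta_j) ** 3
    if odd:
        # A factor of i makes the spatial filter real and odd-symmetric
        # (an implementation assumption; the text only specifies the cos^3 shape).
        ang = 1j * ang
    else:
        ang = np.abs(ang)
    return radial * ang

# Odd sub-filter: 4 cos^3 basis filters; even sub-filter: 5 |cos^3| basis filters.
odd_bank  = [basis_filter_fft((256, 256), 0.1, 1.0, n * np.pi / 4, odd=True)  for n in range(4)]
even_bank = [basis_filter_fft((256, 256), 0.1, 1.0, n * np.pi / 5, odd=False) for n in range(5)]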
As described in more detail below, the present invention uses the fact that the theoretical response of the steerable filter steered to all orientations to a one-dimensional feature oriented in a given orientation has the same shape as the angular component of the filter characteristic in the frequency domain. This follows from the fact that the filter is steerable. For the filter of the preferred method this shape is the modulus of the cube of a cosine (|cos³|), as a result of the local energy filter response being the amplitude of the responses of the sub-filters in quadrature.
Step 1 of filtering the image I comprises a number of steps which are shown in Fig. 2 and will now be described in detail. The image I is filtered in the frequency domain.
In step 11, the original image I is Fourier transformed to give the transform of the image F(I) in the frequency domain. In step 12, to effect the filtering by the sub-filters of the steerable filter, the transform of the image F(I) is multiplied by the frequency coefficients of the basis filters of the two sub-filters. This produces a number Nt = N0 + Ne of intermediate results which each represent the transform of the image filtered by one of the basis filters of one of the sub-filters in the frequency domain. As the steerable filter is a local energy filter, at a subsequent stage (in fact in step 15 as described below) it is necessary to calculate the local energy response as the amplitude of the responses of the two sub-filters. To allow this to be done, the responses of the basis filters of the odd-symmetric sub-filter need to be aligned with the responses of the basis filters of the even-symmetric sub-filter. To achieve this, in step 13, the intermediate results of the basis filters of the even-symmetric sub-filter are steered to the orientations of the basis filters of the odd-symmetric sub-filter, that is at orientations of 0, π/4, π/2 and 3π/4, respectively. As the sub-filters are steerable, this is achieved simply by taking the appropriate linear combinations of the intermediate results of the basis filters of the even-symmetric sub-filter. As an alternative, it would equally be possible to steer the intermediate results of the basis filters of the odd-symmetric sub-filter to the orientations of the basis filters of the even-symmetric sub-filter. In principle, it would also be possible to steer the intermediate results of the basis filters of both sub-filters to some other set of orientations suitable for estimating the orientation of the image feature in step 2, although this would require additional computation.
Thus, step 13 steers the Ne even-symmetric intermediate results into N0 even-symmetric intermediate results. In step 14, each of the N0 even-symmetric results produced by step 13 and the N0 odd-symmetric intermediate results are inverse-Fourier transformed. This generates N0 odd-symmetric response maps O_n in respect of the basis filters of the odd-symmetric sub-filter and N0 even-symmetric response maps E_n in respect of the basis filters of the even-symmetric sub-filter. Each of the response maps O_n and E_n consists of the response of the respective basis filter of the respective sub-filter, oriented in a respective orientation, in respect of each of the points of the image I. In step 15, the local energy responses in respect of each orientation of the basis filters of the sub-filters are calculated from the odd-symmetric response and the even-symmetric response. As previously discussed, local energy is defined as the amplitude of the responses of the two sub-filters in quadrature. Therefore, the local energy response may be calculated as the square root of the sum of the squares of the responses of the two sub-filters according to equation (1):
R_n = \sqrt{O_n^2 + E_n^2}    (1)
The local energy response R_n is calculated in respect of each point in the image I and in respect of each of the orientations of the basis filters of the odd-symmetric and even-symmetric sub-filters. Therefore, step 15 produces N0 response maps R_n for the respective orientations.
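The following Python/NumPy sketch summarises steps 11 to 15 under the assumption that the even-symmetric basis responses have already been steered to the odd-filter orientations (step 13 is therefore not shown); the function name and the use of fft2/ifft2 are illustrative choices, not the Matlab implementation referred to above.
import numpy as np

def local_energy_maps(image, odd_bank, even_bank_aligned):
    # Steps 11 to 15 of Fig. 2 in sketch form.  It is assumed that odd_bank and
    # even_bank_aligned hold basis-filter frequency responses at the same N0
    # orientations (i.e. the even responses have already been steered, step 13).
    F = np.fft.fft2(image)                           # step 11: Fourier transform
    R = []
    for H_odd, H_even in zip(odd_bank, even_bank_aligned):
        O_n = np.real(np.fft.ifft2(F * H_odd))       # steps 12 and 14
        E_n = np.real(np.fft.ifft2(F * H_even))
        R.append(np.sqrt(O_n ** 2 + E_n ** 2))       # step 15: equation (1)
    return R                                         # N0 local energy maps R_n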
The use of a local energy filter is advantageous, because local energy is independent of the type of the image feature at the point in question. The response may be thought of as the odd-symmetric sub-filter detecting the odd-symmetric component of a feature, such as an edge, and the even-symmetric sub-filter detecting the even-symmetric component of a feature, such as a line. As a result, a local energy filter can reliably detect any type of image feature. The use of a local energy filter has particular advantage when used in the present invention, because the calculated error measure allows one to distinguish any type of one-dimensional feature from any type of feature having two or more dimensions.
That being said, the use of a local energy filter is not essential and, in principle, the present invention could be applied with any other type of steerable filter. For example, the present invention could be applied using either the odd-symmetric sub-filter described above by itself as the steerable filter or the even-symmetric sub-filter described above by itself as the steerable filter. In this case, step 1 of filtering the image I would be performed in basically the same manner as described in detail with reference to Fig. 2, except that step 12 would be performed for only the basis filters of the steerable filter being used and steps 13 and 15 would be omitted.
In step 2, the estimated orientation θ of an image feature at each point of the image I is estimated from the filter responses R_n of the steerable filter. The estimation is performed using a vector-sum technique. Such a vector-sum technique is known in itself, for example from the textbook B. Jähne, "Digital Image Processing", Springer, the portions of which concerning a vector-sum technique for estimating orientation may be applied to the present invention. In such a technique, in respect of each point of the image I, a vector-sum of the responses R_n of the steerable filter is performed and the estimated orientation of the image feature is taken as the orientation of the vector-sum. In practice, this may be implemented by performing a calculation in accordance with equation (2):
\theta = \tfrac{1}{2} \arctan\!\left( \operatorname{Im} \sum_{n=0}^{N_0-1} R_n \, e^{i 2\theta_n} \Big/ \operatorname{Re} \sum_{n=0}^{N_0-1} R_n \, e^{i 2\theta_n} \right), \quad \theta_n = n\pi/N_0    (2)
In equation (2), the summation represents the vector-sum. The index n represents the respective orientations. The exponential term represents the response expressed as a unit vector because of the use of imaginary numbers. The inverse tangent has the effect of taking the orientation of the vector-sum. The two constants with values ½ and 2 in equation (2) merit specific mention.
The constant with value 2 inside the exponential is used to double the angle of the orientation of the respective basis filters. This is necessary in order to have an unbiased distribution of filter angles around the full circle, since the basis filters used are distributed evenly over a semi-circle. The constant with value ½ is used to halve the result of the inverse tangent in order to shift the resulting values into the range of -π/2 to π/2 rather than the range -π to π. The reason for this is that the orientation for one-dimensional features in two-dimensional images is periodic with π, rather than 2π. For edges, the sign can be used to extend this range to 2π.
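A minimal Python/NumPy sketch of equation (2), assuming the N0 = 4 local energy response maps R_n produced by step 1, is as follows; np.angle plays the role of the inverse tangent of the vector-sum.
import numpy as np

def estimate_orientation(R, N0=4):
    # Equation (2): vector-sum of the responses R_n with each basis orientation
    # theta_n = n*pi/N0 doubled inside the exponential, and the angle of the sum
    # halved, so that the estimate lies in the range -pi/2 to pi/2.
    V = sum(R_n * np.exp(1j * 2.0 * (n * np.pi / N0)) for n, R_n in enumerate(R))
    return 0.5 * np.angle(V)   # per-pixel estimated orientation theta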
The use of a vector-sum technique is preferred but, in principle, it would be equally possible to apply any other technique for estimating the orientation of image features at the points of the image I from the filter responses R_n of the steerable filter.
Such a vector-sum technique can be shown to produce the exact local orientation in the case of a one-dimensional feature. The minimum number of orientations required to do this is three. In the preferred method, the orientations used are the orientations of the N0 basis filters of the odd-symmetric sub-filter, that is four orientations. However, in principle the responses of the steerable filter could be steered to any other set of orientations sufficient to perform the vector-sum.
In step 3, there is calculated the error measure E between (a) the theoretical responses of the steerable filter to a one-dimensional feature oriented in the estimated orientation θ and (b) the actual responses of the steerable filter. Step 3 is performed using the theoretical and actual responses in respect of each of the orientations of the response maps R_n. The error measure E is calculated in respect of each point of the image I.
The estimated orientation θ is estimated in step 2 on the underlying assumption that the image feature at the point in question is basically one-dimensional. Thus, the error measure E provides a measure of the extent to which the actual response is not one-dimensional, because it is a measure of the extent to which the actual response differs from the theoretical response to a one-dimensional feature. Thus, the error measure provides a measure of the uncertainty in the estimated orientation θ. The error measure E is exact and reliable as a result of the fact that the theoretical responses of the steerable filter at all possible orientations to a one-dimensional feature at all possible orientations are entirely predictable. In fact, the theoretical response to a one-dimensional feature has the same shape as the angular component of the filter characteristic of the steerable filter in the frequency domain. In the preferred method, this is the modulus of the cube of a cosine (|cos³|), as explained above.
The calculation of the error measure in step 3 comprises a number of steps which are shown in Fig. 3 and will now be described in detail. Step 3 is performed in respect of each point in the image I, using the response R_n for each orientation. Step 3 involves two pre-processing steps 31 and 32. In the first pre-processing step 31, the theoretical filter response (|cos³|) is phase-shifted to align it with the estimated orientation θ. In the second pre-processing step 32, the actual responses R_n of the steerable filter are normalised so that they can be compared meaningfully to the theoretical response. The maximum amplitude of the theoretical response (|cos³|) is of course 1, so the actual responses can be normalised in two stages as follows.
The first stage is to normalise the actual responses R_n by the maximum sub-band coefficient amplitude, so that the maximum value is 1 and can be properly compared to the maximum amplitude of the theoretical response (|cos³|).
The second stage is to normalise by the actual amplitude of the theoretical response (|cos³|) at the actual orientations of the responses R_n, that is at the orientations of 0, π/4, π/2 and 3π/4, respectively. The explanation for this is that the sub-band coefficients are computed at fixed orientations, whereas the local estimated orientation θ may be at any orientation, not necessarily aligned with one of the fixed orientations of the responses R_n. Since the maximum amplitude of the theoretical response (|cos³|) is aligned with the estimated orientation θ in the pre-processing step 31, the values of the theoretical response (|cos³|) at the fixed orientations of the responses R_n may be less than one. Consequently, the sub-band coefficients (after normalisation in the first stage by the maximum sub-band coefficient amplitude) are normalised in the second stage to the maximum value of the model at the fixed orientations of the responses R_n.
Pseudo code for finding the normalisation terms is as follows:
% theta_l is the local estimated orientation at the point in question
% phi_0 contains the fixed orientations of the filters
phi_0 = [0, pi/4, pi/2, 3*pi/4];
N_0 = length(phi_0);
% compute the values of a cos^3 with its peak aligned with theta_l,
% sampled at the orientations phi_0
afCos3NormTerm = zeros(1, N_0);
for idxAngle = 1:N_0
    afCos3NormTerm(idxAngle) = cos(phi_0(idxAngle) - theta_l)^3;
end
% find the max, which depending on the local orientation might not be 1
afNormTerm = max(abs(afCos3NormTerm));
% R_n contains the N_0 = 4 sub-band coefficient amplitudes (the local energy
% responses) at the orientations phi_0; find their maximum absolute value
% and normalise by afNormTerm
afFinalNormTerm = max(abs(R_n)) / afNormTerm;
R_Normalised = R_n / afFinalNormTerm;
Lastly, in step 33, the error measure E is calculated from the theoretical responses derived from pre-processing step 31 and the normalised actual responses obtained from pre-processing step 32.
In general, it is possible to calculate any error measure which represents the error between the theoretical and actual responses.
Three different types of suitable error measure will now be described, although these three types of error measure are merely intended to be illustrative and are not exhaustive.
The first possible type of error measure E_s is based on the sum of the squares of the differences between (a) the theoretical responses and (b) the actual responses at each of the orientations of the responses R_n. Thus, the error measure E_s may be calculated using equation (3):
E_s = \sqrt{ \tfrac{1}{2 N_0} \sum_{n=0}^{N_0-1} \left( R_n(\mathrm{normalised}) - T_n \right)^2 }    (3)
In equation (3), the term R_n(normalised) is the actual response R_n after normalisation in pre-processing step 32. In particular, it corresponds to the term R_Normalised in the pseudo code set out above. The term T_n is the theoretical response obtained in the pre-processing step 31. It corresponds to the term afCos3NormTerm in the pseudo code set out above.
The constant ½ in equation (3) is present merely to adjust the range of possible values of the error measure E_s. The summation may be divided by the number N0 of the responses R_n to obtain a measure in the range from 0 to 2, with 0 corresponding to a perfect fit and 2 corresponding to maximum disagreement. The maximum disagreement is two rather than one, because the filter response can vary in the range from +1 to -1. In practice, the summation may be further divided by 2 in order to normalise the measure to be in the range from 0 to 1.
A further point arises because local energy filters are being used. Consequently, the appropriate model for the theoretical responses is the modulus of the cube of a cosine (|cos³|). This means that the maximum difference is 1, rather than 2, so the division by 2 inside the square root is not necessary. Thus, the factor of 2 in equation (3) is optional.
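A minimal Python/NumPy sketch of the |cos³| model of pre-processing step 31 and of equation (3) is given below, assuming the normalised responses have been computed per point as in the pseudo code above; exposing the optional factor of 2 as a parameter is an illustrative choice.
import numpy as np

def theoretical_responses(theta_est, N0=4):
    # |cos^3| model of the local energy response, aligned with the estimated
    # orientation (pre-processing step 31) and sampled at the fixed orientations.
    return [np.abs(np.cos(n * np.pi / N0 - theta_est)) ** 3 for n in range(N0)]

def error_measure_Es(R_normalised, T, N0=4, include_half=True):
    # Equation (3).  include_half controls the optional division by 2 discussed
    # above; with a local energy filter it may be omitted.
    diff2 = sum((Rn - Tn) ** 2 for Rn, Tn in zip(R_normalised, T))
    return np.sqrt(diff2 / ((2.0 * N0) if include_half else N0))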
The first type of error measure E_s works very well for finding general areas of features which are one-dimensional when using a local energy filter. It is accurate to the level of a single pixel when using a simple odd-symmetric filter or an even-symmetric filter instead of a local energy filter.
The second type of error measure E_R is a measure of the error between the vector-sum of the theoretical responses and the vector-sum of the actual responses. For example, the measure may be the ratio of the lengths of the two vector-sums. Such an error measure E_R may be calculated as follows. Firstly, the vector-sum V_R of the actual responses is calculated in accordance with equation (4):
V_R = \sum_{n=0}^{N_0-1} R_n \, e^{i 2\theta_n}    (4)
It will be noted that equation (4) is the inner sum of equation (2). Secondly, the vector-sum V_T of the theoretical responses is calculated in accordance with equation (5):
V_T = \sum_{n=0}^{N_0-1} T_n \, e^{i 2\theta_n}    (5)
The error measure is equal to the ratio of the vector-sums calculated by equations (4) and (5), that is |V_R|/|V_T|.
The second type of error measure E_R is in the range from 0 to 1, 0 corresponding to maximum disagreement and 1 corresponding to a perfect fit.
However, the second error measure E_R has the disadvantage that it does not identify an error in the case that the two vector-sums are of the same length, but pointing in different directions. That being said, in practice it gives good results.
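A minimal Python/NumPy sketch of the second error measure, as the ratio of the magnitudes of the vector-sums of equations (4) and (5), is as follows; the guard against division by zero is an implementation assumption.
import numpy as np

def error_measure_ER(R, T, N0=4):
    # Ratio of the magnitudes of the vector-sums of equations (4) and (5):
    # 1 corresponds to a perfect fit, 0 to maximum disagreement.
    phases = [np.exp(1j * 2.0 * np.pi * n / N0) for n in range(N0)]
    V_R = sum(Rn * p for Rn, p in zip(R, phases))
    V_T = sum(Tn * p for Tn, p in zip(T, phases))
    return np.abs(V_R) / np.maximum(np.abs(V_T), np.finfo(float).eps)  # guard against /0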
The third type of error measure E_P is a measure of the sum over orientations of the error between the theoretical response in one orientation and the actual response in the same orientation. This may be considered as the projection of the N0 vectors represented by the actual responses onto the N0 vectors represented by the theoretical responses. The third type of error measure E_P may be calculated by the combination of the techniques used to calculate the first and second types of error measure E_s and E_R. In particular, in respect of each orientation, the actual response is projected as a component vector onto the theoretical response as a component vector and a component error measure is calculated as the magnitude of the error vector between the actual and theoretical responses (the square root of the sum of the squares of the differences between the x, y and z components of the two component vectors). Then, the component error measures for each orientation are summed over the respective orientations to derive the overall error measure E_P.
In summary, step 3 produces an error measure E in respect of each point in the image I representing the uncertainty in the estimated orientation θ derived in step 2. The estimated orientation θ and the error measure E may be used in a plurality of ways. In step 4, the estimated orientations may be subjected to post-processing as follows, although this is optional.
In particular, in step 4, the error measure E is thresholded on the basis of the magnitude of the response R of the steerable filter. That is to say, a suitable threshold level is identified for the response R and the error measure E at each point in the image is classified according to whether the response R at that point is above or below the threshold. This has the effect of excluding from consideration areas of the image where a high value of the error measure E is caused by noise. In other words, only areas with a high local energy response R are considered for further processing.
Another possible type of post-processing in step 4 is to threshold or classify the estimated orientation θ on the basis of the magnitude of the error measure E. The post-processing in step 4 may also include conventional clean-up operations, for example filtering of the estimated orientation θ to remove spurious results.
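A minimal Python/NumPy sketch of this post-processing is given below; taking the maximum response over the N0 orientations as the local energy magnitude, and the particular threshold values, are assumptions made for illustration.
import numpy as np

def junction_candidates(R, E, energy_threshold, error_threshold):
    # Step 4 in sketch form: keep only points whose local energy is above a
    # threshold (excluding noise-dominated areas) and whose error measure is above
    # a threshold (indicating an uncertain orientation estimate).  The intersection
    # of the two masks marks candidate junctions, as illustrated in Figs. 7 and 8.
    max_energy = np.maximum.reduce(R)     # strongest response over the N0 orientations
    return (max_energy > energy_threshold) & (E > error_threshold)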
To illustrate the effectiveness of the present invention, an example of the performance of the method in respect of specific images will now be given with reference to Figs. 4 to 11, the original versions of which were in colour.
Fig. 4 is a synthetic image including several one-dimensional features with a variety of types of junctions therebetween. The preferred method as described above was performed to analyse the image shown in Fig. 4, in particular to derive the estimated orientation θ at each point in the image I and also to derive the first and second types of error measure E_s and E_R.
Fig. 5 shows the first type of error measure E_s. Low values indicate a high degree of certainty in the estimated orientation. Fig. 5 shows how the error measure E_s indicates high certainty in the estimated orientation of one-dimensional features at positions separated from junctions, but uncertainty near junctions between the one-dimensional image features. This is due to the fact that such junctions are locally not one-dimensional.
Fig. 6 shows the second type of error measure E_R. High values of the error measure E_R represent a high degree of certainty. Again, Fig. 6 shows how the error measure E_R indicates high certainty in the estimated orientation of one-dimensional features at positions separated from junctions, but uncertainty near junctions between the one-dimensional image features.
Fig. 7 shows the error measure E_s of Fig. 5 thresholded with fixed thresholds on the basis of the magnitude of the response R of the steerable filter. Areas of high local energy (R is large) are shown in red in the original version of Fig. 7. Areas with a high error measure E_s are shown overlaid on the areas of high local energy in green in the original version of Fig. 7. It can be seen that the green areas indicating high uncertainty overlap the red areas indicating high local energy only in the regions of junctions. To illustrate this further, Fig. 8 shows the regions where the thresholded error measure E_s and response R overlap, that is regions with a local energy above the threshold and an error measure E_s above the threshold indicating a high degree of uncertainty. These regions are shown in red in the original version of Fig. 8, overlaid on the original image I. Fig. 8 clearly shows how the error measure indicates that there is uncertainty in the estimated orientation near junctions between the one-dimensional features. This illustrates how the error measure of the present invention is sensitive to uncertainty due to the presence of features which are locally not one-dimensional. In general, junctions may be identified as the intersection of the thresholded error measure and local energy.
As a further example, Fig. 9 is a synthetic image consisting of an edge extending vertically down the image with a noisy pixel at the centre. The preferred method as described above was performed to analyse the image shown in Fig. 9, in particular to derive the estimated orientation θ at each point in the image I and also to derive the second type of error measure E_R. Fig. 10 shows the error measure E_R. Fig. 11 shows the regions where the error measure E_R after thresholding and the local energy after thresholding overlap, overlaid on the original image I. These regions are shown in red in the original version of Fig. 11. Thus, in fact, in Fig. 11 only the pixel in the centre of the image I of Fig. 9 is such a region. This indicates a high degree of uncertainty at the position of the noisy pixel in the centre of the image I, which illustrates how the error measure of the present invention is sensitive to uncertainty in the estimated orientation θ caused by noise.
In the above examples the thresholds are fixed, but better results can be obtained with an adaptive, variable threshold.
The preferred method described above analyses the image to detect orientation at a single scale, although that scale may be freely selected.
Alternatively, the orientation may be analysed at a plurality of different scales. This is done by performing the method described above in respect of a plurality of steerable filters, each having a different resolution. For example, the plurality of steerable filters may each have the same filter characteristic except with a different radial component in the frequency domain.
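A minimal sketch of such a multi-resolution arrangement is given below; analyse_at_scale is a hypothetical placeholder for the single-scale pipeline described above, supplied by the caller.
def multi_scale_orientation(image, analyse_at_scale, peak_frequencies):
    # analyse_at_scale(image, f0) is a hypothetical callable that runs the
    # single-scale pipeline with its radial peak at f0; only the radial component
    # of the filter characteristic differs between scales.
    return {f0: analyse_at_scale(image, f0) for f0 in peak_frequencies}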

Claims

1. A method of analysing a digital image, the method comprising, in respect of each point in a target region of the image: filtering the image with a steerable filter to produce a plurality of filter responses derived by the same filter characteristic oriented in different orientations; estimating the orientation of an image feature at the point in question from the filter responses of the steerable filter; and calculating an error measure between (a) the theoretical responses of the steerable filter to a one-dimensional feature oriented in the estimated orientation, and
(b) the actual responses of the steerable filter.
2. A method according to claim 1, wherein the error measure is the square root of the sum of the squares of the differences between (a) said theoretical responses of the steerable filter and (b) said actual responses of the steerable filter.
3. A method according to claim 1, wherein the error measure is a measure of the error between (a) the vector-sum of said theoretical responses of the steerable filter and (b) the vector-sum of said actual responses of the steerable filter.
4. A method according to claim 1, wherein the error measure is a measure of the sum over orientations of the error between (a) a respective said theoretical response of the steerable filter in an orientation and (b) the actual response of the steerable filter in the same orientation.
5. A method according to any one of the preceding claims, wherein said step of calculating an error measure includes the step of normalising said actual responses of the steerable filter before calculation of said error measure.
6. A method according to any one of the preceding claims, wherein the steerable filter is a local energy filter comprising two steerable sub-filters in quadrature.
7. A method according to any of the preceding claims, wherein said step of filtering the image is performed in the frequency domain.
8. A method according to any one of the preceding claims, wherein said steerable filter has a polar-separable filter characteristic in the frequency domain.
9. A method according to claim 8, wherein the steerable filter comprises a set of basis filters which have radial components in the frequency domain shaped as the square of a cosine on a logarithmic scale.
10. A method according to claim 8 or 9, wherein the steerable filter comprises a set of basis filters including basis filters which have odd-symmetric angular components in the frequency domain.
11. A method according to claim 8 or 9, wherein the steerable filter comprises a set of basis filters including basis filters which have angular components in the frequency domain shaped as the cube of a cosine.
12. A method according to claim 8 or 9, wherein the steerable filter is a local energy filter comprising two steerable sub-filters in quadrature, one of the steerable sub-filters comprising a set of basis filters which have angular components in the frequency domain which are odd-symmetric and the other of the steerable sub-filters comprising a set of basis filters which have angular components in the frequency domain which are even-symmetric.
13. A method according to claim 12, wherein said one of the steerable sub-filters comprises a set of basis filters which have angular components in the frequency domain which are shaped as the cube of a cosine and said other of the steerable sub-filters comprises a set of basis filters which have angular components in the frequency domain which are shaped as the modulus of the cube of a cosine, respectively.
14. A method according to any one of the preceding claims, wherein said step of estimating the orientation of an image feature comprises performing a vector-sum of the responses of the steerable filter and taking the orientation of the vector-sum as the estimated orientation of an image feature.
15. A method according to any one of the preceding claims, wherein the method is performed in respect of each of a plurality of steerable filters, each steerable filter having a different resolution.
16. A method according to claim 15, wherein the steerable filter comprises a plurality of basis filters which have radial components in the frequency domain having a bandpass characteristic.
17. A method according to any one of the preceding claims, wherein said target region is the entire image.
18. A method according to any one of the preceding claims, further comprising thresholding the error measure on the basis of the magnitude of the responses of the steerable filter.
19. A computer program executable by a computer system, the computer program, on execution by the computer system, being capable of causing the computer system to execute a method according to any one of the preceding claims.
20. A storage medium storing in a form readable by a computer system a computer program according to claim 19.
PCT/GB2004/005247 2003-12-15 2004-12-15 Estimation of orientation of image features WO2005059834A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GBGB0329010.3A GB0329010D0 (en) 2003-12-15 2003-12-15 Estimation of orientation of image features
GB0329010.3 2003-12-15

Publications (1)

Publication Number Publication Date
WO2005059834A1 true WO2005059834A1 (en) 2005-06-30

Family

ID=30130241

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2004/005247 WO2005059834A1 (en) 2003-12-15 2004-12-15 Estimation of orientation of image features

Country Status (2)

Country Link
GB (1) GB0329010D0 (en)
WO (1) WO2005059834A1 (en)


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5956427A (en) * 1995-06-15 1999-09-21 California Institute Of Technology DFT encoding of oriented filter responses for rotation invariance and orientation estimation in digitized images

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ROSENTHALER L ET AL: "Detection of general edges and keypoints", COMPUTER VISION - ECCV '92. SECOND EUROPEAN CONFERENCE ON COMPUTER VISION PROCEEDINGS SPRINGER-VERLAG BERLIN, GERMANY, 1992, pages 78 - 86, XP002324637, ISBN: 3-540-55426-2 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8170308B2 (en) 2006-05-19 2012-05-01 Koninklijke Philips Electronics N.V. Error adaptive functional imaging
US9058541B2 (en) 2012-09-21 2015-06-16 Fondation De L'institut De Recherche Idiap Object detection method, object detector and object detection computer program
CN111104822A (en) * 2018-10-25 2020-05-05 北京嘀嘀无限科技发展有限公司 Face orientation recognition method and device and electronic equipment
CN111104822B (en) * 2018-10-25 2023-09-19 北京嘀嘀无限科技发展有限公司 Face orientation recognition method and device and electronic equipment

Also Published As

Publication number Publication date
GB0329010D0 (en) 2004-01-14


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

122 Ep: pct application non-entry in european phase