WO2011080081A2 - Image processing - Google Patents

Image processing

Info

Publication number
WO2011080081A2
Authority
WO
WIPO (PCT)
Prior art keywords
image
initial
transformed
unit
convex
Prior art date
Application number
PCT/EP2010/069815
Other languages
French (fr)
Other versions
WO2011080081A3 (en)
Inventor
Kewei Zhang
Antonio Orlando
Elaine Craig Mackay Crooks
Original Assignee
Uws Ventures Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Uws Ventures Ltd. filed Critical Uws Ventures Ltd.
Priority to GB1210137.4A priority Critical patent/GB2488294B/en
Publication of WO2011080081A2 publication Critical patent/WO2011080081A2/en
Publication of WO2011080081A3 publication Critical patent/WO2011080081A3/en

Classifications

    • G06T5/70 Denoising; Smoothing
    • G06T5/20 Image enhancement or restoration using local operators
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/10 Segmentation; Edge detection
    • G06V10/36 Applying a local operator, i.e. means to operate on image points situated in the vicinity of a given point; Non-linear local filtering operations, e.g. median filtering
    • G06V10/40 Extraction of image or video features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V30/168 Smoothing or thinning of the pattern; Skeletonisation
    • G06V30/18143 Extracting features based on salient regional features, e.g. scale invariant feature transform [SIFT] keypoints
    • G06V30/20 Combination of acquisition, preprocessing or recognition functions
    • G06T2207/10004 Still image; Photographic image
    • G06V30/10 Character recognition

Definitions

  • This disclosure relates generally to a method and a system for analysing and manipulating images.
  • The disclosure relates to image processing and/or feature extraction from within an image.
  • The invention may be applied to an image, a domain in an image or a lower dimensional object in an image for the purpose of, for example but not limited to, image restoration, selective enhancement of features of an image, and detection of edges, corners, turning points, crossing points, end points, and other singular or interesting points.
  • This method comprises comparing, for each point or element in an image, the brightness of elements located within a predetermined region centred on the chosen element with the brightness of the element itself, to determine those elements having substantially equal brightness; these elements are regarded as belonging to the same surface in the image. Elements which are associated with a minimum number of equally bright elements within their local region are then located; these elements will lie on or close to an edge in the image. Once the edge elements have been located in this way, standard methods of non-maximum suppression and edge thinning can be applied to find the position of the edge more accurately.
  • UK Patent Application No. GB2218507A describes another method to locate edge and corner features in a digitised image, wherein the image data is digitally processed to detect the positions of any sharp intensity variations present in adjacent ones of the pixels representing said image by comparing the intensity of each primary pixel with those of the secondary pixels which surround it (e.g. by applying a convolution function to the data), calculating whether said primary pixel can be classified as having a constant, an edge-like or a corner-like value, and collating this information to give the positions of both edge and corner features present in said image.
  • the present disclosure provides a method and a system for processing an "initial" image.
  • the term "initial" image or initial input image is used herein for identifying the image on which the present method is applied.
  • the initial image may be the original analogue or digitized image or may be a selected domain in the original image or a lower dimensional object in the original image that has been selected for processing with the present method.
  • The initial image may be a digitized image and the region of interest may be defined as at least one pixel in the digitized image.
  • the method may involve some pre-processing of the initial image as described further down to obtain a pre-processed image.
  • the pre-processed image or the initial image may be used as an input image.
  • the method comprises determining combination image values of at least one region of interest of the input image, wherein the combination image values are each determined based on a combination of two or more regions of the input image and arranged in proximity to the at least one region of interest.
  • The two or more regions may be arranged at opposite positions with respect to the region of interest, may be arranged to enclose the region of interest, or may be arranged on a circumference around the region of interest.
  • Each one of the two or more regions of the image arranged in proximity to the at least one region of interest may be arranged in a convex hull or convex envelope around the region of interest; i.e. an approximated directional convex envelope, the exact convex envelope or the exact directional convex envelope may be used.
  • The combination of two or more regions may comprise determining average image values, i.e. weighted averages of the image values of the two or more regions.
  • the combination may also comprise a numerical approximation of the second derivatives at the region of interest. Other combinations are also possible.
  • The method further comprises selecting as a replacement value the combination value, from the combination values determined above, that best fits a replacement criterion.
  • The replacement criterion can be the lowest value, the highest value or another pre-determined criterion.
  • The method further comprises replacing the region of interest of the input image with the replacement value if the replacement value fits the replacement criterion better than the input image. If, for example, the replacement criterion is the lowest value, the image value of the input image will be replaced if the replacement value is lower than the image value in the region or pixel of interest. If the replacement value is higher, the value of the input image will remain unchanged.
  • The Determining, Selecting and Replacing may be iteratively repeated for a pre-determined, user-identified or automatically defined number of times until a transformed image is obtained.
  • The transformed image may correspond to an exact or an approximate transform of the input image.
  • the transformed image may be output, further analysed or further processed.
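The determine/select/replace iteration described above can be sketched as follows, taking the lowest value as the replacement criterion and averages of opposite neighbour pairs in a 3×3 neighbourhood as the combination values. The helper name and the particular combinations are illustrative assumptions, not the patent's exact scheme; border pixels are left unchanged here so that every candidate is a genuine convex combination of grid values:

```python
import numpy as np

def convex_sweep(img, n_iter=10):
    """Illustrative sketch (not the patent's exact implementation): each
    interior pixel is replaced by the smallest of its current value and
    the averages of the four opposite neighbour pairs in its 3x3
    neighbourhood; border pixels are kept fixed."""
    J = img.astype(float).copy()
    for _ in range(n_iter):
        centre = J[1:-1, 1:-1]
        J[1:-1, 1:-1] = np.minimum.reduce([
            centre,                              # keep value if already smallest
            0.5 * (J[:-2, 1:-1] + J[2:, 1:-1]),  # vertical neighbour pair
            0.5 * (J[1:-1, :-2] + J[1:-1, 2:]),  # horizontal neighbour pair
            0.5 * (J[:-2, :-2] + J[2:, 2:]),     # one diagonal pair
            0.5 * (J[:-2, 2:] + J[2:, :-2]),     # the other diagonal pair
        ])
    return J
```

An affine ramp is a fixed point of this sweep (every pair average equals the centre value), while an isolated bright spike above its surroundings is removed in a single pass, which matches the "replace only if the criterion is better fitted" rule.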
  • The method may further comprise the steps of building a smooth image which is equal to the initial input image almost everywhere, apart from those pixels where discontinuities in the initial image values and/or image features are located. The difference between the initial input image and the smooth image will then display the sought feature.
  • This invention relates to methods comprising the application and definition of a new class of transformations based on the notions of convex hull and convex envelope.
  • the transformations used in this invention are therefore different from the commonly used image transformations, based on the use of convolutions, Fourier and wavelet transforms.
  • The image, the domain in the image or the lower dimensional object in the image may be represented by a function f(i,j) of the two variables i and j.
  • In a grey-scale 8-bit image, the value f(i,j) gives the intensity of the colour at the pixel of coordinates (i,j).
  • A binary image, on the other hand, is represented by a function which takes on only the values 0 (black) and 1 (white).
  • An RGB (red, green, blue) colour image is represented by a vector-valued function which associates with each pixel three values: the red, green and blue intensities.
  • the invention is not limited to grey-scale or RGB images but can be applied to any representation of an image, either digitized or analogue or any other type of representation.
  • The concepts of convexity and of convex-based transformations are used to manipulate image values.
  • An object is convex if for every pair of points within the object, every point on the straight line segment that joins them is also within the object.
  • a solid cube is convex, but anything that is hollow or has a dent in it, for example, a crescent shape, is not convex.
  • A set A of elements, which may be, for example, the locations of points and/or pixels in a two- or three-dimensional space, is said to be convex if the point αx + (1−α)y belongs to A whenever the points x and y are in A and α is any real number greater than zero and lower than one. Geometrically, this means that the line segment with endpoints x and y, hereafter denoted by [x,y], is entirely contained in the set A. Given two points x and y, a convex combination of the two points is any point of the segment joining them. A real-valued function f defined in a convex set C is said to be convex if the tangent planes to its graph lie below the graph everywhere, whereas a function is concave if its opposite is convex.
  • the convex hull of a set of points X is the smallest convex set containing all points of the set of points X.
  • The convex envelope C(f) of the function f is the largest convex function defined in C whose value at every point of C is not greater than that of the given function f.
  • The convex envelope of a function is also called the convex hull of the function; the two terms have the same meaning.
  • A set C is called D-convex if, for any pair of points x and y of C such that the segment [x,y] with endpoints x and y is parallel to a non-zero vector in D, all the other points of the segment also belong to C.
  • A function f defined in a D-convex set C is D-directional convex when the function f is convex along any direction d of the set D.
  • Directional convex envelope: given a set D of directions and a function f defined in a D-convex set C, the D-directional convex envelope of f, denoted by C(f, D), is the largest D-convex function defined in C whose value at every point of C is not greater than the value there of the function f.
  • Given C(f, D), the convexification along the directions perpendicular to D is denoted by C(f, D⊥).
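In symbols, the definitions above can be written as follows (standard notation; the ⊥ superscript for the perpendicular directions is inferred from context):

```latex
% Convexity of a set A:
\forall\, x, y \in A,\ \alpha \in (0,1):\quad \alpha x + (1-\alpha)y \in A.

% Convex envelope of f on C (largest convex minorant):
C(f)(x) = \sup\{\, g(x) : g \text{ convex on } C,\ g \le f \,\}.

% Directional convex envelope along a set D of directions:
C(f, D)(x) = \sup\{\, g(x) : g \text{ is } D\text{-convex on } C,\ g \le f \,\}.
```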
  • The convex-based transformations used in the methods according to the present invention require the computation of the convex envelope of the function f in a region A, not necessarily convex and not necessarily the whole region where the image is defined, or the computation of the directional convex envelope of f in the region A along a given set D of directions.
  • A possible realisation of such transformations comprises replacing the convex envelope/hull and/or directional convex envelope/hull definitions of the compensated convex transforms with the corresponding numerical and/or approximated ones.
  • There are several numerical schemes to approximate the convex envelope such as the one suggested in Y. Lucet A fast computational algorithm for the Legendre-Fenchel transform, Comput. Optimiz. and Appl. 6 (1996) 27-57; in G.
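As a small, concrete illustration of computing a convex envelope numerically, the exact 1D envelope of sampled values can be obtained with a monotone-chain lower-hull construction (a generic textbook method, not the cited Legendre-Fenchel-based scheme):

```python
import numpy as np

def convex_envelope_1d(x, f):
    """Exact lower convex envelope of the samples (x[i], f[i]), with x
    strictly increasing, evaluated back on the sample grid.  Generic
    monotone-chain lower-hull construction (illustrative only)."""
    hull = []                                   # indices of envelope vertices
    for i in range(len(x)):
        while len(hull) >= 2:
            i0, i1 = hull[-2], hull[-1]
            # drop i1 if it lies on or above the chord from i0 to i
            cross = (x[i1] - x[i0]) * (f[i] - f[i0]) \
                  - (f[i1] - f[i0]) * (x[i] - x[i0])
            if cross <= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    # linear interpolation between hull vertices gives the envelope values
    return np.interp(x, [x[k] for k in hull], [f[k] for k in hull])
```

For example, the envelope of the samples f = [0, 2, 0, 3] on x = [0, 1, 2, 3] is [0, 0, 0, 3]: the bump at x = 1 is cut off by the chord joining (0, 0) and (2, 0).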
  • A convex-based transform may comprise any type of convex transform such as the exact and/or approximate and/or numerical and/or directional approximations of convex and/or concave envelopes, lower and/or upper and/or mixed compensated convex transforms, and/or any combinations of these.
  • the convex transform may be a directional convex-based transform, for example along a predefined set of directions.
  • By saying that a method is affine invariant, it is meant that the method is able to extract from the image, or from the object, the same structure whatever the viewpoint of the observer.
  • Here Aff denotes an affine transformation.
  • T denotes the said method and T(f) its output when applied to the said image or geometrical object f.
  • By saying that a method is stable under curvature perturbation, it is meant that the method is able to extract from the image, or from the object, the same structure also in the presence of distortion.
  • the methods and the systems according to the present invention are fast (typically, from less than one second up to a minute, depending on the application) and accurate. They are global (i.e. do not require prior knowledge of the features present in the image), are robust against perturbations of the input, have desirable invariance properties, which most known methods do not have, and are able to perform operations on, for example, digital images that known methods are currently unable to perform, such as detection of turning points, crossing points and end points on curves.
  • In a first aspect of the present invention there is provided a fast and accurate global method for finding the multiscale medial axis (also known as the skeleton or the cut locus) of an object in an image.
  • the object may be of any dimension.
  • The medial axis, defined as the locus of points equidistant from two or more points of the object boundary or from a given set of points, is a geometric structure that, together with a radius function, captures global shape properties of an object.
  • the multiscale medial axis represents objects at multiple scales simultaneously, including both large-scale gross shape properties of the object, and small-scale fine details that are defined by the image data itself.
  • the medial axis or medial surface is a geometric entity of interest to engineering and animation disciplines.
  • In a second aspect of the present invention there are provided global methods of processing an image to remove, or reduce the effects of, lines, curves and scratches.
  • the term 'scratch' used herein means a discontinuity in the intensity image that is localized over a thin region, which can be a matter of a very few pixels wide.
  • This exemplary method falls in the category of disocclusion algorithms in digital image inpainting [see, for example, S. Masnou, J.-M. Morel, Level lines based disocclusion, 5th IEEE International Conference (1998) 105-138, or M. Bertalmio, G. Sapiro, V. Caselles, C.
  • In a third aspect of the present invention there is provided an affine invariant global method to locate turning points and crossing points of lower dimensional objects, such as a curve, or to locate curves representing turning points within 3D surfaces and surface-to-surface intersections.
  • the tasks that can be performed by the invention can be generally gathered under the common heading of detection of codimension-two objects such as points in a 2D image and curves in a 3D image. These geometric entities represent singularities for the image in the sense that therein either there is an abrupt change of direction of the tangent direction or of the tangent plane, respectively.
  • the invention does not require prior knowledge of the curve and/or surface type and does not require the parameterization of the curve and/or surface itself, hence there is no need for the solution of complex systems of nonlinear equations and inequalities.
  • the detection of turning and crossing points is of interest, for instance, for CAD applications.
  • In a fourth aspect of the present invention there is provided an affine invariant global method to locate end points of lower dimensional objects, such as the end points of a curve or the boundary of lower dimensional objects.
  • the method is also stable under small curvature perturbations of the image graph.
  • The invention does not require prior knowledge of the curve and/or surface type and does not require the parameterization of the curve and/or surface itself, and therefore applies to any type of curves in 2D and surfaces in 3D.
  • In a fifth aspect of the present invention there are provided affine invariant global methods, stable under small curvature perturbations, of processing an image to locate the edges, ridges, valleys and saddles in the said image.
  • The geometry of a ridge is defined herein as the part of the graph of the image that is concave and where, at least along one direction, the possibly nonsmooth directional curvature is large and negative.
  • A valley is defined herein as the part of the graph of the image that is convex and where, at least along one direction, the possibly nonsmooth directional curvature is large and positive.
  • the invention can therefore mark the ridges and valleys by different colours.
  • the part where the ridge and valley curves are parallel to each other represents the edge.
  • the direction of the jump is indicated by the two coloured curves, a feature which is novel and is not known in any of the previously-proposed edge detectors.
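In terms of the compensated convex transforms named earlier, one plausible formalisation of these ridge and valley maps is the following (an illustrative assumption consistent with the definitions just given, not a formula stated explicitly in this document):

```latex
% Lower and upper compensated convex transforms of an image f, with
% strength parameter \lambda > 0 (C denotes the convex envelope):
C^l_\lambda(f)(x) = C\big(f + \lambda|\cdot|^2\big)(x) - \lambda|x|^2,
\qquad
C^u_\lambda(f) = -\,C^l_\lambda(-f).

% Ridge and valley maps: nonnegative residuals, positive near concave
% (ridge) and convex (valley) singularities of the graph of f:
R_\lambda(f) = f - C^l_\lambda(f) \;\ge\; 0,
\qquad
V_\lambda(f) = C^u_\lambda(f) - f \;\ge\; 0.
```

Marking the supports of R and V in two different colours would then produce the paired coloured curves described above, with the edge appearing where the two run parallel.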
  • In a sixth aspect of the present invention there are provided global methods for processing an image to smooth the image, to smooth the angles of a geometric object or to smooth a function.
  • This aspect of the invention also applies to the denoising of functions or images with noise.
  • By a nonsmooth continuous function f is meant herein a function that is continuous but not continuously differentiable. In geometric terms, this means that there are points on the graph of f where one cannot define a unique tangent plane.
  • The term nonsmooth domain is intended to mean a domain with a boundary that might have corners.
  • In a seventh aspect of the present invention there are provided affine invariant global methods, stable under small curvature perturbations, of processing an image f to detect corners, necks and small blobs in the image.
  • Figure 1 is a schematic representation of a first two dimensional array of elements comprising image values according to the present invention;
  • Figure 2 illustrates a system to obtain the medial surface according to a first aspect of the present invention;
  • Figure 3 is a schematic representation of a two dimensional array of elements comprising image values measuring the strength of each element;
  • Figure 4 shows a predetermined region of 3 elements by 3 elements with indication of the pixels and their weights used for the construction of the image representing an approximation of the convex envelope of the input image;
  • Figures 5 to 8 illustrate objects with the medial axis determined in accordance with the present invention;
  • Figure 9 illustrates a system that performs a first method for removing or reducing effects of scratches according to a second aspect of the present invention;
  • Figure 10 illustrates a system that performs a second method for removing or reducing effects of a large damaged area of an image according to a second aspect of the present invention;
  • Figure 11 illustrates a system that applies a first method for detecting turning and crossing points of lower dimensional objects according to a third aspect of the present invention;
  • Figure 12 illustrates a system that applies a second method for detecting turning and crossing points of lower dimensional objects according to a third aspect of the present invention;
  • Figures 13 to 16 illustrate images and examples of detection of turning and crossing points determined in accordance with the present invention;
  • Figure 17 illustrates a system that applies a first method for detecting end points of lower dimensional objects according to the fourth aspect of the present invention;
  • Figure 18 illustrates a system that applies a second method for detecting end points of lower dimensional objects according to the fourth aspect of the present invention;
  • Figure 19 illustrates a curve with localization of its end points determined in accordance with the present invention;
  • Figure 20 illustrates a system that applies a method for locating ridges in an image according to a fifth aspect of the present invention;
  • Figure 21 illustrates a system that applies a method for locating valleys in an image according to a fifth aspect of the present invention;
  • Figure 22 illustrates a system that applies a first method for locating edges in an image according to a fifth aspect of the present invention;
  • Figure 23 illustrates a system that applies a second method for locating edges in an image according to a fifth aspect of the present invention;
  • Figure 24 illustrates a system that applies a method for locating saddle points in an image according to a fifth aspect of the present invention;
  • Figure 25 illustrates a system that applies a third method for locating edges in an image according to a fifth aspect of the present invention;
  • Figure 26 illustrates a system that applies a first method for the smoothing of an image or function with and without noise according to a sixth aspect of the present invention;
  • Figure 27 illustrates a system that applies a second method for the smoothing of an image or function with and without noise according to a sixth aspect of the present invention;
  • Figure 28 illustrates a system that applies a third method for the smoothing of an image or function with and without noise according to a sixth aspect of the present invention;
  • Figure 29 illustrates a system that applies a method for the smoothing of interior and/or exterior angles of a geometric domain according to a sixth aspect of the present invention;
  • Figure 30 illustrates a system that applies a first method for detecting corners, necks and small blobs in the image according to a seventh aspect of the present invention;
  • Figure 31 illustrates a system that applies a second method for detecting corners, necks and small blobs in the image according to a seventh aspect of the present invention;
  • Figures 32 and 33 illustrate an example of a geometric object and of detection of its corners and neck determined in accordance with the present invention.
  • Figure 1 shows an example of a two dimensional array 11 of 10×10 elements, each of which contains at least one associated image value.
  • The elements may be considered as pixels with coordinates i and j denoting the row and column number, respectively.
  • The number of pixels is limited to 10×10 for illustrative purposes only; any number of pixels and geometry of pixel arrangement can be used.
  • The image comprises a light region with image values of approximately 210 and a dark region with image values of approximately 45.
  • Figure 1 also shows a typical element 4 with its neighbourhood 3 of 3 elements by 3 elements, which is generally used within the present invention. Neighbourhoods of different sizes can also be considered.
  • Process steps to be performed are designated by rectangular boxes, while circles represent user interfaces.
  • Directed lines represent the flow of information, usually in the form of images, between processes, whereas dashed lines represent flows of information that are not images (for instance, a set of directions or the size of the image), which are generally passed to the process unit through a user interface.
  • the arrow on each line indicates where the flow of information is going.
  • Figure 2 illustrates a system that performs an object medial representation.
  • The system includes an input unit 10, which might be connected to a unit 50 that converts the initial image f into a binary initial image F through, for instance, thresholding.
  • A threshold value can be provided by means of a user interface 51. Alternatively, a reference list or look-up table can be used to obtain the threshold value. If the initial image f is already given in binary form, this step can be bypassed and the unit 10 passes the input binary image F directly to the unit 400.
  • The unit 50 communicates with the unit 51 to retrieve the value of the threshold chosen by the user or obtained from the look-up table and operates on each pixel of the array 11 to locate those pixels whose value is, for instance, greater than the given threshold.
  • The converter unit 50 then outputs an input binary image F with value 1 at the pixels having a value above the threshold and 0 otherwise.
  • The region with pixel value 1 is the geometrical object.
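A minimal sketch of the converter unit 50 (the function name is an assumption for illustration):

```python
import numpy as np

def binarize(f, thres):
    """Sketch of converter unit 50: output 1 at pixels whose value exceeds
    the threshold (the geometrical object), 0 otherwise."""
    return (f > thres).astype(np.uint8)
```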
  • This input binary image F represents the input for distance transform unit 400.
  • the distance transform unit 400 produces a distance transform image DT following available standard processes.
  • The distance transform image DT is the image that, for each pixel, gives in DT(i,j) the distance of the pixel (i,j) to the closest pixel with value 0.
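As a concrete, if naive, illustration of this definition, here is a brute-force version suitable for small arrays; production systems would use one of the standard fast algorithms the text alludes to (e.g. SciPy's `distance_transform_edt`). It assumes F contains at least one 0-valued pixel:

```python
import numpy as np

def distance_transform(F):
    """For each pixel of the binary image F, the Euclidean distance to the
    closest pixel with value 0 (brute force; illustrative only)."""
    zeros = np.argwhere(F == 0)          # coordinates of all background pixels
    DT = np.zeros(F.shape, dtype=float)
    for i in range(F.shape[0]):
        for j in range(F.shape[1]):
            if F[i, j] != 0:
                # distance from (i, j) to the nearest background pixel
                DT[i, j] = np.sqrt(((zeros - (i, j)) ** 2).sum(axis=1)).min()
    return DT
```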
  • the distance transform image DT is then processed by the block unit 5000.
  • This block unit 5000 shown in dashed line, implements on the distance transform image DT as input image one of the fundamental transformations introduced within the invention and comprises different process units.
  • the input image to the unit 5000 is, within this first aspect of the invention, the distance transform image DT.
  • The size of the distance transform image DT is, for a 2D representation of a 2D object, the number of rows times the number of columns, whereas for a 3D representation of a 3D object in terms of a 3D array it would also include the number of layers.
  • Figure 3 illustrates the two dimensional array 12 of elements measuring the strength of the pixels, considering a value of λ equal to 1.
  • the distance transform image DT and the strengthened image g are passed to the adder unit 30 which sums the two images to produce an input image J.
  • The output of the adder unit 30 is then passed to the convex enveloper unit 100.
  • The convex enveloper unit 100 performs the following steps: (a) Operates on each pixel of the input image J to construct a first average image J1.
  • The value of the new first average image J1 at the pixel (i,j) is obtained by taking the smallest of the image value J(i,j) and some convex combinations of the values of the input image J at pixels belonging to a predetermined neighbourhood of, for instance, 3 elements by 3 elements centred on the pixel which is operated on.
  • the neighbourhood of 3 by 3 elements is used for illustrative purposes only and any other number and distribution of pixels convexly surrounding the centre pixel (i,j) or a central area can be used.
  • the neighbouring pixels form a convex set.
  • The unit 20 passes the strengthened image g to the image inverter unit 25, which constructs an opposite or negative image of g and passes this image to the unit 30, where it is summed with the n-th average image Jn.
  • The result is a processed image of the distance transform image, which we call the lower compensated convex transform of the distance transform image DT and denote by the symbol C^l_g(DT).
  • This image represents a tight smooth representation of the distance transform image DT, in the sense that C^l_g(DT) is equal to DT almost everywhere, apart from the pixels where a singularity of the distance transform image DT is localized.
  • The image C^l_g(DT), as output of the block unit 5000, is passed to the image inverter unit 25, which inverts the image; the result is then passed to the adder unit 30, where it is summed with the distance transform image DT.
  • Since the branches of the medial axis are represented in the said image by a level that reflects their strength, it is possible to select the main branches by a simple threshold, which can be performed by a threshold unit 50 using a value of the threshold thres provided through a user interface, a database or a look-up table 51. It must be noted that both the geometric threshold thres used within the last process unit and the value of the parameter λ, used to compute the strengthened image g, are measures of the geometric strength of individual branches within the multiscale medial axis map.
  • the multiscale medial axis profiles are novel in ranking the strength of the different branches as well as locating them, which gives a technique for filtering boundary noise.
  • The selection of branches can be realised either by modifying the parameter λ (the higher λ is, the fewer weak branches will be detected) or by using a threshold thres on the image DT − C^l_g(DT).
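Putting the units together, a self-contained sketch of the whole pipeline (units 20, 30, 100 and 25, followed by the final subtraction and threshold) might look as follows. The quadratic form of the strengthened image g and the min-of-averages approximation of the convex envelope are assumptions in the spirit of compensated convex transforms, not the patent's exact formulas; border pixels are kept fixed:

```python
import numpy as np

def lower_compensated_transform(DT, lam, n_iter=500):
    """Sketch of block unit 5000: add a strengthened image g (assumed here
    to be the quadratic weight g(i,j) = lam*(i^2 + j^2)), approximate the
    convex envelope of DT + g by iterated min-of-averages sweeps on the
    interior pixels, then subtract g again (units 20, 30, 100, 25)."""
    ii, jj = np.indices(DT.shape).astype(float)
    g = lam * (ii ** 2 + jj ** 2)
    J = DT + g
    for _ in range(n_iter):
        J[1:-1, 1:-1] = np.minimum.reduce([
            J[1:-1, 1:-1],
            0.5 * (J[:-2, 1:-1] + J[2:, 1:-1]),
            0.5 * (J[1:-1, :-2] + J[1:-1, 2:]),
            0.5 * (J[:-2, :-2] + J[2:, 2:]),
            0.5 * (J[:-2, 2:] + J[2:, :-2]),
        ])
    return J - g

def medial_axis_map(DT, lam, thres=0.0):
    """Multiscale medial axis map: the residual DT - C^l_g(DT), with small
    values removed by the geometric threshold thres (threshold unit 50)."""
    mma = DT - lower_compensated_transform(DT, lam)
    return np.where(mma > thres, mma, 0.0)
```

For the distance transform of a vertical strip (a "tent" profile peaking along its centre line), the residual is nonnegative by construction and peaks on the centre line, which is exactly the medial axis of the strip.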
  • Figure 5 and Figure 6 illustrate 2D objects with their medial axis determined in accordance with the method described above.
  • The picture shown in Figure 5 is obtained by taking λ equal to 1 and thres equal to 0, hence it represents all the branches of the medial axis corresponding to the λ used, whereas the picture shown in Figure 6 displays only the main branches, obtained by using thres equal to 25 for an 8-bit greyscale image.
  • The picture shown in Figure 8 displays the network of bronchioles for the image of the dog lung displayed in Figure 7, obtained by the method for determining medial axes described above.
  • Figure 9 illustrates a system that performs another exemplary embodiment of the present invention that comprises a method for removing or reducing the effects of line, curves and or scratches, particularly suitable for areas or a thin damaged area with bright pixel values, that is, the pixel value within the damaged area is much higher than the background and the damaged area is very few pixels wide.
  • the system includes the input unit 10 of the initial input image f containing the thin damaged area, the lower convex enveloper unit 5000 described already with reference to Figure 2, and the display unit 1000, which displays the restored image.
  • Figure 10 illustrates another system for removing or reducing the effects of a large damaged area of an image with bright pixel values.
  • the system shown in Figure 10 adds a preprocessing step aimed at reducing the width of the damaged area.
  • the system includes the input unit 10, which passes the initial input image f to the damage identifying unit 200. This can be done manually through the user interface or automatically. In the latter case, for instance, the damage identifying unit 200 operates on each pixel of the initial input image f and locates those pixels with image value f(i,j) not lower than a geometric threshold thres set through the user interface 201. The information on the damaged area is then passed to the image definition unit 220 and the opposite image definition unit 240, which process the initial input image f to construct two other images: the initial restored image L and the initial opposite restored image M, respectively.
  • the image definition unit 220 operates on each pixel of the initial input image f to produce an initial restored image L, which is equal to the initial input image f outside the damaged area and is equal to the geometric threshold thres at the pixels belonging to the said damaged area. If the damaged area is identified manually, the value of the initial restored image L within the damaged area is set equal to 255 for an 8-bit greyscale image.
  • the image definition unit 220 then passes the initial restored image L and information on the damaged area to the local convex enveloper unit 150, which performs operations similar to the convex enveloper unit 100, but restricted now only to the pixels belonging to the damaged area.
  • the local convex enveloper unit 150 performs the following steps: (a) it operates on each pixel of the damaged area to construct a first restored image L1, obtained by taking the smallest between L(i,j) and, for instance, the following convex combinations: 0.5*(L(i-1,j)+L(i+1,j)), 0.5*(L(i,j-1)+L(i,j+1)), 0.5*(L(i-1,j-1)+L(i+1,j+1)), 0.5*(L(i-1,j+1)+L(i+1,j-1)), and (L(i,j-1)+L(i-1,j)+L(i+1,j+1))/3, (L(i-1,j-1)+L(i+1,j)+L(i,j+1))/3, (L(i+1,j-1)+L(i-1,j)+L(i,j+1))/3, and similar three-point combinations of neighbouring values.
  • step (b) replaces the initial restored image L with the first restored image L1 and then repeats step (a), for a number n of times set through the user interface 103.
  • the output of this process is a new image, the nth restored image Ln which is passed to the average unit 250 described below.
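Steps (a) and (b) above can be sketched as follows, assuming the damaged area is given as a boolean mask and using edge replication at the image border (both assumptions of this example, not specified by the patent):

```python
import numpy as np

def restore_thin_damage(L, damaged, n=100):
    """Sketch of the local convex enveloper unit 150: at each damaged
    pixel, L(i,j) is replaced by the smallest of L(i,j) and the convex
    combinations of neighbouring values (midpoints of opposite neighbours
    and three-point averages), and the update is repeated n times.
    Pixels outside the boolean mask `damaged` are never modified."""
    L = L.astype(float).copy()
    h, w = L.shape

    def sh(di, dj):
        # copy of L shifted by (di, dj), with edge values replicated
        return np.pad(L, 1, mode='edge')[1 + di:h + 1 + di, 1 + dj:w + 1 + dj]

    for _ in range(n):
        combos = [
            0.5 * (sh(-1, 0) + sh(1, 0)),               # vertical midpoint
            0.5 * (sh(0, -1) + sh(0, 1)),               # horizontal midpoint
            0.5 * (sh(-1, -1) + sh(1, 1)),              # diagonal midpoint
            0.5 * (sh(-1, 1) + sh(1, -1)),              # anti-diagonal midpoint
            (sh(0, -1) + sh(-1, 0) + sh(1, 1)) / 3.0,   # three-point averages
            (sh(-1, -1) + sh(1, 0) + sh(0, 1)) / 3.0,
            (sh(1, -1) + sh(-1, 0) + sh(0, 1)) / 3.0,
        ]
        L1 = np.minimum(L, np.minimum.reduce(combos))
        L = np.where(damaged, L1, L)  # restrict the update to the damaged area
    return L
```

A bright one-pixel scratch surrounded by background values is pulled down to the neighbouring level after very few iterations.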
  • the initial opposite restored image M constructed by the opposite image definition unit 240 is equal to the opposite of the initial input image f outside the damaged area and is equal to zero at the pixels (i,j) belonging to the said damaged area.
  • like the initial restored image L, the initial opposite restored image M is also processed by the local convex enveloper unit 150, which produces a new nth restored opposite image Mn with the values outside the damaged area equal to the opposite of the initial input image f.
  • the image Mn is then passed to the unit 25, which computes the opposite of the image Mn and sends the result to the average unit 250, which produces another image N as, for instance, the average of the nth restored image Ln and the opposite of the nth restored opposite image Mn.
  • the image N so processed presents a reduced damaged area, which can then be processed using the system illustrated in Figure 9 and repeated, for convenience, in Figure 10. Note that the image N is equal to the initial input image f outside the damaged area.
  • each pixel in the damaged area is operated on, either sequentially or simultaneously.
  • a region of 3 elements by 3 elements centred on the pixel being operated on is interrogated to compute the convex combinations listed above.
  • the said region of 3 elements by 3 elements will contain at least one pixel (r,q) that does not belong to the damaged area and whose image value is equal to that of the input image f or of its opposite, according to whether one is processing the initial restored image L or the initial opposite restored image M, respectively.
  • While Figure 9 and Figure 10 refer to the restoration of damaged areas with bright pixel values, it will be appreciated by one skilled in the art how to adapt the methods described therein to the restoration of damaged areas with dark pixel values, that is, to the case where the pixel values of the damaged area are lower than the background.
  • Figure 11 shows in block diagram form a system that applies a first method for detecting turning and crossing points of lower dimensional objects, such as a curve or curves, representing turning points within 3D surfaces and surface-to-surface intersections.
  • the system includes the input unit 10, which loads the geometric object f, for instance as an image, which is then passed to the image upper threshold unit 50, which converts such initial image f into a binary input image F, usually through thresholding by means of the user interface 51.
  • the system is described here for the case in which the geometrical object is represented by pixel values equal to zero; the modifications required in the case where the geometric object is represented by pixel values equal to one would be apparent to one skilled in the art.
  • the binary input image F is then passed to the lower convex enveloper unit 5000 which has already been described with reference to its component process units in describing Figure 2.
  • the output of the process by the lower convex enveloper unit 5000 is a first image, which we call the lower compensated convex transform of the input object F and denote by C^l_g(F).
  • This image is then passed to a second basic block unit, shown also in dashed lines, which is the upper convex enveloper unit 6000.
  • This unit realizes on the input image another fundamental transformation introduced within the invention and comprises different process units.
  • the input image to the unit 6000 is the transformed image C^l_g(F).
  • This image is passed to the image inverter 25 to compute the opposite image, and information on the size of C^l_g(F) is passed to the image strengthening unit 20 to construct the strengthened image g, which is, within this invention, for instance, equal to the one used in the lower convex enveloper unit 5000 described above.
  • the opposite of the image C^l_g(F) and the strengthened image g are then passed to the process adder unit 30, which generates another image as the sum of the two.
  • the sum image is then passed to the convex enveloper process unit 100 which produces another image representing basically an approximation of the convex envelope of the image that is passed to the unit.
  • the image created within the process unit 100 is then transformed into its opposite image within the image inverter 25 and summed finally to the strengthened image g.
  • the transformation realized by the upper convex enveloper unit 6000 is denoted by the symbol C^u_g; hence the resulting image following the processing by the lower convex enveloper unit 5000 first and the upper convex enveloper unit 6000 will be denoted by C^u_g(C^l_g(F)).
  • the geometrical effect of this transformation is to create an extremal point, localized at the interest point.
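Assuming the convex enveloper unit 100 can be approximated by an iterated midpoint-minimum scheme, the two basic transforms compose as sketched below: the lower transform is the convex envelope of f+g minus g, and the upper transform is built from the opposite image exactly as in the unit 6000 described above. All function names, and the iterated-midpoint approximation itself, are illustrative assumptions:

```python
import numpy as np

def convex_envelope(K, n_iter=200):
    """Approximate convex envelope of a 2-D image: each interior pixel is
    repeatedly replaced by the minimum of its value and the midpoint
    averages along the four line directions through it (a stand-in for
    the convex enveloper unit 100)."""
    K = K.astype(float).copy()
    for _ in range(n_iter):
        C = K[1:-1, 1:-1]
        avgs = [
            0.5 * (K[:-2, 1:-1] + K[2:, 1:-1]),   # vertical neighbours
            0.5 * (K[1:-1, :-2] + K[1:-1, 2:]),   # horizontal neighbours
            0.5 * (K[:-2, :-2] + K[2:, 2:]),      # one diagonal
            0.5 * (K[:-2, 2:] + K[2:, :-2]),      # other diagonal
        ]
        K[1:-1, 1:-1] = np.minimum(C, np.minimum.reduce(avgs))
    return K

def strengthened_image(shape, lam):
    """g(i,j) = lam*(i^2 + j^2), the quadratic strengthening weight."""
    i, j = np.indices(shape)
    return lam * (i.astype(float) ** 2 + j.astype(float) ** 2)

def lower_transform(f, lam, n_iter=200):
    """C^l_g(f) = conv(f + g) - g, the lower compensated convex transform."""
    g = strengthened_image(f.shape, lam)
    return convex_envelope(f + g, n_iter) - g

def upper_transform(f, lam, n_iter=200):
    """C^u_g(f) = g - conv(g - f), built from the opposite image as in unit 6000."""
    g = strengthened_image(f.shape, lam)
    return g - convex_envelope(g - f, n_iter)
```

By construction the lower transform never exceeds the input image and the upper transform never falls below it, which is what makes their pointwise gaps usable as feature maps in the embodiments that follow.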
  • the image C^u_g(C^l_g(F)) can be passed either to the process unit 65 or to the process unit 52.
  • the process unit 65 will operate on each pixel of the image C^u_g(C^l_g(F)) and will locate those pixels that are strict local minima.
  • a pixel is a strict local minimum if the value of the said image there is lower than the values of the said image at all the pixels associated with the pixel being operated on, within a predetermined region, for instance a region of 3 elements by 3 elements centred at the pixel.
  • the unit 65 will then output to the display unit 1000 a binary image giving only the location of the turning and crossing points within the said geometric object represented by the binary input image F.
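The strict-local-minimum test of the process unit 65 can be sketched as below; skipping the border pixels is an assumption of this example, since the patent does not specify border handling:

```python
import numpy as np

def strict_local_minima(img):
    """Sketch of process unit 65: mark the pixels whose value is strictly
    lower than the values at all 8 associated pixels in a 3x3 region
    centred on them.  Returns a binary image (1 at strict local minima);
    border pixels are left unmarked for simplicity."""
    img = np.asarray(img, dtype=float)
    out = np.zeros(img.shape, dtype=np.uint8)
    c = img[1:-1, 1:-1]
    strict = np.ones(c.shape, dtype=bool)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if (di, dj) == (0, 0):
                continue
            # compare the centre with the neighbour shifted by (di, dj)
            strict &= c < img[1 + di:img.shape[0] - 1 + di,
                              1 + dj:img.shape[1] - 1 + dj]
    out[1:-1, 1:-1] = strict
    return out
```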
  • the image lower threshold unit 52 will operate on each pixel of the transformed image C^u_g(C^l_g(F)) and will locate those pixels where the image values of the transformed image C^u_g(C^l_g(F)) are not greater than a geometric threshold thres assigned through the user interface 53.
  • the unit 52 will then output to the display unit 1000 a binary image showing the location and orientation, that is the shape of the curve in the neighbourhood of the turning and crossing points within the said geometric object represented by the binary input image F.
  • the invention applied in the system described in Figure 11 produces an image of the turning and crossing points which is not affine invariant. It is however possible to construct a system which meets such a property at the expense of carrying out an additional operation, as illustrated in Figure 12.
  • the output of the lower convex enveloper unit 5000, which is the image C^l_g(F), is converted into the opposite image by the process unit 25 and sent to the adder unit 30, where it is added to the image C^u_g(C^l_g(F)) obtained by processing the image C^l_g(F) with the upper convex enveloper unit 6000, as described with reference to Figure 11.
  • Figure 13 to Figure 16 show turning and crossing points for plane curves determined in accordance with the first method of the present invention, with Figure 16 showing only the location of the branching points of the vessel network of the retina shown in Figure 15, whereas Figure 14 displays also the orientation of the branching points of the planar curve shown in Figure 13.
  • Figure 17 illustrates a system that applies a first method for detecting end points of lower dimensional objects.
  • the system includes the input unit 10, which loads the geometric object f, for instance as an image, which is then passed to the image upper threshold unit 50, which converts such image f into a binary image F, usually through thresholding by means of the user interface or database 51.
  • the system is described here for the case that the geometrical object is represented by pixel values equal to zero.
  • the binary image F is then passed to the directional lower convex enveloper unit 5500 which is shown in dashed line and includes different process units.
  • the directional lower convex enveloper unit 5500 applies on the input image another fundamental transformation introduced within the invention, which is the lower compensated directional convex transform along a given direction which is defined in terms of directional convexity.
  • the image F communicates its size to the image strengthening unit 20, which constructs the strengthened image g and then passes such image to the adder unit 30 to produce a new image K as the sum of the strengthened image g and the input binary image F.
  • the sum image is passed to the block unit 7000 which comprises different process subunits according to the number of directions that are passed to the block unit 7000 through the user interface 80.
  • the set of directions d, which in 2D can be represented, for instance, by unit vectors, represents the predefined directions along which the unit 7000 will produce directional convex envelopes.
  • Each process unit within the block unit 7000 is made of two subunits, which are the directional convex enveloper unit 700 and the direction input unit 701. More particularly, for each unit vector d of the predefined set of directions stored in the direction input unit 701, the directional convex enveloper unit 700 performs the following steps: (a) the directional convex enveloper unit 700 operates on each pixel of the said image K to construct an array HES which represents a numerical approximation of the second derivatives of the image K.
  • the directional convex enveloper unit 700 communicates then with the direction input subunit 701 and retrieves the direction d.
  • the directional convex enveloper unit 700 will then operate on each said pixel to compute the following quantity (HES*d)*d, where HES*d denotes the standard row-by-column product of a matrix by a vector, which has a vector as its result, whereas (HES*d)*d represents the inner product between the two vectors HES*d and d and gives a real number as its result.
  • the directional convex enveloper unit 700 will then operate on each said pixel to construct another image K1d, which we call the first directional average image along d and which is obtained by taking the smallest between K(i,j) and the value (HES*d)*d obtained at the previous step.
  • the directional convex enveloper unit 700 will then replace the image K with the image K1d and will then perform steps (a) to (f) again, repeating for a user determined number n of times set through the user interface 101, to produce the nth directional average image along d, Knd.
  • This image is passed to the adder unit 30, where it is summed to the opposite of the strengthened image g to produce a first processed image, which we call the lower compensated directional convex transform of F along the direction d and denote by the symbol C^l_g(F;d).
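The directional envelope iteration of the unit 700 can be sketched with a midpoint rule along the chosen lattice direction. This midpoint update is an assumed stand-in for the Hessian-based quantity (HES*d)*d described above; both enforce convexity of the one-dimensional sections of K along d:

```python
import numpy as np

def directional_convex_envelope(K, d, n=200):
    """Directional convex envelope sketch: repeatedly replace K(i,j) by
    the smaller of K(i,j) and the midpoint average of its two neighbours
    along the lattice direction d, e.g. d = (0, 1) for rows.  Edge values
    are replicated at the border (an assumption of this example)."""
    K = K.astype(float).copy()
    di, dj = d
    for _ in range(n):
        P = np.pad(K, ((abs(di),) * 2, (abs(dj),) * 2), mode='edge')
        # neighbour in the +d direction
        fwd = P[abs(di) + di:K.shape[0] + abs(di) + di,
                abs(dj) + dj:K.shape[1] + abs(dj) + dj]
        # neighbour in the -d direction
        bwd = P[abs(di) - di:K.shape[0] + abs(di) - di,
                abs(dj) - dj:K.shape[1] + abs(dj) - dj]
        K = np.minimum(K, 0.5 * (fwd + bwd))
    return K
```

Applied along d = (0, 1) the update convexifies each row independently, leaving the columns untouched, which is the behaviour the per-direction processing of the block unit 7000 relies on.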
  • each of these images C^l_g(F;d), one for each direction d, is then passed to the directional upper convex enveloper unit 6500, which is also shown in dashed lines and applies to the input image another fundamental transformation introduced within the invention: the upper compensated directional convex transform along a given direction. More in detail, the image C^l_g(F;d) is passed to the unit 25, which computes the corresponding opposite image; this is then summed, within the adder unit 30, to another strengthened image h, which is built by the unit 20 and which can in general be different from the first strengthened image g used within the directional lower convex enveloper unit 5500.
  • the value of the strengthened image h at the pixel (i,j) can be given, for instance, by τ*(i^2+j^2), where τ is a user control parameter assigned through the user interface 101.
  • the parameter τ can in general be different from the parameter λ used to define the strengthened image g.
  • the image H is then passed to the unit 700, which performs steps (a) to (f) as described above, replacing the image K therein with the image H and the direction d which is operated on with its perpendicular direction, denoted by d⊥ and passed to the unit 701 through the perpendicular direction input user interface 85.
  • the result will be a new image Hn, which is then used to produce its opposite image within the process unit 25 and summed to the strengthened image h within the adder unit 30.
  • the geometrical effect of this transformation is to create an extremal point localized at the end points.
  • the image EP_{g,h}(F;D) can be passed, for instance, to the process unit 60 for the localization of the strict local maxima.
  • Figure 18 illustrates a system that applies a second method for detecting end points of lower dimensional objects. Such system replaces the directional upper convex enveloper unit 6500 shown in Figure 17 by the directional lower convex enveloper unit 5500 which modifies each image C l g (F;d) by applying the lower compensated directional convex transform along the corresponding perpendicular direction.
  • Figure 19 illustrates a curve with localization of its end points determined in accordance with the present invention.
  • Figure 20 illustrates a system that performs another exemplary embodiment of the present invention that comprises a method for locating ridges in an image.
  • the system includes the input unit 10 of the initial input image f, which is then processed by the lower convex enveloper unit 5000, already described with reference to Figure 2, to produce the lower compensated convex transform image of the initial input image f. This is passed to the unit 25 to be transformed into its opposite and then summed to the initial input image f within the adder unit 30. The resulting image of the ridge map is then passed to the display unit 1000. From a geometrical point of view, the lower compensated convex transform of the initial input image f fits the graph of the initial input image f from below with a fixed negative curvature, which is controlled through the user interface 21.
  • the gap between the original ridge of the image and the smoother lower transform provides therefore a marker for the ridge that records the relative strength of different ridges.
  • the ridge map so obtained can be further processed to obtain the edges of the image by a simple threshold which can be performed by the unit 50 using the threshold given through the user interface 51 as illustrated in Figure 22.
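Given a lower transform precomputed by the unit 5000, the ridge map and its thresholded edge image reduce to pointwise operations; the function names are illustrative:

```python
import numpy as np

def ridge_map(f, lower_f):
    """Ridge map of Figure 20 (sketch): the pointwise gap between the
    image f and its smoother lower compensated convex transform
    `lower_f`; the gap records the relative strength of each ridge."""
    return np.asarray(f, dtype=float) - np.asarray(lower_f, dtype=float)

def ridge_edges(f, lower_f, thres):
    """Figure 22 (sketch): simple threshold (unit 50) of the ridge map;
    here edge pixels are marked with value 1."""
    return (ridge_map(f, lower_f) >= thres).astype(np.uint8)
```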
  • Figure 21 illustrates a system that performs another exemplary embodiment of the present invention that comprises a method for locating valleys in an image.
  • the system includes the input unit 10 of the initial input image f, which is then processed by the block unit 6000, already described with reference to Figure 11, to produce the upper compensated convex transform image of f. This is passed to the adder unit 30 to produce the image of the valley map, obtained by summing the said upper compensated convex transform image of f to the opposite of the original input image f.
  • a similar geometrical interpretation to that given for the ridge map holds also for the valley map.
  • the upper compensated convex transform of the input image f fits the graph of the image f from above with a fixed positive curvature, which is controlled through the user interface 23.
  • the gap between the smoother upper compensated convex transform image of f and the original valley of the image provides a marker for the valley that records the relative strength of different valleys.
  • the valley map so obtained can be further processed to obtain the edges of the image by a simple threshold which can be performed by the unit 50 using the threshold given through the user interface 51.
  • the resulting system is illustrated in Figure 23.
  • Figure 24 illustrates a system that comprises a method for locating saddle points in an image. These are defined as those points that belong to both a ridge and a valley, and are obtained by finding the points common to the edges obtained from the ridge map and the edges obtained from the valley map. It follows that such a system, as illustrated in Figure 24, includes the input unit 10 of the image f, which is then processed by the system illustrated in Figure 22 and by the system illustrated in Figure 23, both shown in Figure 24. The resulting edges, represented as binary images with, for instance, value zero at the pixels belonging to the edges, are passed to the unit 70, which will operate on each pixel of the image and will locate those pixels where both the edge from the ridge map and the edge from the valley map take the same value equal to zero.
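A sketch of the valley map and of the intersection performed by the unit 70 follows; note that, for readability, this example marks edge pixels with value one, whereas the text above marks them with value zero:

```python
import numpy as np

def valley_map(f, upper_f):
    """Valley map of Figure 21 (sketch): the pointwise gap between the
    upper compensated convex transform `upper_f` (precomputed by unit
    6000) and the image f."""
    return np.asarray(upper_f, dtype=float) - np.asarray(f, dtype=float)

def saddle_points(ridge_edge, valley_edge):
    """Unit 70 of Figure 24 (sketch): the pixels where the ridge-map edge
    image and the valley-map edge image agree, i.e. points lying on both
    a ridge and a valley.  Inputs are binary images with 1 marking an
    edge pixel (the inverse of the zero-marking convention in the text)."""
    return ((np.asarray(ridge_edge) == 1) &
            (np.asarray(valley_edge) == 1)).astype(np.uint8)
```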
  • Figure 25 illustrates a system that comprises another method within the present invention for locating edges in an image, as the sum of the valley map and the ridge map.
  • the system includes the input unit 10 of the image f, which is processed by the upper convex enveloper unit 6000 to produce the image that represents the upper compensated convex transform of f.
  • the image f is also processed within the lower convex enveloper unit 5000 to produce the lower compensated convex transform of f, which is then input to the unit 25, where it is transformed into its opposite and summed to the upper transform within the adder unit 30.
  • the resulting image of the edge map is passed to the display unit 1000.
  • Figure 26 shows in block diagram form a system that applies a first method for the smoothing of an image or a function f.
  • the system includes the input unit 10 of the image f, which is processed first by the lower convex enveloper unit 5000 to produce another image, which is the lower transform of f. This image is subsequently processed within the upper convex enveloper unit 6000.
  • the sequence of the two block units, with the lower convex enveloper unit 5000 first, followed by the upper convex enveloper unit 6000, represents another fundamental transformation within the present invention, called the mixed compensated transform of f as the upper of the lower transform.
  • the output of such sequence of processes is then passed to the display unit 1000.
  • the processes within the lower convex enveloper unit 5000 perform a smoothing of the singularities of the image that are concave, leaving unchanged the singularities of the image that are convex. The latter are then smoothed by the processes within the upper convex enveloper unit 6000.
  • By inverting the sequence of the processes, that is, by having the input image f processed first by the upper convex enveloper unit 6000 and then by the lower convex enveloper unit 5000, one realizes another method for the smoothing of the image f.
  • Such a sequence of processes also represents another fundamental transformation within the present invention, called the mixed compensated transform of f as the lower of the upper transform.
  • the system that includes such sequence of operations on the image is illustrated in Figure 27.
  • Figure 28 illustrates a system that performs a third method for the smoothing of an image with noise.
  • the system includes the two systems shown in Figure 26 and Figure 27, with the respective output images passed to the process unit 250, which produces another image as a convex combination of the two input images, which in particular can be the arithmetic average.
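The combination performed by the process unit 250 can be sketched as below, with `lower` and `upper` standing for callables that realise the units 5000 and 6000; the clipping stand-ins in the usage example are illustrative only, not actual compensated transforms:

```python
import numpy as np

def smooth(f, lower, upper, w=0.5):
    """Third smoothing method of Figure 28 (sketch): convex combination,
    by default the arithmetic average (w = 0.5), of the two mixed
    compensated transforms, upper-of-lower and lower-of-upper.  `lower`
    and `upper` are callables standing in for the units 5000 and 6000."""
    f = np.asarray(f, dtype=float)
    return w * upper(lower(f)) + (1.0 - w) * lower(upper(f))
```

Any weight 0 <= w <= 1 gives a valid convex combination; the patent singles out the arithmetic average as the particular case.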
  • the strengthened image g used within the lower convex enveloper units 5000 can also be used within the upper convex enveloper units 6000.
  • the choice of the strengthened image, which is made through the user interfaces 21 and 23, respectively, by which one inputs the parameter λ equal to the parameter τ, will depend on the noise frequency.
  • the effect of the upper and lower transforms is to reduce the convex and concave oscillations present in the noise, respectively, to the background value, that is, to the clean function or the clean image, with in addition a smoothing effect due to the action of the external component in evaluating each of the mixed transforms.
  • Figure 29 illustrates a system that performs the smoothing of the interior and/or exterior angles of a geometric object represented as binary input image F, with the pixels representing the domain with value equal to zero.
  • the system includes the input unit 10 of the image F, which is first processed by the lower convex enveloper unit 5000 and then by the upper convex enveloper unit 6000 to produce the image C^u_h(C^l_g(F)).
  • Such image is then passed to the unit 50, which locates the pixels of the said image C^u_h(C^l_g(F)) whose value C^u_h(C^l_g(F))(i,j) is not lower than a geometric threshold thres assigned through the user interface 51.
  • the output is a binary image representing the domain with smoothed interior angles.
  • the binary image representing the domain with smoothed interior angles is processed first by the lower convex enveloper unit 5000 and then by the upper convex enveloper unit 6000.
  • the present invention thus provides a global method for smoothing which, when applied to nonsmooth functions, without prior knowledge of where the nonsmooth region is, replaces such a region with a smooth one and is equal to the original function in the other parts.
  • Figure 30 illustrates a system that applies a first method for detecting irregularities such as, for example, corners, necks or small blobs in an image.
  • the method is exemplified herein for the case of bright corners, that is, where the region with bearing less than 180 degrees has a larger value with respect to the surrounding region, and for the case of bright necks and bright blobs.
  • the bright corner is the region with pixel value one and with bearing less than 180 degrees. It will however be apparent to one skilled in the art how to adapt the method within the present invention for detecting dark corners, dark necks and dark blobs.
  • a dark corner in a binary geometric object denotes the region with pixel value zero and with bearing less than 180 degrees.
  • the system includes an input unit 10 that loads the image f and sends it to the lower convex enveloper unit 5000, where it is processed to produce another image, which is the lower compensated convex transform image of f. This image is in turn processed by the upper convex enveloper unit 6000, which produces the mixed compensated convex transform of f.
  • the effect of applying the two processes in the above sequence to, for instance, a binary object representing a bright corner is to obtain a smooth image which is not much different from the input image, apart from the region surrounding the corner, where an extremal value is created.
  • the difference between the input image and the mixed transform will then be an image with an extremal point at the feature of interest.
  • the output of the upper convex enveloper unit 6000, which is the mixed transform of f, is therefore sent to the unit 25 to build the opposite image, which is then summed to the input image within the adder unit 30.
  • the said image can be finally sent either to the process unit 60 for only the localization of the corners, or to the process unit 50 which will also show the orientation of the feature, that is the shape of the image in the neighbourhood of the feature.
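With the mixed transform precomputed by the units 5000 and 6000, the difference image and the localization by the process unit 60 can be sketched as follows; skipping border pixels in the maximum test is an assumption of this example:

```python
import numpy as np

def corner_response(f, mixed_f):
    """Difference image of Figure 30 (sketch): the mixed compensated
    convex transform `mixed_f` is subtracted from the input image f,
    leaving an extremal value near corners, necks and small blobs."""
    return np.asarray(f, dtype=float) - np.asarray(mixed_f, dtype=float)

def strict_local_maxima(resp):
    """Sketch of process unit 60: pixels strictly greater than all 8
    neighbours in a 3x3 region; border pixels are left unmarked."""
    resp = np.asarray(resp, dtype=float)
    out = np.zeros(resp.shape, dtype=np.uint8)
    c = resp[1:-1, 1:-1]
    strict = np.ones(c.shape, dtype=bool)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if (di, dj) != (0, 0):
                strict &= c > resp[1 + di:resp.shape[0] - 1 + di,
                                   1 + dj:resp.shape[1] - 1 + dj]
    out[1:-1, 1:-1] = strict
    return out
```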
  • Figure 31 illustrates a system that applies a second method for detecting corners, necks and small blobs in an image.
  • the system includes an input unit 10 that passes the image f to the directional lower convex enveloper unit 5500, which constructs a number of images equal to the number of directions d belonging to a predefined set D of directions given through the user interface 80.
  • the processes within the directional lower convex enveloper unit 5500 have already been described with reference to Figure 17 and produce the image of the lower compensated directional convex transform of the image f along the given direction d.
  • Each of these images is then passed to the directional upper convex enveloper unit 6500, which applies the upper compensated directional convex transform along the same direction d, in contrast to the system illustrated in Figure 17, which uses the direction perpendicular to d.
  • the directional upper convex enveloper unit 6500 outputs the mixed compensated directional convex transform images C^u_h(C^l_g(F;d);d) which, on the basis of an appropriate choice of the parameters λ and τ, are meant to create an image with an extremal value at the possible discontinuity present along the given direction d.
  • the geometrical effect of this transformation is to create an extremal point localized at the corner, neck and blob on the assumption that for such features the strength of the discontinuity is higher.
  • the image CR_{g,h}(F;D) can be passed, for instance, to the process unit 60 for the localization of the strict local maxima or to the process unit 50 for the localization and orientation of the feature.
  • Figure 32 displays a picture of a dark corner as meant within the present invention, whereas Figure 33 shows the location and orientation of the corners and neck resulting from applying the system described in Figure 31, adapted to the case of dark corners.
  • a method that detects thin objects through enhancement, i.e. by enhancing lines and curves which are faint in the original image.
  • This can be achieved by the same system illustrated in Figure 9, which refers to the case of dark thin objects, that is, where the pixel values therein are smaller than the background.
  • the system includes the input unit 10, which loads the image and passes it to the block unit 5000, which computes the lower compensated transform image. Since this transformation approximates the line, which is a singularity of the input image, from below, it will spread over a region around the thin object with a width that is controlled by the parameter λ given through the user interface 21.
  • the methods based on convex envelopes can be easily extended to other applications. Examples include, but are not limited to, image expansion, image contraction, or the determination of tangents and tangent points on lower dimensional objects in an image.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Nonlinear Science (AREA)
  • Image Processing (AREA)

Abstract

The present disclosure relates to a method and a system for processing an image, a domain in an image or a lower dimensional object in the image. The method comprises determining combination image values of at least one region of interest of an input image (f, F, DT, J, L, N), the input image being the initial image (f) or a pre-processed image (DT; N), wherein the combination image values are each determined based on a combination of two or more regions of the input image, the two or more regions being arranged in proximity to the at least one region of interest. A replacement value is selected from the combination values as the one that best fits a replacement criterion. The region of interest of the input image is replaced with the replacement value if the replacement value fits the replacement criterion better than the input image (f, F, DT, J, L, N).

Description

IMAGE PROCESSING
The present application claims benefit of and priority to UK application GB 0921863.7, the entire contents of which are incorporated herein by reference.
Background to the Invention
This disclosure relates generally to a method and a system for analysing and manipulating images. The disclosure relates to image processing and/or feature extraction from within an image. The invention may be applied to an image, a domain in an image or a lower dimensional object in an image, for the purpose of, for example but not limited to, image restoration, selective enhancement of features of an image, and detection of edges, corners, turning points, crossing points, end points, and other singular points or interesting points.
Description of the prior art
There are many known methods of image processing. Most require that the image is given in digitized form, i.e. the image is represented by a two or three dimensional data array of elements comprising image values. All the current methods either process the image by comparing pixel values in a predetermined mask using some ad hoc, problem-designed convolution function, or require the solution of ad hoc, problem-dependent partial differential equations. For example, UK Patent Application No. GB2272285A describes a method for digitally processing a digital black and white image in order to locate edges and/or corners of objects in the black and white image. This method comprises comparing, for each point or element in an image, the brightness of elements located within a predetermined region centred on the chosen element with the brightness of the element itself to determine those elements having substantially equal brightness, and these elements are regarded as belonging to the same surface in the image. Elements which are associated with a minimum number of equally bright elements within their local region are then located, and these elements will lie on or close to an edge of the image. Once the edge elements have been located in this way, standard methods of non-maximum suppression and edge thinning can apparently be applied to find the position of the edge more accurately.
UK Patent Application No. GB2218507A describes another method to locate edge and corner features in a digitised image, wherein the image data is digitally processed to detect the positions of any sharp intensity variations present in adjacent pixels representing said image by comparing the intensity of each primary pixel with those of the secondary pixels which surround it (e.g. by applying a convolution function to the data), calculating whether said primary pixel can be classified as having a constant, an edge-like or a corner-like value, and collating this information to give the positions of both edge and corner features present in said image. However, known methods for image processing, feature extraction, etc. often have drawbacks or limitations, such as a trade-off between speed and accuracy, or not being globally applicable, or failing to meet desirable properties such as affine invariance and robustness against noise and curvature perturbations, which would ensure that the same features are extracted from the said image independently of the observer's point of view and of the noise and distortions introduced by the apparatus used for the acquisition of the said image. For example, most known medial axis extraction methods are very sensitive to small perturbations. Also, there are currently no effective methods to find features on lower dimensional objects, such as turning points and crossing points.
Invention Summary

It is therefore an object of the present invention to provide new methods and systems for image processing which are fast and accurate.
The present disclosure provides a method and a system for processing an "initial" image. The term "initial" image or initial input image is used herein for identifying the image on which the present method is applied. The initial image may be the original analogue or digitized image, or may be a selected domain in the original image or a lower dimensional object in the original image that has been selected for processing with the present method. The initial image may be a digitized image and the region of interest may be defined as at least one pixel in the digitized image.
The method may involve some pre-processing of the initial image, as described further below, to obtain a pre-processed image. The pre-processed image or the initial image may be used as an input image. The method comprises determining combination image values of at least one region of interest of the input image, wherein the combination image values are each determined based on a combination of two or more regions of the input image arranged in proximity to the at least one region of interest. For example, the two or more regions may be arranged at opposite positions with respect to the region of interest, may be arranged to surround the region of interest, or may be arranged on a circumference around the region of interest. The two or more regions of the image arranged in proximity to the at least one region of interest may be arranged in a convex hull or convex envelope around the region of interest; i.e. an approximated directional convex envelope, the exact convex envelope or the exact directional convex envelope may be used.
The combination of two or more regions may comprise determining average image values, i.e. weighted averages of the image values of the two or more regions. The combination may also comprise a numerical approximation of the second derivatives at the region of interest. Other combinations are also possible. The method further comprises selecting as a replacement value that combination value, from the combination values determined above, that best fits a replacement criterion. The replacement criterion can be the lowest value, the highest value or another pre-determined criterion.
The method further comprises replacing the region of interest of the input image with the replacement value if the replacement value better fits the replacement criterion than the input image. If, for example, the replacement criterion is the lowest value, the image value of the input image will be replaced if the replacement value is lower than the image value in the region or pixel of interest. If the replacement value is higher, the value of the input image will remain unchanged.
The determining, selecting and replacing may be iteratively repeated for a pre-determined, user-identified or automatically defined number of times until a transformed image is obtained. The transformed image may correspond to an exact or an approximate transform of the input image.
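As an illustrative sketch (not the claimed system), the determine/select/replace iteration can be written down in one dimension, with midpoint averages of the two neighbouring pixels as combination values and the lowest value as the replacement criterion; the function name and iteration count are assumptions made for the example:

```python
import numpy as np

def iterate_replace_1d(J, n_iter=200):
    """Determine/select/replace loop in 1D: each interior pixel is
    replaced by the smaller of its own value and the average of its two
    neighbours (the combination value that best fits the 'lowest value'
    replacement criterion).  Iterated, this approximates the convex
    envelope of the sampled values."""
    J = np.asarray(J, dtype=float).copy()
    for _ in range(n_iter):
        avg = 0.5 * (J[:-2] + J[2:])         # combination values
        J[1:-1] = np.minimum(J[1:-1], avg)   # replace only if lower
    return J
```

Because each replacement can only lower a pixel value, an input that is already convex is a fixed point of the iteration, consistent with the convex-envelope interpretation.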
The transformed image may be output, further analysed or further processed.
The method may further comprise the steps of building a smooth image which is equal to the initial input image almost everywhere, apart from those pixels where discontinuities in the initial image values and/or image features are located. The difference between the initial input image and the smooth image will then display the sought feature. In order to build the new image, this invention relates to methods comprising the application and definition of a new class of transformations based on the notion of convex hull and convex envelope. The transformations used in this invention are therefore different from the commonly used image transformations based on convolutions, Fourier and wavelet transforms.
Features of the Invention
Prior to describing the aspects of the invention, some background information will be provided to facilitate the reader's understanding of the invention and to set forth the meaning of some technical terms. (1) The methods and the systems according to the present invention apply to images and geometric objects of any dimension, and in general to, but not limited only to, data sets.
(2) In 2D, for instance, the image, the domain in the image or the lower dimensional object in the image may be represented by a function f(i,j) of the two variables i and j. The value f(i,j) gives the intensity of the colour at the pixel of coordinates (i,j). In a grey-scale 8-bit image, the function f is a scalar function with integer values in the range [0,255], with 0 representing black, 255 white and any other integer in ]0,255[ corresponding to a different tonality of grey. A binary image, on the other hand, is represented by a function which takes on only the values 0 (black) and 1 (white). An RGB (red, green, blue) colour image is represented by a vector-valued function which associates with each pixel three values: the red, green and blue intensities. The invention, however, is not limited to grey-scale or RGB images but can be applied to any representation of an image, either digitized or analogue, or any other type of representation. (3) The concepts of convexity and of convex-based transformations are used to manipulate image values. An object is convex if for every pair of points within the object, every point on the straight line segment that joins them is also within the object. For example, a solid cube is convex, but anything that is hollow or has a dent in it, for example a crescent shape, is not convex. More generally, a set A of elements, which may be, for example, the locations of points and/or pixels in a two or three-dimensional space, is said to be convex if the point αx + (1-α)y belongs to A whenever the points x and y are in A and α is any real number greater than zero and lower than one. Geometrically, this means that the line segment with endpoints x and y, hereafter denoted by [x,y], is entirely contained in the set A. Given two points x and y, a convex combination of the two points is any point of the segment joining the two points x and y. A real-valued transformation or image f defined in a convex set C is said to be convex if the tangent planes to its graph lie below the graph everywhere, whereas a transformation is concave if its opposite transformation is convex. The convex hull of a set of points X is the smallest convex set containing all points of the set X. Given the image described by a function f(i,j), hereafter also termed image f(i,j), not necessarily convex, defined in a convex set C, the convex envelope C(f) of the function f is the largest convex function defined in C whose value at every point of C is not greater than that of the given image f. It should also be noted that in the literature concerning convex sets and convex envelopes, the convex envelope of a function is sometimes also called the convex hull, with the same meaning as the convex envelope of a function. For more details, one can refer to R.T. Rockafellar, Convex Analysis, Princeton Univ. Press, 1970 or to J.-B. Hiriart-Urruty and C. Lemarechal, Fundamentals of Convex Analysis, Springer-Verlag, 2001.
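For a one-dimensional sampled image, the convex envelope just defined can be computed exactly as the lower convex hull of the points (i, f(i)). The sketch below uses the standard monotone-chain hull construction; the helper name is illustrative and is not part of the disclosure:

```python
import numpy as np

def convex_envelope_1d(y):
    """Exact convex envelope of samples y[0..n-1] on a uniform grid:
    the lower convex hull of the points (i, y[i]), evaluated at each i."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    hull = [0]                       # indices of lower-hull vertices
    for i in range(1, n):
        # pop the last vertex while the secant slope decreases (non-convex)
        while len(hull) >= 2 and (
            (y[i] - y[hull[-1]]) * (hull[-1] - hull[-2])
            <= (y[hull[-1]] - y[hull[-2]]) * (i - hull[-1])):
            hull.pop()
        hull.append(i)
    # interpolate piecewise-linearly between the hull vertices
    return np.interp(np.arange(n), hull, [y[k] for k in hull])
```

The result is, by construction, the largest convex piecewise-linear function lying nowhere above the samples.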
(4) The lower, upper and mixed compensated convex transformations introduced by K. Zhang in Compensated Convexity and its Applications, Ann. I. H. Poincare - AN 25 (2008) 743-771, were originally defined to approximate functions. The mathematical transformations disclosed therein may be applied in the present invention. The invention repeatedly uses such convex transforms to create or remove singularities of the image function f representing the image or the geometrical object. Such singularities represent the features of interest of the image or of the geometrical object.
(5) Given a set D of directions, a set C is called D-convex if for any pair of points x and y of C such that the segment [x,y] with endpoints x and y is parallel to a non-zero vector in D, all the other points of the segment also belong to C. A function f defined in a D-convex set C is D-directionally convex when the function f is convex along any direction d of the set D. Similarly to the definition of the convex envelope, given a set D of directions and a function f defined in a D-convex set C, the D-directional convex envelope of f, denoted by C(f; D), is the largest D-convex function defined in C whose value at every point of C is not greater than the value therein of the image f. In the case D = {d}, that is, in the case D is constituted by only one direction d, the notation C(f; d) is also used, whereas the convexification along the directions perpendicular to D is denoted by C(f; D⊥).
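A directional convex envelope C(f; d) for a single direction d can be approximated by convexifying each grid line parallel to d independently. The sketch below (illustrative names, simple midpoint-averaging scheme) treats d parallel to the image rows; for a set D of several directions the corresponding one-dimensional sweeps can be iterated in turn:

```python
import numpy as np

def directional_convex_envelope_rows(img, n_iter=200):
    """Approximate C(f; d) with d along the rows: each row is
    convexified independently by iterated midpoint averaging, which
    lowers a pixel only when it lies above the average of its two
    row-neighbours."""
    J = np.asarray(img, dtype=float).copy()
    for _ in range(n_iter):
        avg = 0.5 * (J[:, :-2] + J[:, 2:])
        J[:, 1:-1] = np.minimum(J[:, 1:-1], avg)
    return J
```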
(6) The convex-based transformations used in the methods according to the present invention require the computation of the convex envelope of the function f in a region A, not necessarily convex and not necessarily the whole region where the image is defined, or the computation of the directional convex envelope of f in the region A along a given set D of directions. A possible realisation of such transformations comprises replacing the convex envelope/hull and/or the directional convex envelope/hull definitions of the compensated convex transforms with the corresponding numerical and/or approximated ones. There are several numerical schemes to approximate the convex envelope, such as those suggested in Y. Lucet, A fast computational algorithm for the Legendre-Fenchel transform, Comput. Optimiz. and Appl. 6 (1996) 27-57; in G. Dolzmann, Numerical computation of rank-one convex envelopes, SIAM J. Numer. Anal. 36 (1999) 1621-1635; in B. Brighi, M. Chipot, Approximated convex envelope of a function, SIAM J. Numer. Anal. 31 (1994) 128-148; and in A. Oberman, The convex envelope is the solution of a nonlinear obstacle problem, Proc. Amer. Math. Soc. 135 (2007) 1689-1694.
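As an illustration of the Legendre-Fenchel route cited above, the convex envelope of a sampled one-dimensional function equals its biconjugate f**. The sketch below evaluates the two conjugations directly, in O(n^2), on a dual grid of secant slopes; Lucet's cited algorithm achieves the same result in linear time. The function name is an assumption for the example:

```python
import numpy as np

def biconjugate_1d(y):
    """Convex envelope on a uniform grid via the double discrete
    Legendre-Fenchel transform: f** equals the convex envelope of f.
    The dual grid of all pairwise secant slopes suffices because the
    envelope is piecewise linear between hull vertices."""
    y = np.asarray(y, dtype=float)
    x = np.arange(len(y), dtype=float)
    dx = x[None, :] - x[:, None]
    dy = y[None, :] - y[:, None]
    s = np.unique(dy[dx != 0] / dx[dx != 0])                          # slopes
    fstar = np.max(s[:, None] * x[None, :] - y[None, :], axis=1)      # f*(s)
    return np.max(s[:, None] * x[None, :] - fstar[:, None], axis=0)   # f**(x)
```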
(7) Within the present invention, the term convex-based transform may comprise any type of convex transform, such as the exact and/or approximate and/or numerical and/or directional approximations of convex and/or concave envelopes, lower and/or upper and/or mixed compensated convex transforms, and/or any combination of these. The convex transform may be a directional convex-based transform, for example along a predefined set of directions.
(8) By saying that a method is affine invariant, it is meant that the method is able to extract from the image, or from the object, the same structure whatever the viewpoint of the observer. In abstract notation, this means that if f denotes the image or the geometrical object, Aff is an affine transformation, T is a said method and T(f) its output when applied to the said image or geometrical object f, then affine invariance of T means that T(f) = T(f + Aff) for any affine transformation Aff. (9) By saying that a method is stable under curvature perturbation, it is meant that the method is able to extract from the image, or from the object, the same structure also in the presence of distortion. In abstract notation, this means that if f denotes the image or the geometrical object, w is a given perturbation, T is a said method and T(f) its output when applied to the said image or geometrical object f, then stability under small curvature perturbation means that T(f) ≈ T(f + w) for any perturbation w of the image or object f such that some norm of the second order derivative of w is small enough.
(10) It will be appreciated by a person skilled in the art that statements given herein which involve the convex envelope can be equivalently described in terms of a concave envelope, noting that the concave envelope of an image f is equal to the opposite of the convex envelope of the opposite image -f.
(11) The methods and the systems according to the present invention are fast (typically, from less than one second up to a minute, depending on the application) and accurate. They are global (i.e. do not require prior knowledge of the features present in the image), are robust against perturbations of the input, have desirable invariance properties, which most known methods do not have, and are able to perform operations on, for example, digital images that known methods are currently unable to perform, such as detection of turning points, crossing points and end points on curves.
Examples of applications of the Invention
Eight examples of how the present invention may be used are given below. However, the invention is not limited to these examples and other applications may become evident to a person skilled in the art.
I. According to a first aspect of the present invention, there is provided a fast and accurate global method for finding a multiscale medial axis (also known as the skeleton or the cut locus) of an object in an image, with the possibility of selecting the branches of a given level. The object may be of any dimension. The medial axis, defined as the locus of points equidistant from two or more points of the object boundary or from a given set of points, is a geometric structure that together with a radius function captures global shape properties of an object. The multiscale medial axis represents objects at multiple scales simultaneously, including both large-scale gross shape properties of the object, and small-scale fine details that are defined by the image data itself. The medial axis or medial surface is a geometric entity of interest to engineering and animation disciplines.
II. According to a second aspect of the present invention, there are provided global methods of processing an image to remove, or reduce the effects of, lines, curves and scratches. For the avoidance of doubt, the term 'scratch' used herein means a discontinuity in the intensity image that is localized over a thin region, which can be a matter of a very few pixels wide. Given the small width of the image region to be restored, this exemplary method falls in the category of disocclusion algorithms in digital image inpainting [see, for example, S. Masnou, J.-M. Morel, Level lines based disocclusion, 5th IEEE International Conference (1998) 105-138 or M. Bertalmio, G. Sapiro, V. Caselles, C. Ballester, Image Inpainting, Proceedings of SIGGRAPH 2000, New Orleans, USA, 2000]. Unlike known methods, however, the proposed method is optimal and fast for regions only a very few pixels wide. It should be noted that classical image denoising algorithms do not apply to this case.
III. According to a third aspect of the present invention, there are provided global methods, affine invariant and not, to locate turning points and crossing points of lower dimensional objects, such as a curve, or to locate curves representing turning points within 3D surfaces and surface-to-surface intersections. The tasks that can be performed by the invention can be generally gathered under the common heading of detection of codimension-two objects, such as points in a 2D image and curves in a 3D image. These geometric entities represent singularities for the image in the sense that therein there is an abrupt change of the tangent direction or of the tangent plane, respectively. The invention does not require prior knowledge of the curve and/or surface type and does not require the parameterization of the curve and/or surface itself, hence there is no need for the solution of complex systems of nonlinear equations and inequalities. The detection of turning and crossing points is of interest, for instance, for CAD applications.
IV. According to a fourth aspect of the present invention, there is provided an affine invariant global method to locate end points of lower dimensional objects, such as end points of a curve or the boundary of lower dimensional objects. The method is also stable under small curvature perturbations of the image graph. As with the third aspect, the invention does not require prior knowledge of the curve and/or surface type and does not require the parameterization of the curve and/or surface itself; the method therefore applies to any type of curve in 2D and surface in 3D.
V. According to a fifth aspect of the present invention, there are provided affine invariant global methods, stable under small curvature perturbations, of processing an image to locate the edges, ridges, valleys and saddles in the said image. Within the present invention, the geometry of a ridge is defined herein as the part of the graph of the image that is concave and where, at least along one direction, the possibly nonsmooth directional curvature is large and negative, whereas a valley is defined herein as the part of the graph of the image that is convex and where, at least along one direction, the possibly nonsmooth directional curvature is large and positive. The invention can therefore mark the ridges and valleys by different colours. The part where the ridge and valley curves are parallel to each other represents the edge. The direction of the jump is indicated by the two coloured curves, a feature which is novel and is not known in any of the previously-proposed edge detectors.
VI. According to a sixth aspect of the present invention, there are provided global methods for processing an image to smooth the image, or to smooth the angles of a geometric object, or to smooth a function. This aspect of the invention also applies to the denoising of functions or images with noise. A nonsmooth continuous function f is meant herein to be a function that is continuous but not continuously differentiable. In geometric terms, this means that there are points on the graph of f where one cannot define a unique tangent plane. The term nonsmooth domain is intended to mean a domain with a boundary that might have corners.
VII. According to a seventh aspect of the present invention, there are provided affine invariant global methods, stable under small curvature perturbations, of processing an image f to detect corners, necks and small blobs in the image.
VIII. According to an eighth aspect of the present invention, there are provided global methods of processing an image to detect thin objects through enhancement.
These and other examples of the present invention will be apparent from, and elucidated with reference to, the embodiments described herein. As a result of the application of the described transformations to process images in all of the claimed aspects of the invention, the desired results are obtained in a robust, global and flexible manner.
While the invention has been described with respect to image processing or processing of domains or objects in an image, it is apparent to a person skilled in the art that the same method may be applied to data treatment in general.
It should be noted that all the examples of the present invention do not necessarily relate to digital images. Analogue images, such as those produced directly by a camera and still used, for instance, in geography and information systems, can also be used. Indeed, this is another distinguishing feature of the present invention, in contrast with known methods, which require that the image obtained by a camera, or any other apparatus, be transformed into digitised form; that is, represented in terms of an array of elements, called pixels. Further, it should be noted that the present invention does not necessarily relate to two-dimensional images. Data sets stored by vectors or arrays of any dimension can be also processed and analysed with the methods and systems according to the present invention.
It should be also noted that the methods described herein are applicable in particular to any digital image, including colour images.

Description of the Drawings
Embodiments of the various examples of the present invention will now be described by way of example only, and with reference to the accompanying drawings, in which:
Figure 1 is a schematic representation of a first two dimensional array of elements comprising image values according to the present invention;
Figure 2 illustrates a system to obtain the medial surface according to a first aspect of the present invention;
Figure 3 is a schematic representation of a two dimensional array of elements comprising image values measuring the strength of each element; Figure 4 shows a predetermined region of 3 elements by 3 elements with indication of the pixels and their weight used for the construction of the image representing an approximation of the convex envelope of the input image;
Figure 5 to Figure 8 illustrate objects with the medial axis determined in accordance with the present invention; Figure 9 illustrates a system that performs a first method for removing or reducing effects of scratches according to a second aspect of the present invention;
Figure 10 illustrates a system that performs a second method for removing or reducing effects of large damaged area of an image according to a second aspect of the present invention.
Figure 11 illustrates a system that applies a first method for detecting turning and crossing points of lower dimensional objects according to a third aspect of the present invention;
Figure 12 illustrates a system that applies a second method for detecting turning and crossing points of lower dimensional objects according to a third aspect of the present invention;
Figures 13 to 16 illustrate images and examples of detection of turning and crossing points determined in accordance with the present invention; Figure 17 illustrates a system that applies a first method for detecting end points of lower dimensional objects according to the fourth aspect of the present invention;
Figure 18 illustrates a system that applies a second method for detecting end points of lower dimensional objects according to the fourth aspect of the present invention; Figure 19 illustrates a curve with localization of its end points determined in accordance with the present invention.
Figure 20 illustrates a system that applies a method for locating ridges in an image according to a fifth aspect of the present invention; Figure 21 illustrates a system that applies a method for locating valleys in an image according to a fifth aspect of the present invention;
Figure 22 illustrates a system that applies a first method for locating edges in an image according to a fifth aspect of the present invention;
Figure 23 illustrates a system that applies a second method for locating edges in an image according to a fifth aspect of the present invention;
Figure 24 illustrates a system that applies a method for locating saddle points in an image according to a fifth aspect of the present invention;
Figure 25 illustrates a system that applies a third method for locating edges in an image according to a fifth aspect of the present invention; Figure 26 illustrates a system that applies a first method for the smoothing of an image or function with and without noise according to a sixth aspect of the present invention;
Figure 27 illustrates a system that applies a second method for the smoothing of an image or function with and without noise according to a sixth aspect of the present invention;
Figure 28 illustrates a system that applies a third method for the smoothing of an image or function with and without noise according to a sixth aspect of the present invention;
Figure 29 illustrates a system that applies a method for the smoothing of interior and/or exterior angles of a geometric domain according to a sixth aspect of the present invention;
Figure 30 illustrates a system that applies a first method for detecting corners, necks and small blobs in the image according to a seventh aspect of the present invention; Figure 31 illustrates a system that applies a second method for detecting corners, necks and small blobs in the image according to a seventh aspect of the present invention;
Figures 32 and 33 illustrate an example of a geometric object and of the detection of its corners and neck determined in accordance with the present invention.
Detailed Description of Embodiments of the Invention

Various methods and systems according to specific exemplary embodiments of the present invention will now be described in more detail. The several aspects of the invention will be herein illustrated and described using a digitized representation of the input image, which is taken to be monochrome, for example grey-scale, as shown for instance in Figure 1. One skilled in the art will however appreciate that the invention is not limited to these monochrome image representations. The monochrome digitized images have been chosen for illustrative purposes only and the invention can be equally applied to any type of image representation that contains information on the position of the pixels and the corresponding image values. For example, colour images can be used, although the proposed method then becomes more complex. Furthermore, the method can be equally applied to analogue images if regions of interest in the analogue image are identified.
Figure 1 shows an example of a two dimensional array 11 of 10x10 elements, each of which contains at least one associated image value. The elements may be considered as pixels with coordinates i and j denoting the row and column number, respectively. The number of pixels, however, is limited to 10x10 for illustrative purposes only, and any number of pixels and geometry of pixel arrangement can be used. In the example of Figure 1 the pixel of coordinates i=1 and j=1 is a first element 1 at the top left corner of the array, whereas a second element 2 corresponds to the pixel of coordinates i=3 and j=2. The image comprises a light region with image values of approximately 210 and a dark region with image values of approximately 45. Figure 1 also shows a typical element 4 with its neighbourhood 3 of 3 elements by 3 elements, which is generally used within the present invention. Neighbourhoods of different sizes can also be considered. The several aspects of the invention will next be illustrated in the form of flow diagrams. Process steps to be performed are designated by rectangular boxes, while circles represent user interfaces. Directed lines represent the flow of information, usually in the form of images, between processes, whereas dashed lines represent the flow of information that is not an image, for instance a set of directions or the size of the image, which is generally passed to the process unit through a user interface. The arrow on each line indicates where the flow of information is going.
Figure 2 illustrates a system that performs an object medial representation. The system includes an input unit 10 which might be connected to a unit 50 that converts the initial image f into a binary initial image F through, for instance, thresholding. A threshold value can be provided by means of a user interface 51. Alternatively, a reference list or look-up table can be used for obtaining a threshold value. If the initial image f is already given in binary form, then this step can be bypassed and the unit 10 would pass the input binary image F directly to the unit 400. Considering, for instance, the initial input image f represented as the two dimensional array 11 of elements as shown in Figure 1, the unit 50 will communicate with the unit 51 to retrieve the value of a threshold chosen by the user or obtained from the look-up table and will operate on each pixel of the array 11 to locate those pixels whose value is, for instance, greater than the given threshold. The converter unit 50 will then output an input binary image F with value 1 at the pixels having a value above the threshold and 0 otherwise. The region with pixel value 1 is the geometrical object. This input binary image F represents the input for the distance transform unit 400. The distance transform unit 400 produces a distance transform image DT following available standard processes. The distance transform image DT is the image that, for each pixel (i,j), gives in DT(i,j) the distance of the pixel to the closest pixel with value 0. The distance transform image DT is then processed by the block unit 5000. This block unit 5000, shown in dashed line, implements on the distance transform image DT as input image one of the fundamental transformations introduced within the invention and comprises different process units. The input image to the unit 5000 is, within this first aspect of the invention, the distance transform image DT.
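For small images, the distance transform unit 400 can be realised by the brute-force computation sketched below; a production system would use one of the standard fast algorithms. The function name is illustrative:

```python
import numpy as np

def distance_transform(F):
    """Euclidean distance transform of a binary image F: DT(i,j) is the
    distance from pixel (i,j) to the nearest pixel with value 0.
    Brute force, O(pixels x background pixels); for illustration only,
    and assumes F contains at least one 0-pixel."""
    F = np.asarray(F)
    ii, jj = np.indices(F.shape)
    zeros = np.argwhere(F == 0)          # coordinates of background pixels
    d2 = ((ii[..., None] - zeros[:, 0]) ** 2
          + (jj[..., None] - zeros[:, 1]) ** 2)
    return np.sqrt(d2.min(axis=-1))
```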
The size of the distance transform image DT is, for a 2D representation of a 2D object, the number of rows times the number of columns, whereas for a 3D representation of a 3D object in terms of a 3D array it would also include the number of layers. The distance transform image DT is passed to an image strengthening unit 20 to construct a strengthened image g with the same size as the binary input image F and whose pixel value measures the strength of the pixel; for instance, the strengthened image g could be given by g(i,j) = λ*(i^2 + j^2), where the parameter λ is passed to the strengthening unit 20 by the user interface, data base or look-up table 21. Figure 3 illustrates the two dimensional array 12 of elements measuring the strength of the pixels considering a value of λ equal to 1. The distance transform image DT and the strengthened image g are passed to the adder unit 30 which sums the two images to produce an input image J.
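The strengthening unit 20 and, implicitly, the adder unit 30 then amount to very little code. The sketch below assumes zero-based pixel indices, whereas the description above numbers pixels from 1, so for λ = 1 it reproduces the array of Figure 3 up to that index shift:

```python
import numpy as np

def strengthened_image(shape, lam=1.0):
    """Quadratic pixel-strength image of unit 20: g(i,j) = lam*(i^2 + j^2)."""
    i, j = np.indices(shape)
    return lam * (i.astype(float) ** 2 + j ** 2)

# adder unit 30: J = DT + g, for arrays DT and g of the same shape
```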
Returning to Figure 2, the output of the adder unit 30 is then passed to the convex enveloper unit 100. The convex enveloper unit 100 performs the following steps: (a) Operates on each pixel (i,j) of the input image J to construct a first average image J1. The value of the new first average image J1 at the pixel is obtained by taking the smallest between the image value J(i,j) and some convex combinations of the values of the input image J at some pixels belonging to a predetermined neighbourhood of, for instance, 3 elements by 3 elements centred on the pixel which is operated on. The neighbourhood of 3 by 3 elements is used for illustrative purposes only and any other number and distribution of pixels convexly surrounding the centre pixel (i,j) or a central area can be used. The neighbouring pixels form a convex set. Staying with the neighbourhood of 3 by 3 elements, one can consider the following combinations: 0.5*(J(i-1,j)+J(i+1,j)), 0.5*(J(i,j-1)+J(i,j+1)), 0.5*(J(i-1,j-1)+J(i+1,j+1)), 0.5*(J(i-1,j+1)+J(i+1,j-1)) and (J(i,j-1)+J(i-1,j)+J(i+1,j+1))/3, (J(i-1,j-1)+J(i+1,j)+J(i,j+1))/3, (J(i+1,j-1)+J(i-1,j)+J(i,j+1))/3, (J(i,j-1)+J(i+1,j)+J(i-1,j+1))/3. The pixels of the neighbourhood of 3 elements by 3 elements centred on the pixel and the respective weights are displayed in Figure 4; (b) Replaces the input image J with the first average image J1 and then repeats step (a), again and again, for a number n of times set through the user interface 101. The output of the convex enveloper unit 100 is the n-th average image Jn, which is passed to the unit 30 and summed therein with the output of the image inverter unit 25.
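Steps (a) and (b) of the convex enveloper unit 100 can be sketched as follows, vectorised over all interior pixels. The eight convex combinations are those listed in step (a); boundary pixels are left unchanged in this simplified version, and the function name is illustrative:

```python
import numpy as np

def convex_enveloper(J, n_iter):
    """Unit 100 sketch: replace each interior pixel by the minimum of
    its value and the eight 3x3 convex combinations of step (a);
    repeat n_iter times (step (b))."""
    J = np.asarray(J, dtype=float).copy()
    for _ in range(n_iter):
        N, S = J[:-2, 1:-1], J[2:, 1:-1]    # J(i-1,j), J(i+1,j)
        W, E = J[1:-1, :-2], J[1:-1, 2:]    # J(i,j-1), J(i,j+1)
        NW, SE = J[:-2, :-2], J[2:, 2:]     # J(i-1,j-1), J(i+1,j+1)
        NE, SW = J[:-2, 2:], J[2:, :-2]     # J(i-1,j+1), J(i+1,j-1)
        combos = np.stack([
            0.5 * (N + S), 0.5 * (W + E), 0.5 * (NW + SE), 0.5 * (NE + SW),
            (W + N + SE) / 3, (NW + S + E) / 3,
            (SW + N + E) / 3, (W + S + NE) / 3,
        ])
        J[1:-1, 1:-1] = np.minimum(J[1:-1, 1:-1], combos.min(axis=0))
    return J
```

Each update can only lower a pixel value, and a discretely convex image is a fixed point, consistent with the interpretation of the n-th average image Jn as an approximation of the convex envelope.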
For the specific case, the unit 20 passes the strengthened image g to the image inverter unit 25, which constructs an opposite or negative image of g and passes such image to the unit 30, where it is summed to the n-th average image Jn. The result is a processed image of the distance transform image which we call the lower compensated convex transform of the distance transform image DT and denote by the symbol Cl_g(DT). This image is a tight smooth representation of the distance transform image DT, in the sense that Cl_g(DT) is equal to the distance transform image DT almost everywhere apart from pixels where a singularity of the distance transform image DT is localized. It is known that such a singularity represents the medial axis, hence the difference between the distance transform image DT and Cl_g(DT) will deliver the multiscale medial axis map. In order to perform such an operation, the image Cl_g(DT), as output of the block unit 5000, is passed to the image inverter unit 25, which inverts the image; the result is then passed to the adder unit 30, where it is summed to the distance transform image DT. The resulting image, given by DT − Cl_g(DT), depending on the value of the parameter λ, represents the geometric object at multiple scales simultaneously, including both large-scale gross shape properties of the object and small-scale fine details that are defined by the image data itself. Since the branches of the medial axis are represented in the said image by a certain level that reflects their strength, it is possible to select the main branches by a simple threshold, which can be performed by a threshold unit 50 using a value of the threshold thres provided through a user interface, a data base or a look-up table 51.
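Putting the units together, the multiscale medial axis map DT − Cl_g(DT) can be sketched end to end. This is a hedged illustration rather than the patented implementation: it assumes λ = 0.1 and n = 10, uses only the four midpoint combinations for brevity, and takes a synthetic distance transform of a vertical strip whose medial axis is the central column:

```python
import numpy as np

def env_sweep(J):
    """One sweep of the convex enveloper unit 100 (midpoint combinations
    only, for brevity; interior pixels only)."""
    out = J.copy()
    combos = [
        0.5 * (J[:-2, 1:-1] + J[2:, 1:-1]),
        0.5 * (J[1:-1, :-2] + J[1:-1, 2:]),
        0.5 * (J[:-2, :-2] + J[2:, 2:]),
        0.5 * (J[:-2, 2:] + J[2:, :-2]),
    ]
    out[1:-1, 1:-1] = np.minimum(J[1:-1, 1:-1], np.minimum.reduce(combos))
    return out

def lower_transform(DT, lam, n):
    """Cl_g(DT): strengthen (unit 20), add (unit 30), convexify (unit 100),
    then subtract g again (inverter 25 + adder 30)."""
    i, j = np.indices(DT.shape, dtype=float)
    g = lam * (i**2 + j**2)
    J = DT + g
    for _ in range(n):
        J = env_sweep(J)
    return J - g

# Synthetic distance transform of a 9-pixel-wide vertical strip:
# DT(i, j) = min(j, 8 - j); its ridge (the medial axis) is column 4.
col = np.arange(9, dtype=float)
DT = np.tile(np.minimum(col, 8 - col), (9, 1))

MMA = DT - lower_transform(DT, lam=0.1, n=10)   # multiscale medial axis map
```

The map is non-negative, vanishes far from the ridge, and is concentrated around the central column with a level reflecting branch strength, which is what the threshold unit 50 then exploits.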
It must be noted that both the geometric threshold thres used within the last process unit and the value of the parameter λ used to compute the strengthened image g are a measure of the geometric strength of individual branches within the multiscale medial axis map. All known methods of detecting the medial axis of binary images and/or objects, in contrast to the method proposed herein, have the drawback of being very sensitive to noise on the boundary, because each small variation gives rise to a new branch of the medial axis whose length is not indicative of the intensity of the noise. In the method proposed herein, the multiscale medial axis profiles are novel in ranking the strength of the different branches as well as locating them, which gives a technique for filtering boundary noise. The selection of branches can be realised either by modifying the parameter λ (the higher λ is, the fewer weak branches will be detected), or by using a threshold thres on the image DT − Cl_g(DT). Figure 5 and Figure 6 illustrate 2D objects with their medial axis determined in accordance with the method described above. The picture shown in Figure 5 is obtained by taking λ equal to 1 and thres equal to 0, hence it represents all the branches of the medial axis corresponding to the λ used, whereas the picture shown in Figure 6 displays only the main branches, obtained by using thres equal to 25 for an 8-bit greyscale image. The picture shown in Figure 8 displays the network of bronchioles for the image of the dog lung displayed in Figure 7, obtained by the method for determining medial axes described above.
Figure 9 illustrates a system that performs another exemplary embodiment of the present invention, comprising a method for removing or reducing the effects of lines, curves and/or scratches, particularly suitable for a thin damaged area with bright pixel values, that is, where the pixel value within the damaged area is much higher than the background and the damaged area is only a few pixels wide. Referring to Figure 9, the system includes the input unit 10 of the initial input image f containing the thin damaged area, the lower convex enveloper unit 5000 described already with reference to Figure 2, and the display unit 1000 which displays the restored image. The damaged area is interpreted as a singularity of the initial input image f, and the processing of the image by the lower convex enveloper unit 5000 geometrically replaces the singularity of the image, that is, the region where there is the abrupt change of intensity due to the thin scratch, with a smooth image that interpolates the pixel values in an outer neighbourhood of the region boundary. Figure 10 illustrates another system for removing or reducing the effects of a large damaged area of an image with bright pixel values. With respect to the system shown in Figure 9, the system shown in Figure 10 presents in addition a preprocessing step aimed at reducing the damaged area width. Referring to Figure 10, the system includes the input unit 10, which passes the initial input image f to the damage identifying unit 200. This identification can be done manually through a user interface or automatically. For instance, in the latter case, the damage identifying unit 200 operates on each pixel of the initial input image f and locates those pixels with image value f(i,j) not lower than a geometric threshold thres set through the user interface 201.
Such information on the damaged area is then passed to the image definition unit 220 and the opposite image definition unit 240, which process the initial input image f to construct two further images: the initial restored image L and the initial opposite restored image M, respectively. In more detail, the image definition unit 220 operates on each pixel of the initial input image f to produce an initial restored image L which is equal to the initial input image f outside the damaged area and is equal to the geometric threshold thres at the pixels belonging to the said damaged area. If the damaged area is identified manually, the value of the initial restored image L within the damaged area is set equal to 255 for an 8-bit grey scale image. The image definition unit 220 then passes the initial restored image L and information on the damaged area to the local convex enveloper unit 150, which performs operations similar to the convex enveloper unit 100 but restricted now to only the pixels belonging to the damaged area. In more detail, the local convex enveloper unit 150 performs the following steps: (a) Operates on each pixel of the damaged area to construct a first restored image L1, which is obtained by taking the smallest between L(i,j) and, for instance, the following convex combinations: 0.5*(L(i-1,j)+L(i+1,j)), 0.5*(L(i,j-1)+L(i,j+1)), 0.5*(L(i-1,j-1)+L(i+1,j+1)), 0.5*(L(i-1,j+1)+L(i+1,j-1)) and (L(i,j-1)+L(i-1,j)+L(i+1,j+1))/3, (L(i-1,j-1)+L(i+1,j)+L(i,j+1))/3, (L(i+1,j-1)+L(i-1,j)+L(i,j+1))/3, (L(i,j-1)+L(i+1,j)+L(i-1,j+1))/3. (b) Replaces the initial restored image L with the first restored image L1 and then repeats step (a), for a number n of times set through the user interface 103. The output of this process is a new image, the n-th restored image Ln, which is passed to the average unit 250 described below.
Note that, further to steps (a) and (b) above, only those pixel values of the initial input image f that belong to the damaged area are changed to the values of the new n-th restored image Ln, whereas in the region outside the damaged area the values of the n-th restored image Ln are equal to those of the input image f. The initial opposite restored image M constructed by the opposite image definition unit 240 is equal to the opposite of the initial input image f outside the damaged area and is equal to zero at the pixels (i,j) belonging to the said damaged area. Like the initial restored image L, the initial opposite restored image M is also processed by the local convex enveloper unit 150, which produces a new n-th restored opposite image Mn with the values outside the damaged area equal to the opposite of the initial input image f. The image Mn is then passed to the unit 25, which computes the image opposite of Mn and sends the result to the average unit 250, which produces another image N as, for instance, the average image between the n-th restored image Ln and the opposite of the n-th restored opposite image Mn. The so processed image N presents a reduced damaged area which can then be processed using the system illustrated in Figure 9 and repeated, for convenience, in Figure 10. Note that the image N is equal to the initial input image f outside the damaged area. Also, each pixel in the damaged area is operated on, either sequentially or simultaneously. When such a pixel is operated on, a region of 3 elements by 3 elements centred on the pixel being operated on is interrogated to compute the said convex combinations.
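The restoration pipeline of Figure 10 can be sketched as follows. This is a hedged, simplified illustration: it assumes one manually identified damaged pixel, uses only the four midpoint combinations for brevity, and takes the in-damage value 255 for an 8-bit image as in the text:

```python
import numpy as np

def masked_env_step(L, mask):
    """One sweep of the local convex enveloper unit 150: the convex-combination
    minimum is applied only at interior pixels flagged as damaged."""
    out = L.copy()
    c = L[1:-1, 1:-1]
    combos = [
        0.5 * (L[:-2, 1:-1] + L[2:, 1:-1]),
        0.5 * (L[1:-1, :-2] + L[1:-1, 2:]),
        0.5 * (L[:-2, :-2] + L[2:, 2:]),
        0.5 * (L[:-2, 2:] + L[2:, :-2]),
    ]
    upd = np.minimum(c, np.minimum.reduce(combos))
    out[1:-1, 1:-1] = np.where(mask[1:-1, 1:-1], upd, c)
    return out

f = np.full((5, 5), 10.0)
f[2, 2] = 255.0                      # bright damaged pixel
mask = f >= 255                      # damage identifying unit 200

L = f.copy(); L[mask] = 255.0        # initial restored image L (unit 220)
M = -f.copy(); M[mask] = 0.0         # initial opposite restored image M (unit 240)
for _ in range(3):                   # local convex enveloper unit 150
    L = masked_env_step(L, mask)
    M = masked_env_step(M, mask)

N = 0.5 * (L + (-M))                 # inverter 25 + average unit 250
```

Outside the damaged area N reproduces f exactly; inside, the bright pixel is replaced by a value interpolated from the surrounding boundary, here the constant background.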
For the pixels belonging to the border of the damaged area, the said region of 3 elements by 3 elements will contain at least one pixel (r,q) that does not belong to the damaged area and whose image value is equal to the input image f or to its opposite, according to whether one is dealing with the processing of the initial restored image L or of the initial opposite restored image M, respectively. Though Figure 9 and Figure 10 refer to the restoration of damaged areas with bright pixel values, it will be appreciated by one skilled in the art how to adapt the methods described therein to the restoration of damaged areas with dark pixel values, that is, to the case where the pixel values of the damaged area are lower than the background.
Figure 11 shows in block diagram form a system that applies a first method for detecting turning and crossing points of lower dimensional objects, such as a curve or curves representing turning points within 3D surfaces and surface-to-surface intersections. The system includes the input unit 10, which loads the geometric object f, for instance as an image, which is then passed to the image upper threshold unit 50, which converts such initial image f into a binary input image F, usually through thresholding by means of the user interface 51. The system is described here for the case where the geometrical object is represented by pixel values equal to zero; the modifications to make in the case where the geometric object is represented by pixel values equal to one will be apparent to one skilled in the art. The binary input image F is then passed to the lower convex enveloper unit 5000, which has already been described with reference to its component process units in describing Figure 2. The output of the process by the lower convex enveloper unit 5000 is a first image that we call the lower compensated convex transform of the input object F and denote by Cl_g(F). This image is then passed to a second basic block unit, shown also in dashed lines, which is the upper convex enveloper unit 6000. This unit realizes on the input image another fundamental transformation introduced within the invention and comprises different process units. Within this first method, the input image to the unit 6000 is the transformed image Cl_g(F). This image is passed to the image inverter 25 to compute the opposite image, and information on the size of Cl_g(F) is passed to the image strengthening unit 20 to construct the strengthened image g, which is, within this invention, for instance, equal to the one used in the lower convex enveloper unit 5000 described above.
The opposite of the image Cl_g(F) and the strengthened image g are then passed to the process adder unit 30, which generates another image as the sum of the two. The sum image is then passed to the convex enveloper process unit 100, which produces another image representing basically an approximation of the convex envelope of the image that is passed to the unit. The image created within the process unit 100 is then transformed into its opposite image within the image inverter 25 and finally summed to the strengthened image g. For convenience and ease of referencing, the transformation realized by the upper convex enveloper unit 6000 is denoted by the symbol Cu_g; hence the resulting image following the processing of the lower convex enveloper unit 5000 first and of the upper convex enveloper unit 6000 second will be denoted by Cu_g(Cl_g(F)). The geometrical effect of this transformation is to create an extremal point, localized at the interest point. To single out such points, the image Cu_g(Cl_g(F)) can be passed either to the process unit 65 or to the process unit 52. The process unit 65 will operate on each pixel of the image Cu_g(Cl_g(F)) and will locate those pixels that are strict local minima. A pixel is a strict local minimum if the value of the said image therein is lower than the values of the said image at all the pixels associated with the pixel which is operated on, within a predetermined region, for instance a region of 3 elements by 3 elements with centre at the pixel. The unit 65 will then output to the display unit 1000 a binary image giving only the location of the turning and crossing points within the said geometric object represented by the binary input image F. The image lower threshold unit 52, on the other hand, will operate on each pixel of the transformed image Cu_g(Cl_g(F)) and will locate those pixels where the image values of the transformed image Cu_g(Cl_g(F)) are not greater than a geometric threshold thres assigned through the user interface 53.
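The upper convex enveloper unit 6000 reuses the same approximate convex envelope on the inverted image; a hedged sketch with assumed λ = 0.1 and midpoint combinations only, showing that the resulting Cu_g fills a downward spike while never dropping below the input:

```python
import numpy as np

def env_steps(J, n):
    """n sweeps of the approximate convex envelope (midpoint combinations,
    interior pixels only)."""
    for _ in range(n):
        out = J.copy()
        combos = [
            0.5 * (J[:-2, 1:-1] + J[2:, 1:-1]),
            0.5 * (J[1:-1, :-2] + J[1:-1, 2:]),
            0.5 * (J[:-2, :-2] + J[2:, 2:]),
            0.5 * (J[:-2, 2:] + J[2:, :-2]),
        ]
        out[1:-1, 1:-1] = np.minimum(J[1:-1, 1:-1], np.minimum.reduce(combos))
        J = out
    return J

def upper_transform(X, lam, n):
    """Cu_g(X) = g - env(g - X): invert (unit 25), add g (units 20/30),
    convexify (unit 100), then invert and add g again."""
    i, j = np.indices(X.shape, dtype=float)
    g = lam * (i**2 + j**2)
    return g - env_steps(g - X, n)

X = np.zeros((7, 7)); X[3, 3] = -8.0      # image with a downward spike
Cu = upper_transform(X, lam=0.1, n=2)
```

Because the inner envelope only lowers g − X, the result satisfies Cu_g(X) ≥ X everywhere, fitting the image from above as the text describes.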
The unit 52 will then output to the display unit 1000 a binary image showing the location and orientation, that is, the shape of the curve in the neighbourhood of the turning and crossing points within the said geometric object represented by the binary input image F. The invention applied in the system described in Figure 11 produces an image of the turning and crossing points which is not affine invariant. It is however possible to construct a system which meets such a property at the expense of carrying out an additional operation, as illustrated in Figure 12. Given the binary input image F representing the geometrical objects, the output of the lower convex enveloper unit 5000, which is the image Cl_g(F), is converted into the opposite image by the process unit 25 and sent to the adder unit 30, where it is added to the image Cu_g(Cl_g(F)) obtained by processing the image Cl_g(F) by the upper convex enveloper unit 6000 as described with reference to Figure 11. Figure 13 to Figure 16 show turning and crossing points for plane curves determined in accordance with the first method of the present invention, with Figure 16 showing only the location of the branching points of the vessel network of the retina shown in Figure 15, whereas Figure 14 displays also the orientation of the branching points of the planar curve shown in Figure 13.
Figure 17 illustrates a system that applies a first method for detecting end points of lower dimensional objects. The system includes the input unit 10, which loads the geometric object f, for instance as an image, which is then passed to the image upper threshold unit 50, which converts such image f into a binary image F, usually through thresholding by means of the user interface or data base 51. The system is described here for the case where the geometrical object is represented by pixel values equal to zero. The binary image F is then passed to the directional lower convex enveloper unit 5500, which is shown in dashed line and includes different process units. The directional lower convex enveloper unit 5500 applies on the input image another fundamental transformation introduced within the invention, which is the lower compensated directional convex transform along a given direction, defined in terms of directional convexity. As within the lower convex enveloper unit 5000, the image F communicates its size to the image strengthening unit 20 to construct the strengthened image g, and such image is then passed to the adder unit 30 to produce a new image K as the sum of the strengthened image g and of the input binary image F. The sum image is passed to the block unit 7000, which comprises different process subunits according to the number of directions that are passed to the block unit 7000 through the user interface 80. The set of directions d, which in 2D can, for instance, be represented by unit vectors, represents predefined directions along which the unit 7000 will produce directional convex envelopes. Each process unit within the block unit 7000 is made of two subunits, which are the directional convex enveloper unit 700 and the direction input unit 701.
More particularly, for each unit vector d of the predefined set of directions stored in the direction input unit 701, the directional convex enveloper unit 700 performs the following steps: (a) The directional convex enveloper unit 700 operates on each pixel of the said image K to construct an array HES which represents a numerical approximation of the second derivatives of the image K. For instance, for a 2D image, HES is an array with 2 rows and 2 columns, with the element at row 1 and column 1 given by HES(1,1) = (K(i-1,j) − 2K(i,j) + K(i+1,j))/2; the element at row 2 and column 2 given by HES(2,2) = (K(i,j-1) − 2K(i,j) + K(i,j+1))/2; whereas the element at row 1 and column 2, HES(1,2), is equal to the element at row 2 and column 1, HES(2,1), and both are given by (K(i+1,j+1) + K(i-1,j-1) − K(i+1,j-1) − K(i-1,j+1))/4. (b) The directional convex enveloper unit 700 then communicates with the direction input subunit 701 and retrieves the direction d. (c) The directional convex enveloper unit 700 then operates on each said pixel to compute the quantity (HES*d)*d, where (HES*d) denotes the standard row-by-column product of a matrix by a vector, which has a vector as its result, whereas (HES*d)*d represents the inner product between the two vectors HES*d and d and gives a real number as its result. (d) The directional convex enveloper unit 700 then operates on each said pixel to construct another image K1d, which we call the first directional average image along d, and which is obtained by taking the smallest between K(i,j) and the value (HES*d)*d obtained at the previous step (c). (e) The directional convex enveloper unit 700 then replaces the image K with the image K1d and performs steps (a) to (d) again, repeating for a user determined number n of times set through the user interface 101, to produce the n-th directional average image along d, Knd.
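The finite-difference stencils of step (a) and the directional quantity (HES*d)*d of step (c) follow directly from the formulas above (note the division by 2, as written in the text, rather than the more common unit-spacing convention); a minimal sketch:

```python
import numpy as np

def hes(K, i, j):
    """2x2 second-derivative array HES at pixel (i, j), per step (a)."""
    H = np.empty((2, 2))
    H[0, 0] = (K[i-1, j] - 2*K[i, j] + K[i+1, j]) / 2.0
    H[1, 1] = (K[i, j-1] - 2*K[i, j] + K[i, j+1]) / 2.0
    H[0, 1] = H[1, 0] = (K[i+1, j+1] + K[i-1, j-1]
                         - K[i+1, j-1] - K[i-1, j+1]) / 4.0
    return H

def directional_second(K, i, j, d):
    """(HES*d)*d of step (c): matrix-vector product, then inner product."""
    d = np.asarray(d, dtype=float)
    return d @ (hes(K, i, j) @ d)

ii, jj = np.indices((7, 7), dtype=float)
K1 = ii**2          # pure curvature along i: HES = [[1, 0], [0, 0]]
K2 = ii * jj        # pure mixed term:        HES = [[0, 1], [1, 0]]
```

On the quadratic test images the stencils recover the expected constant second derivatives, e.g. directional_second(K1, i, j, (1, 0)) equals 1 at every interior pixel.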
This image is passed to the adder unit 30, where it is summed to the opposite of the strengthened image g to produce a first processed image, which we call the lower compensated directional convex transform of F along the direction d and denote by the symbol Cl_g(F;d). Each of these images Cl_g(F;d), one for each direction d, is then passed to the directional upper convex enveloper unit 6500, which is also shown in dashed line and applies on the input image another fundamental transformation introduced within the invention, which is the upper compensated directional convex transform along a given direction. In more detail, the image Cl_g(F;d) is passed to the unit 25, which computes the corresponding opposite image; this is then summed within the adder unit 30 to another strengthened image h, which is built by the unit 20 and which can in general be different from the first strengthened image g used within the directional lower convex enveloper unit 5500. The output is another image H whose pixel values are therefore given by H(i,j) = h(i,j) − Cl_g(F;d)(i,j).
The strengthened image h at the pixel can be given, for instance, by h(i,j) = τ*(i^2+j^2), where τ is a user control parameter assigned through the user interface 101. The parameter τ can in general be different from the λ used to define the strengthened image g. The image H is then passed to the unit 700, which performs the sequence of steps described above for that unit, with the image K therein replaced by the image H and the direction d which is operated on replaced by its perpendicular direction, denoted by d⊥ and passed to the unit 701 through the perpendicular direction input user interface 85. The result is a new image Hn, whose opposite image is produced within the process unit 25 and summed to the strengthened image h within the adder unit 30. The resulting image is denoted by the symbol Cu_h(Cl_g(F;d);d⊥), and its pixel values are given by Cu_h(Cl_g(F;d);d⊥)(i,j) = h(i,j) − Hn(i,j). The image Cu_h(Cl_g(F;d);d⊥) is then summed to the opposite of the image Cl_g(F;d) to produce another image, called within this invention the end point transform of F along the direction d and denoted by EPgh(F;d), that is, EPgh(F;d)(i,j) = Cu_h(Cl_g(F;d);d⊥)(i,j) − Cl_g(F;d)(i,j). The end point transform images EPgh(F;d) of F along each direction d are then passed to the unit 900 to produce another image, denoted by EPgh(F;D), whose value at the pixel is obtained by adding the values EPgh(F;d)(i,j) over all unit vectors d, that is, EPgh(F;D)(i,j) = SUM{EPgh(F;d)(i,j), for all unit vectors d belonging to the predefined set D of directions}. The geometrical effect of this transformation is to create an extremal point localized at the end points. To single out such points, the image EPgh(F;D) can be passed, for instance, to the process unit 60 for the localization of the strict local maxima.
Figure 18 illustrates a system that applies a second method for detecting end points of lower dimensional objects. Such a system replaces the directional upper convex enveloper unit 6500 shown in Figure 17 with the directional lower convex enveloper unit 5500, which modifies each image Cl_g(F;d) by applying the lower compensated directional convex transform along the corresponding perpendicular direction. Figure 19 illustrates a curve with localization of its end points determined in accordance with the present invention.
Figure 20 illustrates a system that performs another exemplary embodiment of the present invention, comprising a method for locating ridges in an image. The system includes the input unit 10 of the initial input image f, which is then processed by the lower convex enveloper unit 5000, already described with reference to Figure 2, to produce the lower compensated convex transform image of the initial input image f. This is passed to the unit 25 to be transformed into its opposite and then summed to the initial input image f within the adder unit 30. The resulting image of the ridge map is then passed to the display unit 1000. From a geometrical point of view, the lower compensated convex transform of the initial input image f fits the graph of the initial input image f from below with a fixed negative curvature which is controlled through the user interface 21. The gap between the original ridge of the image and the smoother lower transform therefore provides a marker for the ridge that records the relative strength of different ridges. The ridge map so obtained can be further processed to obtain the edges of the image by a simple threshold, which can be performed by the unit 50 using the threshold given through the user interface 51, as illustrated in Figure 22.
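The ridge map f − Cl_g(f) can be sketched with the same iterative envelope machinery; a hedged demo with assumed λ = 0.1, n = 5 and midpoint combinations only, on a synthetic greyscale tent image whose crest plays the role of the ridge:

```python
import numpy as np

def env_sweep(J):
    """One approximate convex-envelope sweep (midpoint combinations only)."""
    out = J.copy()
    combos = [0.5 * (J[:-2, 1:-1] + J[2:, 1:-1]),
              0.5 * (J[1:-1, :-2] + J[1:-1, 2:]),
              0.5 * (J[:-2, :-2] + J[2:, 2:]),
              0.5 * (J[:-2, 2:] + J[2:, :-2])]
    out[1:-1, 1:-1] = np.minimum(J[1:-1, 1:-1], np.minimum.reduce(combos))
    return out

def lower_transform(f, lam, n):
    """Cl_g(f), assembled as in the unit 5000 pipeline."""
    i, j = np.indices(f.shape, dtype=float)
    g = lam * (i**2 + j**2)
    J = f + g
    for _ in range(n):
        J = env_sweep(J)
    return J - g

# Tent-shaped image: intensity rises to a crest along the central column.
col = np.arange(9, dtype=float)
f = np.tile(np.minimum(col, 8 - col), (9, 1))

R = f - lower_transform(f, lam=0.1, n=5)   # ridge map (units 25 + 30)
```

The map is non-negative, zero on the flat flanks and border, and largest along the crest, so stronger ridges survive a higher threshold in unit 50.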
Figure 21 illustrates a system that performs another exemplary embodiment of the present invention, comprising a method for locating valleys in an image. The system includes the input unit 10 of the initial input image f, which is then processed by the block unit 6000, already described with reference to Figure 11, to produce the upper compensated convex transform image of f. This is passed to the adder unit 30 to produce the image of the valley map, obtained by summing the said upper compensated convex transform image of f to the opposite of the original input image f. A geometrical interpretation similar to that given for the ridge map also holds for the valley map: the upper compensated convex transform of the input image f fits the graph of the image f from above with a fixed positive curvature which is controlled through the user interface 23. The gap between the smoother upper compensated convex transform image of f and the original valley of the image provides a marker for the valley that records the relative strength of different valleys. As with the ridges, the valley map so obtained can be further processed to obtain the edges of the image by a simple threshold, which can be performed by the unit 50 using the threshold given through the user interface 51. The resulting system is illustrated in Figure 23.
Figure 24 illustrates a system that comprises a method for locating saddle points in an image. These are defined as those points that belong to both a ridge and a valley, and are obtained by finding the points common to the edges obtained from the ridge map and the edges obtained from the valley map. It follows that such a system, as illustrated in Figure 24, includes the input unit 10 of the image f, which is then processed by the system illustrated in Figure 22 and by the system illustrated in Figure 23, both of which are shown in Figure 24. The resulting edges, represented as binary images with, for instance, value zero at the pixels belonging to the edges, are passed to the unit 70, which will operate on each pixel of the image and will locate those pixels where both the edge from the ridge map and the edge from the valley map take the same value equal to zero. Figure 25 illustrates a system that comprises another method within the present invention for locating edges in an image, as the sum of the valley and ridge maps. The system includes the input unit 10 of the image f, which is processed by the upper convex enveloper unit 6000 to produce the image that represents the upper compensated convex transform of f. Sequentially or in parallel, the image f is also processed within the lower convex enveloper unit 5000 to produce the lower compensated convex transform of f, which is then input to the unit 25, where it is transformed into its opposite and summed to the upper transform within the adder unit 30. The resulting image of the edge map is passed to the display unit 1000.
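Since the edge map of Figure 25 is the sum of the valley map and the ridge map, it reduces to Cu_h(f) − Cl_g(f). The sketch below is a hedged illustration with assumed λ = τ = 0.1, n = 2 and midpoint combinations only, on a synthetic step edge; the map vanishes on the flat regions and lights up on both sides of the jump:

```python
import numpy as np

def env_steps(J, n):
    """n approximate convex-envelope sweeps (midpoint combinations only)."""
    for _ in range(n):
        out = J.copy()
        combos = [0.5 * (J[:-2, 1:-1] + J[2:, 1:-1]),
                  0.5 * (J[1:-1, :-2] + J[1:-1, 2:]),
                  0.5 * (J[:-2, :-2] + J[2:, 2:]),
                  0.5 * (J[:-2, 2:] + J[2:, :-2])]
        out[1:-1, 1:-1] = np.minimum(J[1:-1, 1:-1], np.minimum.reduce(combos))
        J = out
    return J

def lower_t(f, lam, n):
    i, j = np.indices(f.shape, dtype=float); g = lam * (i**2 + j**2)
    return env_steps(f + g, n) - g

def upper_t(f, tau, n):
    i, j = np.indices(f.shape, dtype=float); h = tau * (i**2 + j**2)
    return h - env_steps(h - f, n)

f = np.zeros((8, 8)); f[:, 4:] = 10.0           # step edge between columns 3 and 4
E = upper_t(f, 0.1, 2) - lower_t(f, 0.1, 2)      # edge map = valley map + ridge map
```

The lower transform dips on the bright side of the jump and the upper transform bulges on the dark side, so their gap straddles the edge.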
Figure 26 shows in block diagram form a system that applies a first method for the smoothing of an image or a function f. The system includes the input unit 10 of the image f, which is processed first by the lower convex enveloper unit 5000 to produce another image which is the lower transform of f. This image is subsequently processed within the upper convex enveloper unit 6000. The sequence of the two block units, with the lower convex enveloper unit 5000 first followed by the upper convex enveloper unit 6000, represents another fundamental transformation within the present invention, called the mixed compensated transform of f as upper of the lower transform. The output of such a sequence of processes is then passed to the display unit 1000. The processes within the lower convex enveloper unit 5000 perform a smoothing of the singularities of the image that are concave, leaving unchanged the singularities of the image that are convex. These are then smoothed subsequently by the processes within the upper convex enveloper unit 6000. By inverting the sequence of the processes, that is, by having the input image f processed first by the upper convex enveloper unit 6000 and then by the lower convex enveloper unit 5000, one realizes another method for the smoothing of the image f. Such a sequence of processes also represents another fundamental transformation within the present invention, called the mixed compensated transform of f as lower of the upper transform. The system that includes such a sequence of operations on the image is illustrated in Figure 27. Figure 28 illustrates a system that performs a third method for the smoothing of an image with noise. The system includes the two systems shown in Figure 26 and Figure 27, with the respective output images passed to the process unit 250, which produces another image as a convex combination of the two input images, which in particular can be the arithmetic average.
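The third smoothing method can be sketched as follows; a hedged illustration with assumed λ = τ = 0.1, a single sweep per envelope, and midpoint combinations only, applied to a constant image corrupted by one upward and one downward impulse:

```python
import numpy as np

def env_steps(J, n):
    """n approximate convex-envelope sweeps (midpoint combinations only)."""
    for _ in range(n):
        out = J.copy()
        combos = [0.5 * (J[:-2, 1:-1] + J[2:, 1:-1]),
                  0.5 * (J[1:-1, :-2] + J[1:-1, 2:]),
                  0.5 * (J[:-2, :-2] + J[2:, 2:]),
                  0.5 * (J[:-2, 2:] + J[2:, :-2])]
        out[1:-1, 1:-1] = np.minimum(J[1:-1, 1:-1], np.minimum.reduce(combos))
        J = out
    return J

def lower_t(f, lam, n):
    i, j = np.indices(f.shape, dtype=float); g = lam * (i**2 + j**2)
    return env_steps(f + g, n) - g

def upper_t(f, tau, n):
    i, j = np.indices(f.shape, dtype=float); h = tau * (i**2 + j**2)
    return h - env_steps(h - f, n)

f = np.full((9, 9), 10.0)
f[2, 2] += 5.0; f[6, 6] -= 5.0              # impulsive noise of both signs

s1 = upper_t(lower_t(f, 0.1, 1), 0.1, 1)    # upper of the lower (Figure 26)
s2 = lower_t(upper_t(f, 0.1, 1), 0.1, 1)    # lower of the upper (Figure 27)
S = 0.5 * (s1 + s2)                          # average unit 250 (Figure 28)
```

Each mixed transform removes one sign of oscillation outright and attenuates the other, so the average suppresses both impulses relative to the noisy input.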
In using the two systems shown in Figure 26 and Figure 27, the strengthened function g used within the lower convex enveloper units 5000 can also be used within the upper convex enveloper units 6000. The choice of the strengthened image, which is made through the user interfaces 21 and 23 respectively, by which one inputs the parameter λ equal to the parameter τ, will depend on the noise frequency. The effect of the upper and lower transforms is to reduce to the background value, that is, to the clean function or to the clean image, the convex and concave oscillations present in the noise, respectively, with in addition a smoothing effect due to the action of the external component in evaluating each of the mixed transforms.
Figure 29 illustrates a system that performs the smoothing of the interior and/or exterior angles of a geometric object represented as a binary input image F, with the pixels representing the domain having value equal to zero. The system includes the input unit 10 of the image F, which is first processed by the lower convex enveloper unit 5000 and then by the upper convex enveloper unit 6000 to produce the image Cu_h(Cl_g(F)). Such image is then passed to the unit 50, which locates the pixels of the said image Cu_h(Cl_g(F)) whose value Cu_h(Cl_g(F))(i,j) is not lower than a geometric threshold thres assigned through the user interface 51. The output is a binary image representing the domain with smoothed interior angles. In the case that the exterior angles also need to be smoothed, the binary image representing the domain with smoothed interior angles is processed first by the lower convex enveloper unit 5000 and then by the upper convex enveloper unit 6000. The selection of the threshold thres through the user interface 51, and the selection of the parameters λ and τ through the user interfaces 21 and 23, respectively, if the convex strengthened functions g = λ*(i^2+j^2) and h = τ*(i^2+j^2) are used, control the roundness of the corners of the domain. A significant advantage of using the transformations described above is that the present invention thus provides a global method for smoothing which, when applied to nonsmooth functions, without prior knowledge of where the nonsmooth region is, replaces such a region with a smooth one and is equal to the original function in the other parts. The same applies to nonsmooth domains, in the sense that the method employed herein will replace all of the corners of the domain with smooth ones, without prior knowledge of where the corners are located.
Figure 30 illustrates a system that applies a first method for detecting irregularities such as, for example, corners, necks or small blobs in an image. The method is exemplified herein for the case of bright corners, that is, where the region with bearing less than 180 degrees has a larger value with respect to the surrounding region, and for the case of bright necks and bright blobs. For a geometric object represented in binary form, the bright corner is the region with pixel value one and with bearing less than 180 degrees. It will however be apparent to one skilled in the art how to adapt the method within the present invention for detecting dark corners, dark necks and dark blobs. A dark corner in a binary geometric object, for instance, denotes the region with pixel value zero and with bearing less than 180 degrees. Referring to Figure 30, the system includes an input unit 10 that loads the image f and sends it to the lower convex enveloper unit 5000, where it is processed to produce another image which is the lower compensated convex transform image of f. This image is in turn processed by the upper convex enveloper unit 6000, which produces the mixed compensated convex transform of f. By a careful choice of the parameters λ and τ, the effect of applying the two processes in the above sequence to, for instance, a binary object representing a bright corner, is to obtain a smooth image which is not much different from the input image apart from the region surrounding the corner, where an extremal value is created. The difference between the input image and the mixed transform will then be an image with an extremal point at the feature of interest. The output of the upper convex enveloper unit 6000, which is the mixed transform of f, is therefore sent to the unit 25 to build the opposite image and summed to the input image within the adder unit 30.
To single out the corners, the blobs or the necks, the resulting image can finally be sent either to the process unit 60 for the localization of the corners only, or to the process unit 50, which will also show the orientation of the feature, that is, the shape of the image in the neighbourhood of the feature.
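The pipeline of Figure 30 — lower enveloper, then upper enveloper, then difference with the input — can again be sketched in one dimension as a hypothetical illustration (the helper names and the quadratic weight g = λ·i² are ours, not the patented units): the difference f − C^u_h(C^l_g(f)) peaks exactly at a bright kink.

```python
def lower_convex_envelope(y):
    """Lower convex envelope of samples y[i] on the grid i = 0..n-1 (monotone chain)."""
    hull = [0]
    for i in range(1, len(y)):
        while len(hull) >= 2:
            i1, i2 = hull[-2], hull[-1]
            if (i2 - i1) * (y[i] - y[i1]) <= (y[i2] - y[i1]) * (i - i1):
                hull.pop()
            else:
                break
        hull.append(i)
    env = list(y)
    for a, b in zip(hull, hull[1:]):
        for i in range(a + 1, b):
            t = (i - a) / (b - a)
            env[i] = (1 - t) * y[a] + t * y[b]
    return env

def lower_transform(f, lam):
    g = [lam * i * i for i in range(len(f))]
    env = lower_convex_envelope([fi + gi for fi, gi in zip(f, g)])
    return [ei - gi for ei, gi in zip(env, g)]

def upper_transform(f, lam):
    return [-v for v in lower_transform([-v for v in f], lam)]

# A bright 1D "corner": a sharp peak at i = 32.
f = [32.0 - abs(i - 32) for i in range(65)]

# Lower enveloper (unit 5000) then upper enveloper (unit 6000): the mixed transform.
mixed = upper_transform(lower_transform(f, 0.05), 0.05)

# Unit 25 (opposite image) plus adder unit 30: input minus mixed transform.
diff = [fi - mi for fi, mi in zip(f, mixed)]

assert max(range(65), key=lambda i: diff[i]) == 32  # extremal point at the feature
assert abs(diff[32] - 5.0) < 1e-6
assert abs(diff[0]) < 1e-6                          # zero away from the feature
```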
Figure 31 illustrates a system that applies a second method for detecting corners, necks and small blobs in an image. The system includes an input unit 10 that passes the image f to the directional lower convex enveloper unit 5500, which constructs a number of images equal to the number of directions d belonging to a predefined set D of directions given through the user interface 80. The processes within the directional lower convex enveloper unit 5500 have already been described with reference to Figure 17 and produce the image of the lower compensated directional convex transform of the image f along the given direction d. Each of these images is then passed to the directional upper convex enveloper unit 6500, which applies the upper compensated directional convex transform along the same direction d, in contrast to the system illustrated in Figure 17, which uses the direction perpendicular to d. The directional upper convex enveloper unit 6500 outputs the mixed compensated directional convex transform images C^u_h(C^l_g(f;d);d) which, on the basis of an appropriate choice of the parameters λ and τ, are meant to create an image with an extremal value at any discontinuity present along the given direction d. As in the system shown in Figure 30, each of the mixed transforms is then passed to the unit 25 to build the opposite image, which is summed with the input image f within the adder unit 30 to produce another image, called within this invention the corner transform of f along the direction d and denoted by CRgh(f;d), that is, CRgh(f;d)(i,j) = f(i,j) - C^u_h(C^l_g(f;d);d)(i,j). The corner transform images CRgh(f;d) of f along each direction d are then passed to the unit 800 to produce another image, denoted by CRgh(f;D), whose value at the pixel (i,j) is obtained by taking the maximum of the values CRgh(f;d)(i,j) over the unit vectors d, that is, CRgh(f;D)(i,j) = MAX{CRgh(f;d)(i,j), for all unit vectors d belonging to the predefined set D of directions}.
The geometrical effect of this transformation is to create an extremal point localized at corners, necks and blobs, on the assumption that the strength of the discontinuity is higher at such features. To single out such elements, the image CRgh(f;D) can be passed, for instance, to the process unit 60 for the localization of the strict local maxima, or to the process unit 50 for the localization and orientation of the feature.
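A toy two-direction version of the directional corner transform (horizontal and vertical only, whereas the disclosed system uses an arbitrary predefined set D of unit vectors) can be sketched as follows. The sketch and its helper names are hypothetical illustrations; an isolated bright pixel, the simplest "small blob", produces a strict maximum of CRgh(f;D) at its location.

```python
def lower_convex_envelope(y):
    """Lower convex envelope of samples y[i] on the grid i = 0..n-1 (monotone chain)."""
    hull = [0]
    for i in range(1, len(y)):
        while len(hull) >= 2:
            i1, i2 = hull[-2], hull[-1]
            if (i2 - i1) * (y[i] - y[i1]) <= (y[i2] - y[i1]) * (i - i1):
                hull.pop()
            else:
                break
        hull.append(i)
    env = list(y)
    for a, b in zip(hull, hull[1:]):
        for i in range(a + 1, b):
            t = (i - a) / (b - a)
            env[i] = (1 - t) * y[a] + t * y[b]
    return env

def lower_transform(f, lam):
    g = [lam * i * i for i in range(len(f))]
    env = lower_convex_envelope([fi + gi for fi, gi in zip(f, g)])
    return [ei - gi for ei, gi in zip(env, g)]

def upper_transform(f, lam):
    return [-v for v in lower_transform([-v for v in f], lam)]

def transpose(img):
    return [list(col) for col in zip(*img)]

def corner_transform_rows(img, lam_g, lam_h):
    """CRgh(f;d) for horizontal d: f minus the mixed directional transform per row."""
    low = [lower_transform(row, lam_g) for row in img]
    mix = [upper_transform(row, lam_h) for row in low]
    return [[fij - mij for fij, mij in zip(frow, mrow)]
            for frow, mrow in zip(img, mix)]

img = [[0.0] * 9 for _ in range(9)]
img[4][4] = 10.0  # an isolated bright blob

cr_h = corner_transform_rows(img, 0.05, 0.05)
cr_v = transpose(corner_transform_rows(transpose(img), 0.05, 0.05))

# Unit 800 analogue: pointwise maximum over the direction set D = {horizontal, vertical}.
cr_d = [[max(h, v) for h, v in zip(hrow, vrow)] for hrow, vrow in zip(cr_h, cr_v)]

assert cr_d[4][4] > 9.9     # strict local maximum at the blob
assert abs(cr_d[0][0]) < 1e-9
assert all(cr_d[i][j] < cr_d[4][4]
           for i in range(9) for j in range(9) if (i, j) != (4, 4))
```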
Figure 32 displays a picture of a dark corner as meant within the present invention, whereas Figure 33 shows the location and orientation of the corners and neck resulting from applying the system described in Figure 31, adapted to the case of dark corners.
Within the present invention, there is also provided a method that detects thin objects through enhancement, i.e. by enhancing lines and curves which are faint in the original image. This can be achieved by the same system illustrated in Figure 9, which refers to the case of a dark thin object, that is, one whose pixel values are smaller than the background. The system includes the input unit 10, which loads the image and passes it to the block unit 5000 that computes the lower compensated transform image. Since this transformation approximates the line, which is a singularity of the input image, from below, it will spread over a region around the thin object with a width that is controlled by the parameter λ given through the user interface 21.
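The spreading behaviour and its control by λ can be sketched in one dimension (again a hypothetical illustration with our own helper names, not the patented unit 5000): a single dark pixel is preserved exactly by the lower compensated transform, spreads over a neighbourhood whose width shrinks as λ grows, and leaves the image unchanged far away.

```python
def lower_convex_envelope(y):
    """Lower convex envelope of samples y[i] on the grid i = 0..n-1 (monotone chain)."""
    hull = [0]
    for i in range(1, len(y)):
        while len(hull) >= 2:
            i1, i2 = hull[-2], hull[-1]
            if (i2 - i1) * (y[i] - y[i1]) <= (y[i2] - y[i1]) * (i - i1):
                hull.pop()
            else:
                break
        hull.append(i)
    env = list(y)
    for a, b in zip(hull, hull[1:]):
        for i in range(a + 1, b):
            t = (i - a) / (b - a)
            env[i] = (1 - t) * y[a] + t * y[b]
    return env

def lower_transform(f, lam):
    """Lower compensated convex transform: conv(f + g) - g with g = lam * i**2."""
    g = [lam * i * i for i in range(len(f))]
    env = lower_convex_envelope([fi + gi for fi, gi in zip(f, g)])
    return [ei - gi for ei, gi in zip(env, g)]

f = [0.0] * 65
f[32] = -10.0  # a dark thin object, one pixel wide in this 1D analogue

cl = lower_transform(f, 0.05)  # small lambda: wide spread
cl2 = lower_transform(f, 0.5)  # larger lambda: narrow spread

assert abs(cl[32] + 10.0) < 1e-6  # the dark object itself is preserved
assert cl[27] < -1.0              # enhanced over a neighbourhood (5 pixels away)
assert abs(cl[5]) < 1e-6          # far away the image is unchanged
assert abs(cl2[27]) < 1e-6        # larger lambda confines the spread
```

In the continuum the spread extends roughly to |x| ≤ sqrt(2c/λ) for a dip of depth c, which matches the two λ values tested above.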
A person skilled in the art will appreciate that the present disclosure, and in particular the use of convex envelopes, can easily be extended to other applications. Examples include, but are not limited to, image expansion, image contraction, or the determination of tangents and tangent points of lower-dimensional objects in an image.
It should be noted that the present invention is not restricted to the above-described embodiments and preferred embodiments may vary within the scope of the appended claims. The term "comprising", when used in the specification including the claims, is intended to specify the presence of stated features, means, steps or components, but does not exclude the presence or addition of one or more other features, means, steps, components or groups thereof. Furthermore, the word "a" or "an" preceding an element in a claim does not exclude the presence of a plurality of such elements. Moreover, any reference sign does not limit the scope of the claims. The invention can be implemented by means of both hardware and software, and several "means" may be represented by the same item of hardware. Finally, the features of the invention, which features appear alone or in combination, can also be combined or separated so that a large number of variations and applications of the invention can be readily envisaged.

Claims

1. A method for processing an initial image (f), the method comprising:
a) Determining combination image values of at least one region of interest of an input image (f, F, DT, J, L, N), the input image being the initial image (f) or a pre-processed image (DT; N), wherein the combination image values are each determined based on a combination of two or more regions of the input image, wherein the two or more regions are arranged in proximity to the at least one region of interest;
b) Selecting as replacement value the combination value from the combination values determined in step a) that best fits a replacement criterion;
c) Replacing the region of interest of the input image with the replacement value if the replacement value better fits the replacement criterion than the input image (f, F, DT, J, L, N);
d) Repeating steps a) to c) iteratively to obtain a transformed image (C^l_g(f); C^l_g(DT); C^l_g(F); C^l_g(N); C^l_g(f;d); or C^u_h(f); C^u_h(F) or C^u_h(f;d)); and
e) At least one of outputting the transformed image (C^l_g(f); C^l_g(DT); C^l_g(F); C^l_g(N); C^l_g(f;d); or C^u_h(f); C^u_h(F) or C^u_h(f;d)), or analysing or modifying the initial image (f) using said transformed image (C^l_g(DT); C^l_g(F); C^l_g(f); C^l_g(N); C^l_g(f;d); or C^u_h(f); C^u_h(F) or C^u_h(f;d)).
2. The method of claim 1, wherein the initial image (f) is a digitized image and the region of interest is at least one pixel in the digitized image.
3. The method of claim 1 or 2, wherein each one of the two or more regions of the image arranged in proximity to the at least one region of interest is arranged in a convex hull around the region of interest.
4. The method of any one of the preceding claims, further comprising the steps of:
a) Repeating steps a) to e) using as input image a transformed image obtained in step e);
b) At least one of outputting the transformed image (C^u_h(C^l_g(f)); C^u_h(C^l_g(F)); C^u_h(C^l_g(F;d1);d2); C^l_g(C^u_h(f)); C^l_g(C^u_h(F)); C^l_g(C^u_h(F;d1);d2) or C^l_g(C^l_h(F))), or analysing or modifying the initial image (f) using said transformed image (C^u_h(C^l_g(f)); C^u_h(C^l_g(F)); C^u_h(C^l_g(F;d1);d2); C^l_g(C^u_h(f)); C^l_g(C^u_h(F)); C^l_g(C^u_h(F;d1);d2) or C^l_g(C^l_h(F))).
5. The method of any one of the preceding claims, further comprising determining a strengthened image (g; h), the strengthened image associating a strength value to each image region.
6. The method of any one of the preceding claims, further comprising constructing at least one intermediate image (F, L, M, K) from the initial image (f), identifying the region of interest based on points of sharp change in the initial image (f).
7. The method of claim 6, wherein constructing the intermediate image comprises constructing a binary image (F) from the initial image (f) using a predetermined threshold.
8. The method of claim 7, used for identifying a medial axis in the initial image (f) and further comprising generating a distance transform image (DT) from the binary image (F), the binary image (F) being the intermediate image.
9. The method of claim 7 or 8, in combination with claim 4, wherein an image (J), which is the addition of the distance transform image (DT) and the strengthened image (g, h), is used as input image in step a).
10. The method of any one of claims 7 to 9, used for identifying a medial axis, wherein step f) comprises outputting, as processed image, a difference image defined as the difference between the distance transform image (DT) and the transformed image (C^l_g(DT)).
11. The method of any one of claims 1 to 7, used for identifying ridges in an image (f), wherein step e) comprises outputting, as processed image, a difference image defined as the difference between the input image (f) and the transformed image (C^l_g(f)).
12. The method of any one of claims 1 to 7, used for identifying valleys in an image (f), wherein step e) comprises outputting, as processed image, a difference image defined as the difference between the transformed image (C^u_h(f)) and the input image (f).
13. The method of any one of claims 1 to 7, used for identifying edges in the initial image (f), wherein the initial image (f) is the input image and wherein step e) and step g) comprise outputting, as processed image, a difference image defined as the difference between the transformed image (C^u_g(f)) and the transformed image (C^l_g(f)), or as the difference between the transformed image (C^l_g(C^u_h(f))) and the initial image (f).
14. The method of any one of claims 1 to 7, used for identifying at least one irregularity in the initial image (f), wherein step g) comprises outputting, as processed image, a difference image defined as the difference between the transformed image (C^l_g(C^u_h(f))) and the initial image (f), or as the difference between the initial image (f) and the transformed image (C^u_h(C^l_g(f))).
15. The method of any one of claims 1 to 7, used for smoothing the initial image (f) with sharp point changes, comprising outputting, as processed image, the image obtained by computing an average of the transformed images C^l_g(C^u_h(f)) and C^u_h(C^l_g(f)).
16. The method of any one of claims 1 to 7, used for smoothing the corners of a geometrical object, comprising outputting, as processed image (SM), the image obtained by computing the transformed image (C^l_g(C^u_h(f))) and by thresholding said image.
17. The method of claim 16, wherein the processed image (SM) is used to compute C^u_h(C^l_g(SM)).
18. The method of any one of claims 1 to 7, for removing damaged areas from the initial image (f), wherein the determining of the average image values is performed on the initial image (f) and on each one of the regions of the initial image (f).
19. The method of claim 18, wherein two intermediate images (L) and (M) are constructed based on a predetermined threshold identifying damaged areas and used to construct an average intermediate image (N) wherein determining the average image values is performed based on the average intermediate image.
20. The method of claim 18 or 19, wherein step e) comprises outputting, as processed image, the transformed image (C^l_g(N)).
21. The method of any one of claims 1 to 7, used for identifying turning and crossing points in a lower-dimensional object loaded as initial image (f), wherein step g) comprises outputting, as processed image, the transformed image (C^u_h(C^l_g(F))) and the location of the pixels that are local minima.
22. The method of any one of claims 1 to 7, used for identifying end points on lower-dimensional objects, comprising computing, for a pair of directions (d1) and (d2), as processed image, a difference image defined as the difference between the transformed image (C^u_h(C^l_g(F;d1); d2)) and the transformed image (C^l_g(F;d1)).
23. The method of claim 22, comprising outputting, as processed image, the image obtained by summing the images obtained for each direction and locating the strict local maxima of such image.
24. The method of any one of claims 1 to 7, used for identifying corners, small blobs and necks, comprising computing, for each direction (d), as processed image, a difference image defined as the difference between the transformed image (C^u_h(C^l_g(F;d); d)) and the initial image (f).
25. The method of claim 24, comprising outputting, as processed image, the image obtained by computing the pointwise maximum of the images obtained for each direction and locating the strict local maxima of such image.
26. A system arranged and configured to perform the method according to any one of the preceding claims.
27. A computer program product configured to perform the method according to any one of the preceding claims.
PCT/EP2010/069815 2009-12-15 2010-12-15 Image processing WO2011080081A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1210137.4A GB2488294B (en) 2009-12-15 2010-12-15 Image processing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0921863.7 2009-12-15
GBGB0921863.7A GB0921863D0 (en) 2009-12-15 2009-12-15 Image Processing

Publications (2)

Publication Number Publication Date
WO2011080081A2 true WO2011080081A2 (en) 2011-07-07
WO2011080081A3 WO2011080081A3 (en) 2011-09-01

Family

ID=41667097

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2010/069815 WO2011080081A2 (en) 2009-12-15 2010-12-15 Image processing

Country Status (2)

Country Link
GB (2) GB0921863D0 (en)
WO (1) WO2011080081A2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104331697B (en) * 2014-11-17 2017-11-10 山东大学 A kind of localization method of area-of-interest

Citations (2)

Publication number Priority date Publication date Assignee Title
GB2218507A (en) 1989-05-15 1989-11-15 Plessey Co Plc Digital data processing
GB2272285A (en) 1992-06-10 1994-05-11 Secr Defence Determining the position of edges and corners in images.

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US5574803A (en) * 1991-08-02 1996-11-12 Eastman Kodak Company Character thinning using emergent behavior of populations of competitive locally independent processes


Non-Patent Citations (9)

Title
A. OBERMAN: "The convex envelope is the solution of a nonlinear obstacle problem", PROC. AMER. MATH. SOC., vol. 135, 2007, pages 1689 - 1694
B. BRIGHI; CHIPOT: "Approximated convex envelope of a function", SIAM J. NUMER. ANAL., vol. 31, 1994, pages 128 - 148
G. DOLZMANN: "Numerical computation of rank-one convex envelopes", SIAM J. NUMER. ANAL., vol. 36, 1999, pages 1621 - 1635
J.-B. HIRIART-URRUTY; C. LEMARECHAL: "Fundamentals of Convex Analysis", 2001, SPRINGER-VERLAG
K. ZHANG: "Compensated Convexity and its Applications", ANN. I. H. POINCARÉ - AN, vol. 25, 2008, pages 743 - 771
M. BERTALMIO; G. SAPIRO; V. CASELLES; C. BALLESTER, IMAGE INPAINTING, 2000
R.T. ROCKAFELLAR: "Convex Analysis", 1970, PRINCETON UNIV. PRESS
S. MASNOU; J.-M. MOREL: "Level lines based disocclusion", 5TH IEEE INTERNATIONAL CONFERENCE, 1998, pages 105 - 138
Y. LUCET: "A fast computational algorithm for the Legendre-Fenchel transform", COMPUT. OPTIMIZ. AND APPL., vol. 6, 1996, pages 27 - 57

Also Published As

Publication number Publication date
GB201210137D0 (en) 2012-07-25
GB0921863D0 (en) 2010-01-27
WO2011080081A3 (en) 2011-09-01
GB2488294A (en) 2012-08-22
GB2488294B (en) 2015-10-28


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 10795320; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 1210137; Country of ref document: GB; Kind code of ref document: A; Free format text: PCT FILING DATE = 20101215)
WWE Wipo information: entry into national phase (Ref document number: 1210137.4; Country of ref document: GB)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 10795320; Country of ref document: EP; Kind code of ref document: A2)