GB2529813A - Scale estimation for object segmentation in a medical image - Google Patents

Scale estimation for object segmentation in a medical image

Info

Publication number
GB2529813A
GB2529813A GB1415252.4A GB201415252A GB2529813A GB 2529813 A GB2529813 A GB 2529813A GB 201415252 A GB201415252 A GB 201415252A GB 2529813 A GB2529813 A GB 2529813A
Authority
GB
United Kingdom
Prior art keywords
boundary shape
processing range
image
centre
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1415252.4A
Other versions
GB2529813B (en)
GB201415252D0 (en)
Inventor
Yuta Nakano
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Priority to GB1415252.4A priority Critical patent/GB2529813B/en
Publication of GB201415252D0 publication Critical patent/GB201415252D0/en
Publication of GB2529813A publication Critical patent/GB2529813A/en
Application granted granted Critical
Publication of GB2529813B publication Critical patent/GB2529813B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • G06T2207/20032Median filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20036Morphological image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • G06T2207/20104Interactive definition of region of interest [ROI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20156Automatic seed setting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20161Level set
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30096Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present invention defines a method and apparatus for scale estimation of an object in an image. The method comprises displaying the image containing the object, LN, a user inputting a boundary shape or bounding box, 40, around the object and a processing range, 310, being defined based on the boundary shape. The processing range may be defined so that it is less likely to contain artefacts that could be mistaken for object edges but so that it is still likely to contain the object edges. Scale estimation is then performed, preferably using a Laplacian-of-Gaussian (LoG) filter, on the points contained in the processing range. The method of scale estimation may be used as part of an object segmentation process in, for example, the processing of medical images containing objects such as lymph nodes, nodules, tumours or lesions. Also disclosed are a method and apparatus for estimating a centre of an object in which a boundary shape is input around the object, a central point of the boundary shape is established and then the position of the centre of the object is estimated based on the central point of the boundary shape.

Description

Scale Estimation for Object Segmentation in a Medical Image
TECHNICAL FIELD
[0001] The present invention relates to object segmentation in images and in particular, to the segmentation of objects such as lymph nodes, nodules, lesions, tumours or other foreign or pathological bodies in medical images.
BACKGROUND ART
[0002] Object segmentation is a technique of recognising a shape or boundary of an object within a digital image. The aim of object segmentation is to change the representation of objects in an image (such as a medical image) so that the image is easier to analyse, for example, to determine whether an object has changed over time, from one image to another. There are many techniques that accomplish this object recognition which may be more or less appropriate depending on the range of contrast within the image, the extent of contrast between the object and the background, the scale or relative size of the object, the format of the image data, and so on. However, what most techniques have in common is that each pixel in an image (or voxel in a 3-D image) is assigned some sort of label corresponding to certain visual characteristics (such as grey scale, intensity, etc.). In this way, pixels (voxels) assigned a same or similar label may be seen by a viewer as being grouped together for some reason, for example to show that they are part of the same structure (e.g. the same object) in the image.
[0003] A type of object segmentation known as "blob detection" is described in T. Lindeberg's "Feature detection with automatic scale selection" in the International Journal of Computer Vision, Vol. 30, pp.79-116, 1998. In this paper, a form of scale determination is described that uses a Laplacian operator of a Gaussian model (Laplacian-of-Gaussian or LoG operator or filter) to determine relative positions and scales of "blobs" in an image.
[0004] A. Jirapatnakul, S. Fotin et al.'s "Automated Nodule Location and Size Estimation Using a Multi-scale Laplacian of Gaussian Filtering Approach" in the Annual International Conference of the IEEE EMBS, pp.1028-1031, 2009 describes the application of Laplacian-of-Gaussian filtering from a seed point placed near the centre of a blob to find the size of the blob.
[0005] In both techniques mentioned above, an overall estimate of the scale of the blob is to be found using the LoG filter, which iteratively checks different scales (sigma values, to be described later) and finds one that is most likely to be the scale of the blob in question. In one dimension, the scale may be a distance from an origin to an estimated edge of the blob; in two dimensions, the scale may be a radius of a circle
representing the blob and in three dimensions, the scale may be a radius of a sphere representing the blob. The scales determined by LoG filters are based on positions of change in contrast relative to a centre position. The change in contrast is assumed to be the edge of the blob. A "region-growing" approach analyses each pixel from a central point outwards, determining whether each pixel conforms to parameters used to identify an object. When pixels that do conform cease to be found, it is assumed that the positions at which the pixels cease to conform are positions corresponding to the edge of the object.
[0006] This process of "scale estimation", which uses an LoG operation to estimate a scale of the object to be segmented, is a useful tool in object segmentation. For example, in the region-growing approach, scale estimation can be used to constrain excessive region expansion (i.e. the process of analysing pixels increasingly outwards from the central point) because an approximate maximum size of the object is known (i.e. outside of a certain region found by scale estimation, pixels conforming to the parameters for inclusion in the object are unlikely to exist). However, there are potential problems in the scale estimation process. One of the problems is the risk of a very large computation time. In conventional scale estimation methods, such as those referred to above that use a LoG filter, the computation time is very large because the LoG filter is applied over a large processing range (i.e. over a large search space) and to every point therein. In general, scale estimation is an iterative process with changing parameters related to the number of points in the processing range and giving rise to a scale for each point. One then determines an appropriate scale from all computational results which, for a large processing range, may be many.
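The multi-scale search described above can be illustrated with a minimal one-dimensional sketch (this is not the patent's own implementation): a scale-normalised LoG response is evaluated at the centre of a synthetic bright blob for a range of candidate sigma values, and the sigma with the strongest response is taken as the estimated scale.

```python
import math

def norm_log_response(signal, sigma):
    """Scale-normalised Laplacian-of-Gaussian response at the centre of a
    1-D signal: -sigma^2 * (G_sigma'' * signal)(centre).  The sign is chosen
    so that a bright blob on a dark background gives a positive response."""
    half = int(4 * sigma)
    centre = len(signal) // 2
    acc = 0.0
    for k in range(-half, half + 1):
        # Gaussian and its second derivative at offset k
        g = math.exp(-k * k / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))
        g2 = ((k * k - sigma * sigma) / sigma ** 4) * g
        acc += g2 * signal[centre - k]
    return -sigma * sigma * acc

# synthetic 1-D profile: a bright blob of half-width 10 on a dark background
signal = [1.0 if abs(i - 200) <= 10 else 0.0 for i in range(401)]

# iterate over candidate scales (sigma values) and keep the strongest response
responses = {s: norm_log_response(signal, s) for s in range(2, 31)}
best_sigma = max(responses, key=responses.get)  # peaks near the blob half-width
```

For an ideal step blob the response is maximised when sigma matches the blob half-width, which is why the strongest response serves as the scale estimate; the danger described in paragraphs [0007] and [0008] is that image artefacts can create competing peaks in this response curve.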
[0007] Figure 13A shows the estimated scale according to a conventional scale estimation method using an LoG filter. In the figure, the horizontal axis represents a sigma (σ) value, which is a value associated with (and identifying) each point in the processing range. The point 60 labels the highest LoG value in the processing range. The midmost peak 70 is the peak originating from the appropriate scale (i.e. the actual edge) of the inputted target object. The left-most peaks 80 are further peaks in the LoG value arising from, for example, artefacts in the image.
[0008] In the scale estimation process using the LoG filter, the scale of the object is taken to be defined by the highest LoG value. In Figure 13A, there is clearly a risk that the conventional method determines the incorrect scale as the estimated scale, as peak 60 would be understood to be the edge marker when in fact peak 70 should be the object edge marker.
[0009] A centre position of the object plays an important role in the segmentation process because a distance from the centre position is determined for constraining the excessive region expansion in region-growing processing. Indeed, an incorrect centre position may lead to a fatal segmentation error.
However, a correct centre position may be difficult to identify because it is often based on a user clicking on a point near the centre with a mouse or some other similar user input via a user interface. The main problem with this is that different users do not reliably click on consistent positions as centre positions and so there can be a large margin for error.
[0010] A next step in object segmentation processing (after the scale estimation step) may be the segmentation itself involving a region-growing method. From a central "seed" point, all points in a region of interest are analysed to determine whether they satisfy the criteria chosen to define the object.
All points satisfying those criteria are determined to belong to the object.
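The region-growing step just described can be sketched as a simple flood fill from the seed point. In this illustrative fragment the intensity thresholds and the radius constraint are hypothetical placeholders; in the described system they would be derived from the scale estimation step.

```python
from collections import deque

def region_grow(image, seed, low, high, max_radius):
    """Grow a region from `seed`, accepting 4-connected pixels whose
    intensity lies in [low, high]; `max_radius` (e.g. from scale
    estimation) constrains excessive region expansion."""
    h, w = len(image), len(image[0])
    sy, sx = seed
    region = {seed}
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and low <= image[ny][nx] <= high
                    and (ny - sy) ** 2 + (nx - sx) ** 2 <= max_radius ** 2):
                region.add((ny, nx))
                queue.append((ny, nx))
    return region

# toy image: a bright 3x3 object on a dark background
img = [[0, 0, 0, 0, 0],
       [0, 9, 8, 9, 0],
       [0, 8, 9, 8, 0],
       [0, 9, 8, 9, 0],
       [0, 0, 0, 0, 0]]
obj = region_grow(img, (2, 2), low=5, high=10, max_radius=3)  # 9 bright pixels
```

The `max_radius` test is what "constrains excessive region expansion" in the sense of paragraph [0006]: even if bright pixels existed beyond the estimated scale, they would not be added to the region.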
[0011] An alternative segmentation process may be used. It may involve a level set algorithm as described in S. Osher and J. Sethian's "Fronts Propagating with Curvature-Dependent Speed: Algorithms Based on Hamilton-Jacobi Formulations" in the Journal of Computational Physics, Vol. 79, pp.12-49, 1988 and in J. Sethian's "Level Set Methods" 1st Ed., Cambridge University Press, 1996. These documents describe an algorithm that defines a front propagating with curvature-dependent speed. The way this is done is to take a surface, the "movement" (used as a time-dependent metaphor for variation over a dimension in space) of which is to be predicted, and to intersect it with a plane to define a contour. For example, a sphere may be bisected about its equator with a plane to define a circle on the plane. This contour (e.g. the circle) has a definable shape at time t=0 and, in order to define how the contour changes over time (e.g. the diameter of the circle decreasing as the plane moves from the equator to a pole of the sphere), forces are applied to its "fronts" (e.g. its circumference) to define the direction in which the fronts will travel (e.g. inwards) as t changes.
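A toy numerical sketch of this front-propagation idea (not the full Osher-Sethian scheme, which uses upwind differencing; plain central differences are adequate for this smooth radial example): a signed-distance function phi, negative inside a circular contour, is repeatedly updated so that the zero level set moves inwards at constant speed.

```python
import math

def evolve(phi, speed, dt, steps):
    """Evolve phi under phi_t = speed * |grad phi| using central
    differences.  With phi negative inside the contour, a positive
    speed raises phi everywhere and so moves the zero contour inwards."""
    n = len(phi)
    for _ in range(steps):
        new = [row[:] for row in phi]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                gx = (phi[i + 1][j] - phi[i - 1][j]) / 2.0
                gy = (phi[i][j + 1] - phi[i][j - 1]) / 2.0
                new[i][j] = phi[i][j] + speed * dt * math.hypot(gx, gy)
        phi = new
    return phi

# signed distance to a circle of radius 10 centred in a 41x41 grid
n, c, r = 41, 20, 10.0
phi0 = [[math.hypot(i - c, j - c) - r for j in range(n)] for i in range(n)]

# propagate the front inwards by speed * dt * steps = 5 grid units
phi1 = evolve(phi0, speed=1.0, dt=0.5, steps=10)
inside = sum(1 for i in range(n) for j in range(n) if phi1[i][j] < 0)
```

After the evolution the zero contour has contracted from radius ~10 to radius ~5, mirroring the text's example of a circle whose diameter decreases as the front moves inwards.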
[0012] In a case of a series of medical images such as computed tomography (CT) scans that combine to give a three-dimensional representation of an object in the human body for instance, the level set algorithm may be used to define the outer surface of the three-dimensional object. In order to do this, however, the edges of the object in all of the images making up the three-dimensional image must first be determined.
[0013] It is known to find the edges of the object using a segmentation process such as region-growing from a central seed point. To build up the three-dimensional object shape, the region-growing process may be applied to multiple parallel images. The level set algorithm helps to reduce processing by reducing the need for a user to input a seed point in each image.
It does this by using a limited number of object edges to build up the full 3-D shape. However, the processing time and load is still a major limiting factor in the efficiency of the segmentation process.
SUMMARY OF THE INVENTION
[0014] In consideration of the above problems, it is desired to provide an apparatus and method for reducing the processing load during object segmentation or reducing the likelihood of false object edges being found by usefully limiting the processing range within the image in which the object segmentation is to be performed.
[0015] According to a first aspect of the present invention, there is provided a method for scale estimation of an object in an image comprising: displaying the image containing the object; inputting a boundary shape around the object; defining a processing range based on the boundary shape; and performing scale estimation of the object within the processing range.
[0016] Thus, the processing range for the scale estimation is defined and it is based on the boundary shape. How it is based on the boundary shape will form the majority of the detailed description below. The size of the processing range is based in large part on the size of the input boundary shape. The position of the processing range is found by correcting a central point of the boundary shape to be a central point of the cbject being analysed. This positioning is also based on the size of the input boundary shape, as well as on its position.
[0017] According to a second aspect of the present invention, there is provided a method of segmenting an object from an image comprising: obtaining the image; performing scale estimation of the object; performing rough segmentation for estimating a circular shape representing the object; performing refinement comprising a level-set method for refining the shape of the object based on the circular shape; and extracting the refined shape of the object from the image.
[0018] According to a third aspect of the present invention, there is provided an information processing apparatus for estimating a scale of an object in an image comprising: display means for displaying at least one image; input means for receiving an input of a boundary shape around the object in the at least one image; calculating means for calculating a processing range based on the input boundary shape; processing means for performing scale estimation of the object within the calculated processing range.
[0019] According to a fourth aspect of the present invention, there is provided a method of estimating a centre of an object in an image comprising: displaying the image containing the object; inputting a boundary shape around the object; determining a central point of the boundary shape; and performing a position estimation process of the centre of the object based on the central point of the boundary shape. The position estimation process is preferably performed using a Laplacian-of-Gaussian filter based on each position within a neighbourhood of the boundary shape central position. The centre of the object may then be used to position the processing range for the scale estimation more accurately.
[0020] Thus, according to embodiments of the present invention, the processing range is reduced or at least more precisely positioned and sized by using information obtained (in the form of a boundary shape or "bounding box") via a user interface. There are two aspects of the creation of a more efficient processing range: determining an accurate centre position of the object and therefore of the processing range; and determining a processing range that is as small as possible while still covering an area that will include the edges of the object.
[0021] An advantage of having a more precisely positioned processing range for scale estimation (further to reducing the processing load) is that the scale estimation of the object to be segmented is more likely to be accurate. Specifically, if the processing range is more likely to include the edges of the object tc be segmented and if it is less likely to include artefacts that may be mistaken for edges, the accurate positions of the edges of the object are more likely to be found and the object is thus more likely to be identified and segmented accurately.
[0022] Dealing with the segmentation of three-dimensional objects from three-dimensional images is improved by performing scale estimation in the three dimensions. Again, more precisely positioning the processing range for each dimension is advantageous in that the scale of all dimensions of the object are more likely to be accurately found.
BRIEF DESCRIPTION OF THE DRAWINGS
[0023] The invention will be described below purely by way of example with reference to the following figures.
[0024] Fig. 1 is a flowchart showing a lymph node segmentation process.
[0025] Fig. 2 is a flowchart showing a pre-processing process from the segmentation process.
[0026] Fig. 3 is a flowchart showing a scale estimation process from the segmentation process.
[0027] Fig. 4 is a flowchart showing the use of a LoG filter in the scale estimation process.
[0028] Fig. 5 is a flowchart showing a rough segmentation process from the segmentation process using a region-growing method.
[0029] Fig. 6 is a flowchart showing segmentation refinement from the segmentation process using level set segmentation.
[0030] Fig. 7 shows an image of a lymph node surrounded by a user-input bounding box.
[0031] Fig. 8 shows an estimated centre of a lymph node surrounded by a cuboid neighbourhood volume to be used for scale estimation processing.
[0032] Fig. 9 shows a centre position of a bounding box and a corrected centre position of an object bounded by the bounding box.
[0033] Figs. 10A and 10B show a processing range for scale estimation processing compared to an input bounding box size.
[0034] Fig. 11 shows a Laplacian-of-Gaussian kernel.
[0035] Figs. 12A and 12B show display representations of processing ranges according to the prior art and according to an embodiment of the present invention.
[0036] Figs. 13A and 13B show graphs of processing ranges of a LoG filter according to the prior art and an embodiment of the present invention.
DESCRIPTION OF THE EMBODIMENTS
[0037] The following description will describe preferred embodiments for segmenting lymph nodes from three-dimensional medical images such as computed tomography (CT) images. Of course, the described embodiments can be used for segmenting other objects such as lesions, tumours or nodules or other objects discernible in a medical image.
[0038] The preferred embodiments follow an ordered process that involves manipulating a region of interest in a volume (three-dimensional or 3-D) image in order to perform Laplacian-of-Gaussian (LoG) filtering on the object image to estimate the object's scale. Then, a seed region is defined and a region-growing segmentation method is applied to the object image, its thresholds and seed regions being based on the estimated scale and finally, the segmentation is refined using a level set process and the segmented object is displayed. The embodiments described below are primarily concerned with the manipulation of the region of interest in the volume image in order to find an optimum processing range for the LoG filter.
[0039] Figures 12A and 12B show the comparison between the processing ranges of the conventional method (Figure 12A) and that of an embodiment of the present invention (Figure 12B) when a scale estimation step has been performed on an object LN.
Arrow 300 represents a radius of the conventional processing range PR. Figure 12A shows the processing range PR in the conventional method, and Figure 12B shows the processing range PR according to an embodiment. In the conventional method, the large processing range PR has to be analysed to find the edges of the object LN, from the centre K to the outer periphery of the processing range PR with radius 300. On the other hand, the processing range PR of an embodiment shown in Figure 12B is reduced, on the basis of a bounding box BB input by a user, to an annulus with outer limit 20 and inner limit 10, of thickness shown by the arrow 310. Having the inner and outer limits of the processing range reduces the area processed and thus reduces the time taken in the scale estimation step (i.e. during edge identification of the object LN). How this limited processing range is calculated will be described below.
[0040] Figures 13A and 13B show a parallel situation to Figures 12A and 12B, but from the point of view of the algorithm used for scale estimation. Figure 13A shows a LoG histogram with processing range radius 300 corresponding to the processing range radius 300 of Figure 12A, and Figure 13B shows a histogram with processing range 310 corresponding to the processing range thickness 310 shown in Figure 12B. The relevance of these figures will be discussed with reference to the specific embodiments below, but the hatched processing range is clearly more limited in Figure 13B.
[0041] A preferred process of lymph node segmentation from a three-dimensional image data set is illustrated in Figure 1 and described below. A similar process can be applied to the segmentation of the lymph node from a two-dimensional image data set.
[0042] The first step 1000 shown in figure 1 is to obtain the images using a processor such as a computer connected to a modality or imaging device such as a CT scanner, ultrasound scanner, X-ray detector, and the like. While viewing such images, a user inputs information to identify a target lymph node in step 2000. Such information may comprise a central point in the middle of the lymph node or preferably a "bounding box" around the outside of the lymph node. The information may be input using a user interface and input means such as a mouse, touchscreen, joystick or keyboard. The bounding box may be square, rectangular, circular or any shape inputtable by a user.
The following specific embodiments are based on the assumption that a rectangular bounding box has been input or that a square bounding box has been extrapolated by a processor based on a central point input by a user.
[0043] Once identified, the lymph node of interest undergoes a segmentation process that comprises four modules: 1. pre-processing in step 3000; 2. scale estimation in step 4000; 3. rough segmentation in step 5000; and 4. refinement in step 6000.
[0044] Finally, the processor stores the segmentation results in a storage means such as a hard drive or other such memory and displays the results on a display means such as a computer screen or other digital display in step 7000.
[0045] The preferred embodiments use the bounding box input by the user to specify a processing range for the scale estimation that is different from that of the prior art. The specified processing range will not be the whole of the region of interest but will be, in a preferred embodiment, an annular shape (or a sphere with a hollow core in three dimensions) that is intended to overlap with the edge of the lymph node so that the LoG filter peaks are more likely to correspond to edges of the lymph node and the likelihood of false peaks is reduced. This same method may then be applied to all three dimensions of the image data set to obtain a processing range in three dimensions (from which the scale estimation may be found in three dimensions). Alternatively, a scale estimation value in three dimensions may be extrapolated from the scale estimated in two dimensions without going through the step of finding the processing range in all three dimensions and performing scale estimation in all three dimensions. This latter alternative may save time but risks being less precise in the third dimension.
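An annular processing range of this kind can be sketched as a boolean mask over the region of interest. Note that the inner and outer radius factors used here (0.5 and 1.5 times the bounding-box half-width) are illustrative assumptions only, not values taken from the patent.

```python
def annulus_mask(size, centre, r_inner, r_outer):
    """Boolean mask of an annular processing range: True only for pixels
    whose distance from `centre` lies in [r_inner, r_outer]."""
    cy, cx = centre
    return [[r_inner ** 2 <= (y - cy) ** 2 + (x - cx) ** 2 <= r_outer ** 2
             for x in range(size)] for y in range(size)]

# hypothetical choice: the annulus straddles the bounding-box half-width
hwidth = 10
mask = annulus_mask(41, (20, 20), 0.5 * hwidth, 1.5 * hwidth)

# compare the number of points processed against a full disc of the
# same outer radius (the conventional processing range)
n_annulus = sum(sum(row) for row in mask)
n_full = sum(1 for y in range(41) for x in range(41)
             if (y - 20) ** 2 + (x - 20) ** 2 <= (1.5 * hwidth) ** 2)
```

Restricting the LoG evaluation to `mask` is what yields the saving described above: the interior of the object, where no edge can lie, is excluded from the search space.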
[0046] A preferred method of specifying this more accurate processing range using the bounding box is described below with reference to the four-step segmentation process 3000 to 6000 of figure 1.
[0047] The first module, pre-processing 3000, itself comprises five steps as shown in figure 2 and as listed below.
[0048] 1. Determine a size of a region of interest (ROI) around the lymph node in step 3100. The ROI is cropped from an original image. The size (or at least the position) of the ROI is generally determined based on a two-dimensional (2-D) bounding box input by the user in an x-y plane. If the image is a 2-D image, the ROI may be the area contained within the bounding box but it is more usually a larger area including the bounding box but considerably smaller than the initial image. If the image is a 3-D image, the ROI may be a 3-D volume defined by extrapolating the bounding box in a third dimension, or by having the user input bounding boxes in more than one (often parallel) image to build up a 3-D bounding box, then selecting an ROI to crop around that 3-D bounding box. The purpose of the cropped ROI is to reduce the full image to a size that is more manageable for processing, but focussed around the identified lymph node.
[0049] Alternatively, the ROI may be a specific size and shape around a centre point chosen by the user, the specific size and shape being determined as one likely to contain a lymph-node-sized object according to the scale of the medical image. For example, the ROI may be specified in the 2-D image as a size corresponding to 3cm by 3cm in the real patient around a user-inputted centre point; or the ROI may be specified in a 3-D image as a size corresponding to 3cm by 3cm by 3cm in the real patient.
[0050] Generally, as the bounding box is input by a user to identify the position of a lymph node, the bounding box is presumed to be roughly the size of the lymph node, or just slightly larger. In the preferred embodiment, the size of the bounding box is taken to be the longer of the two side lengths of the bounding box as input by the user. Figure 7 shows an example of a lymph node LN enclosed by a bounding box BB. The x-directional length and y-directional length of the bounding box are determined. For consistency and ease of explanation, the longest length of the bounding box will hereinafter be referred to as the "width" of the bounding box. In an embodiment, the measured width falls into one of five ranges, each of which gives rise to a side length of the cubic ROI measured in number of voxels, using the following formula (1):

    Size of edge of ROI = 91   if HWidthBB < 30
                          111  else if HWidthBB < 40
                          131  else if HWidthBB < 50     (1)
                          151  else if HWidthBB < 60
                          251  otherwise

where HWidthBB is the half width of the bounding box given by equation (2) and also measured in voxels:

    HWidthBB = (width of bounding box) / 2     (2)

[0051] Of course, any other suitable thresholds may be chosen based on typical bounding box sizes and accuracy of results of the associated ROIs. The system may "learn" what ROI-to-bounding-box-width associations are appropriate based on historical data.
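Formula (1) and equation (2) translate directly into code; the following sketch reproduces the lookup (the threshold values are those given in the embodiment above).

```python
def roi_edge_voxels(bb_width):
    """Edge length (in voxels) of the cubic ROI per formula (1),
    using the bounding-box half-width of equation (2)."""
    hwidth = bb_width / 2.0        # equation (2): half width in voxels
    if hwidth < 30:
        return 91
    elif hwidth < 40:
        return 111
    elif hwidth < 50:
        return 131
    elif hwidth < 60:
        return 151
    return 251

edge = roi_edge_voxels(70)  # half-width 35 gives an ROI edge of 111 voxels
```

This matches the worked example of paragraph [0053]: a 70-voxel-wide bounding box yields HWidthBB = 35 and an ROI edge length of 111 voxels.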
[0052] The sizes of the ROI and the half width of the bounding box are measured in one dimension (an edge of a cube) but counted in voxels. The ROI at this stage will be larger than the bounding box but will be more likely to contain the whole lymph node even if the user input a bounding box that did not correctly surround the lymph node.
[0053] Thus, if the bounding box is input as having a width of 70 voxels, HWidthBB = 35 voxels according to equation (2) and the edge length of the ROI = 111 voxels according to formula (1).
[0054] 2. In step 3200 of Figure 2, a centre position of the lymph node is found. The quickest way to find the centre position of the lymph node is to take the centre of the bounding box as that centre position. Alternatively, of course, if the user has indicated a centre point of the lymph node, that point can be taken as the centre position. The ROI is thus centred so that its centre matches the centre of the lymph node as far as possible (further embodiments in this step are to be discussed later).
[0055] 3. The ROI is then cropped (i.e. "cut out" or "isolated") in step 3300 from the original volume image based on the dimension (of the ROI as calculated from the bounding box) and position (centre position or bounding box position) determined in steps 3100 and 3200 respectively.
[0056] 4. A shrink rate is determined in step 3400 that will be applied to the cropped ROI in step 3500. This shrink rate will be used to resize the ROI (including the lymph node) and to make it isotropic. The advantage of the isotropism is that any calculation made in one dimension can be applied to the other dimensions, thus reducing the processing load. The shrink rate SR is determined from the half width of the bounding box and is given by the following formula (3):

    SR = 1    if HWidthBB < 10
         3/4  else if HWidthBB < 20     (3)
         2/3  else if HWidthBB < 60
         1/2  otherwise

[0057] 5. Finally, the ROI is resized and rendered isotropic in step 3500 by the multiplication of the ROI size by the shrink rate determined in step 3400. This is shown as the box 40 in
[0058] Returning to figure 1, the second module 4000 of the object segmentation system comprises a scale estimation module.
A LoG filter estimates the scale of the lymph node LN by using a contrast of intensity distributions between the inside and the outside of a sphere defined by a LoG operator (the size of the sphere is determined by a sigma (σ) value of the Gaussian filter). By contrast, object edges in the segmentation process to be described later are computed from the contrast between neighbourhood voxels.
[0059] The LoG filter processes every point in the processing range, starting in the middle and working outwards iteratively.
In order to have the most efficient iterative processes, the processing ranges are preferably as small as possible (containing the smallest number of points possible) while still containing enough points that the edge of the lymph node is likely to be contained within the processing range. The way to do this is to estimate at least one of the position of the lymph node and/or the size of the lymph node. The most accurate is of course to have an accurate position and size so that the processing range may be smallest and most precise.
[0060] Scale estimation is the estimation of size and relative position of the lymph node with respect to the full image data. The process performed by the scale estimation module 4000 is divided into steps 4100 to 4900 shown in figure 3 and described below.
[0061] Step 4100 of figure 3 comprises determining the parameters that will be used in the scale estimation processing.
There are various types of parameter that are determined. The first type of parameter is the type that defines the points that belong to the group of points (i.e. to the lymph node, lesion, etc.) being segmented from the rest of the image. The parameters could thus be a range of intensities, texture, etc., of the pixels/voxels. Other types of parameter include likely position of the lymph node with respect to the rest of the image and the likely size of the lymph node with respect to the rest of the image as mentioned above.
[0062] According to an embodiment, a parameter to be determined in this step 4100 is a relative position of the lymph node with respect to the rest of the image. One way of finding the position of the lymph node is to find its centre point (e.g. its centre of mass). As an initial rough estimate, a centre point of a bounding box input by a user may be used. This is more precise, generally, than having a user specify a single centre point on a screen.
[0063] However, this method becomes less accurate when a lymph node is longer in one dimension than in another because the user is usually limited to a symmetrical shape for the bounding box (such as a circle or rectangle) . This method is also less accurate when the lymph node is to be segmented in three dimensions. This is because the user typically inputs a bounding box in only one plane and so the centre point of the perpendicular plane is in fact completely unknown. In this case, according to an embodiment, the centre point of the perpendicular plane may be taken to be at the same position as the centre point of the bounded plane, but this risks an inaccuracy in the perpendicular plane. Further to this, different users may have different methods of inputting bounding boxes and so there may be a lack of consistency in the bounding box position with respect to the lymph node position.
[0064] According to another embodiment, thus, it is preferred to find a corrected centre position based on the centre position of the bounding box. To do this, a processing range is defined around an estimated centre position in which to search for a more accurate ("corrected") centre position. There is thus a process for shifting the centre position of the lymph node from the estimated position based on the centre of the bounding box to a more accurate centre of mass position of the lymph node.
The scale estimation (by LoG filtering, for instance) will then be performed within a further processing range, the position of which is defined by the corrected centre position.
[0065] The processing range that is to be used to determine the new centre position is referred to herein as a "neighbourhood" around the estimated centre of the lymph node (i.e. the centre of the bounding box) determined in step 3200 in the pre-processing module. The neighbourhood is shown in Figure 8 and is taken to be a predetermined volume (or area in the case of a 2-D scale estimation process) 200 surrounding the estimated centre Cx, Cy, Cz. A method of calculating the neighbourhood 200 is now described.
[0066] A parameter Param_shifting that is used to define the processing neighbourhood for shifting the centre position of the lymph node is determined by:

Param_shifting = (HWidthBB × SR) / 10          (4)

[0067] The value for Param_shifting may usefully be rounded to the nearest whole voxel.
[0068] Thus, in the example used earlier, if a bounding box is input by a user with a width of 70 voxels, the shifting parameter Param_shifting is 2/30 of 35, i.e. 2.3 voxels, rounded to 2 voxels.
[0069] In order actually to determine the "neighbourhood" of the search for the corrected centre, the processing volume (with values in x, y, z dimensions) in which the new centre position is to be found (i.e. "shifted" to) is defined as follows:

Cx − Param_shifting ≤ x ≤ Cx + Param_shifting
Cy − Param_shifting ≤ y ≤ Cy + Param_shifting          (5)
Cz − Param_shifting ≤ z ≤ Cz + Param_shifting

where Cx, Cy and Cz are the x, y and z coordinates respectively of the estimated centre of the lymph node. If the bounding box has a side length of 70 voxels, Param_shifting = 2. Thus, the new centre position will be looked for in the range (Cx−2 to Cx+2, Cy−2 to Cy+2, Cz−2 to Cz+2). Figure 8 shows the cube-shaped neighbourhood 200 thus calculated in the lymph node LN.
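Equations (4) and (5) above can be sketched as two small helpers. This is an illustrative sketch; the helper names are not from the patent:

```python
def shifting_parameter(hwidth_bb, sr):
    # Param_shifting = HWidthBB * SR / 10 (equation (4)), rounded to whole voxels.
    return round(hwidth_bb * sr / 10.0)

def neighbourhood_bounds(centre, hwidth_bb, sr):
    # Cube-shaped search range around the estimated centre (inequality (5)).
    p = shifting_parameter(hwidth_bb, sr)
    cx, cy, cz = centre
    return ((cx - p, cx + p), (cy - p, cy + p), (cz - p, cz + p))
```

With the 70-voxel example (HWidthBB = 35, SR = 2/3) the parameter rounds to 2, so the search cube spans ±2 voxels around the estimated centre, as in the text.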
[0070] The neighbourhood defined above then undergoes processing, for example, by investigating LoG values for all points in the neighbourhood. If a higher LoG value is found at a point in the neighbourhood than at the estimated centre position, the position with the higher value is taken to be the corrected centre position.
[0071] Figure 9 shows a lymph node LN with an estimated centre K based on the centre position of the bounding box BB and a new centre position O which is the corrected centre position based on the centre of mass of the lymph node LN.
[0072] An advantage of finding a more precise central point is that reproducibility of the scale estimation processing (and other processing of the lymph node image) is improved. Even if different users input different centre positions (or different bounding boxes) for a same object shape, finding a consistent central point improves consistency across multiple segmentation results. Furthermore, the accuracy of the segmentation is improved because segmentation processes effectively scan an area from the centre outward. If the starting point is not central, an edge of the object on one side will be reached before another side. If there are other objects or artefacts that look like edges, these may be mistaken for edges if they appear at a position a distance away from the starting point that is the same as the distance away of an actual edge in another direction, for instance.
[0073] Returning to figure 3 and the determination of parameters for scale estimation in step 4100, according to another embodiment, a different parameter that can be found is a parameter for defining a processing range of the scale estimation of the lymph node to be used in step 4200 described below. This processing range defines the limits between which the scale estimation processing will occur. Ideally, the lower limit is a circle with a radius smaller than the actual edge of the lymph node and the higher limit is a circle with a radius larger than the actual edge of the lymph node as shown in figure 10B so that the edge of the lymph node LN is located between the upper 20 and lower 10 limits and is likely to be found by the LoG filter. If the processing range is not defined, the whole of the area/volume within the ROI is processed to find the scale of the lymph node.
[0074] Of course, it is most useful to use all parameters (corrected centre position and processing range for scale estimation, as well as point characteristics) determined in step 4100. In this case, the processing range for the scale estimation process is reduced compared to the ROI and is defined with its centre at the corrected centre position.
[0075] The problem with existing scale estimation processes is that they need to process an entire area or volume within an ROI. The intention with scale estimation is to define a further processing range for the segmentation steps (region expansion and level set). The LoG filter gives rise to a histogram as shown in figure 13A with peaks at positions of higher change in intensity (i.e. higher contrast). Specifically, a peak means that a higher contrast exists between the inside and the outside of a sphere defined by the sigma (σ) value of the Gaussian filter in LoG. Such peaks occur typically at the edges of objects, but the LoG histogram will also have peaks at other points of intensity change that may be misinterpreted as edges. Such peaks are shown in figure 13A and may include image artefacts or other objects within the image that have points containing characteristics defined by the LoG filter as being the same as those of the points in the lymph node.
[0076] Thus, the determination of a reduced or at least more precisely-positioned processing range according to the following description and as shown in figure 13B increases the likelihood that a peak found in a LoG filter histogram is indeed an edge of the lymph node.
[0077] Figures 10A and 10B illustrate the finding of the processing range parameter (Param_scale) of scale estimation (found in step 4100 of figure 3), which is determined by the following equation (6) and which is typically rounded to the nearest whole number:

Param_scale = (HWidthBB × SR) / 20 + offset          (6)

where offset is a value that is predetermined to give a useful processing range according to iterative experiments. In the present embodiment, the offset has a value of 1.
[0078] Thus, with an input bounding box of width 70 voxels, Param_scale is ((35 × 2/3)/20) + 1 ≈ 2.17, rounded to 2.
[0079] After calculating the processing range parameter of scale estimation, the processing range PR (the annular region between limits 10 and 20 in figure 10B) is defined between a lower limit 10 and an upper limit 20. These limits are calculated according to inequality (7) (shown below) on either side of the resized ROI of figure 10A (previously calculated using HWidthBB × SR and shown as dotted line 30 in figure 10B) and have voxels as their units.
HWidthBB × SR − Param_scale ≤ PR ≤ HWidthBB × SR + Param_scale          (7)

[0080] Thus, the processing range for a bounding box of a 70 voxel width is 23 ± 2 voxels, namely between 21 and 25 voxels from the centre of the bounding box or the corrected centre. In other words, the lower limit 10 is 21 voxels from the centre and the upper limit 20 is 25 voxels from the centre of the lymph node LN.
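Equations (6) and (7) above can be sketched as follows; the function names are illustrative, not from the patent:

```python
def scale_parameter(hwidth_bb, sr, offset=1):
    # Param_scale = HWidthBB * SR / 20 + offset (equation (6)), rounded.
    return round(hwidth_bb * sr / 20.0 + offset)

def processing_range(hwidth_bb, sr, offset=1):
    # Annular processing range PR of inequality (7): lower and upper limits,
    # in voxels from the (corrected) centre of the lymph node.
    centre_radius = round(hwidth_bb * sr)
    p = scale_parameter(hwidth_bb, sr, offset)
    return centre_radius - p, centre_radius + p
```

For the 70-voxel example (HWidthBB = 35, SR = 2/3, offset = 1) this gives Param_scale = 2 and a processing range of 21 to 25 voxels, matching paragraph [0080].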
[0081] Turning to figures 13A and 13B, figure 13A shows the processing range (shaded) on a histogram with the LoG result, ∇²G*I, on the y-axis prior to the processing range being determined. G is the Gaussian filter function. σ is standard deviation. σ is one of the parameters in the Gaussian filter function used to define the size of the Gaussian distribution.
Therefore, if σ is changed, the size of the Gaussian distribution is changed. That is to say, the scale of the LoG filter (the size of the LoG operator) is changed. ∇²G is the Laplacian of Gaussian operator, which is described below. The processing range in figure 13A corresponds to the processing range PR in figure 12A, which shows this processing range (dotted line) PR as an area on a 2-D image of the lymph node. How this histogram is obtained is described below with reference to step 4211 shown in Figure 4.
[0082] Assuming the origin of the histogram is the centre of the lymph node, figure 13B shows the LoG histogram with the new processing range. Figure 12B shows the new processing range as an annular shape overlapping the edge of the lymph node with an inner limit as dotted line 10 and outer limit as dotted line 20.
Specifically, figure 12B shows the processing range defined by inequality (7) above. As can be seen in figure 12B, the processing range is defined on either side of the shrunken (or resized) ROI 40.
[0083] Once the parameters for scale estimation have been determined, step 4200 estimates the scale of the lymph node. A first step involves the estimation of the scale of the lymph node in three dimensions. The details of the scale estimation processing are now described with reference to figure 4.
[0084] Step 4210 is an iterative filtering process (repeating steps 4211 and 4212) with a dynamic parameter (σ). In the preferred embodiments, a Laplacian of Gaussian (LoG) filter is used for estimating the scale of the lymph node. Step 4211 comprises performing the LoG filter on the shrunken ROI. The algorithm of the LoG filter is as follows: * first, a Gaussian filter is applied to the input image; * then, a Laplacian filter (second derivative filter) is applied to the Gaussian-smoothed image.
[0085] The above processing can be represented by

∇²(I(x,y,z) * G) = ∇²G * I(x,y,z),          (8)

where I(x,y,z) is an input image, and G represents the Gaussian filter.
[0086] In equation (8), the right hand side means that the ∇²G kernel (LoG kernel) is applied to the input ROI having radius r within the image I(x,y,z). Figure 11 shows the LoG kernel, which is the second derivative of the Gaussian kernel. The width (size) of the LoG kernel is determined by the sigma (σ) value. When the LoG filter is applied to an ROI including a lymph node, the output value ∇²G at the centre of the lymph node takes a large value if the width σ of the LoG kernel matches the width of the lymph node. A LoG kernel may be one-, two- or three-dimensional in order to perform the LoG estimation in one, two or three dimensions simultaneously.
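The scale-matching behaviour described here can be illustrated with a 1-D analogue of the LoG scale search. This is a simplified sketch, not the patent's 3-D implementation: it builds a 1-D LoG kernel, evaluates the scale-normalised response at the centre of a synthetic bright "object", and shows that the response magnitude peaks when σ roughly matches the object's half-width. All names are illustrative:

```python
import math

def log_kernel_1d(sigma):
    # Second derivative of a normalised 1-D Gaussian (a 1-D LoG kernel).
    radius = int(4 * sigma)
    norm = 1.0 / (sigma * math.sqrt(2 * math.pi))
    return [
        norm * (x * x - sigma * sigma) / sigma ** 4 * math.exp(-x * x / (2 * sigma ** 2))
        for x in range(-radius, radius + 1)
    ]

def log_response_at_centre(signal, centre, sigma):
    # Scale-normalised LoG response at one point; the sigma**2 factor makes
    # responses at different scales comparable.
    kernel = log_kernel_1d(sigma)
    radius = (len(kernel) - 1) // 2
    acc = 0.0
    for k, x in zip(kernel, range(-radius, radius + 1)):
        i = centre + x
        if 0 <= i < len(signal):
            acc += k * signal[i]
    return sigma ** 2 * acc

# A bright object of half-width 8 on a dark background: the magnitude of the
# centre response should peak when sigma is close to the object's half-width.
signal = [1.0 if abs(i - 50) <= 8 else 0.0 for i in range(101)]
best = max((s / 2.0 for s in range(8, 30)),
           key=lambda s: abs(log_response_at_centre(signal, 50, s)))
```

For a bright object the matched-scale response is negative (the kernel centre is negative), which is why the magnitude is compared; the patent's 3-D version searches for the largest LoG value in the same spirit.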
[0087] In the embodiment in which the centre point is corrected, the LoG value is calculated not only from the centre point (Cx, Cy, Cz) of the bounding box but also in the neighbourhood volume (or area) shown in figure 8, varying the value of sigma σ (which determines the size of the LoG kernel) to perform the LoG filtering iteratively from all points, i.e. voxels, within the neighbourhood.
[0088] In step 4212, then, the sigma (σ) value and the corresponding voxel position having the largest LoG value are updated if there is a LoG value which is higher than the previously recorded highest LoG value in step 4211. This highest LoG value and the associated σ value are assumed to correspond to a voxel positioned in the centre of the lymph node, and thus the corrected centre position is determined.
[0089] Steps 4211 and 4212 are performed iteratively N times, N corresponding to the number of starting points in the processing neighbourhood. These steps apply a LoG filter iteratively while changing the σ (sigma) value, i.e. the LoG filter is applied to each point in the processing range, one after the other. The approximate scale of the lymph node can be known from the σ value which outputs the largest LoG value. The σ value is converted from a standard deviation to a voxel value by equation (9):

σ = Param_sigma / √3          (9)

where Param_sigma represents the parameter (i.e. the distance from the centre of the ROI) that varies throughout the processing range of scale estimation which is determined in step 4100. The unit of Param_sigma is the voxel. As described above, sigma is standard deviation which is used in the Gaussian filter.
Equation (9) is employed so that the σ value is obtained from a voxel value. That is to say, when the appropriate scale is searched iteratively while changing the radius from 5, 6, 7... to n voxels, such values (lengths measured in voxels) can be input because equation (9) converts from the voxel value to the σ value. In the paper by A. Jirapatnakul and S. Fotin, the equation is represented as d² = 12σ², where "d" is the diameter of the nodule.
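Equation (9) and the cited relation d² = 12σ² can be expressed as a pair of small conversion helpers. This sketch assumes the reconstruction σ = radius/√3 (which follows from d² = 12σ² with d = 2 × radius); the helper names are illustrative:

```python
import math

def sigma_from_radius(radius_voxels):
    # Equation (9): from d**2 = 12 * sigma**2 with d = 2 * radius,
    # radius**2 = 3 * sigma**2, hence sigma = radius / sqrt(3).
    return radius_voxels / math.sqrt(3)

def radius_from_sigma(sigma):
    # Inverse conversion, used when turning the best sigma back into a scale.
    return sigma * math.sqrt(3)
```

This lets the iteration of step 4210 scan Param_sigma in whole voxels while feeding the Gaussian filter a consistent standard deviation.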
[0090] The value of Param_sigma is changed incrementally, one by one in value, in an iterative process. Therefore, the number of iterations is determined in dependence on the extent of the processing range of scale estimation. For example, if the processing range is defined as "from 3 to 11", the number of iterations is 9 (3 to 11 inclusive). As a further example, if the original bounding box was 70 voxels in width, the processing range parameter is from 21 to 25 as calculated above and thus the number of iterations is 5, with Param_sigma increasing (or decreasing) by one voxel for each iteration.
[0091] After finishing the iterative processing 4210, the system can determine the sigma (σ) value and the voxel position which together have the largest LoG value (i.e. a peak in the histogram of figures 13A and 13B) and this position is presumed to be the scale (i.e. outer edge) of the lymph node.
[0092] In step 4220 of figure 4, the scale and/or the centre position of the lymph node is determined from the results of the step 4210. The scale in this case may be the radius of the lymph node and so the sigma value outputted from step 4210 is converted into a radius value, giving an estimated scale for the lymph node in 3-D. The voxel position which has the largest LoG value becomes the centre of the lymph node. The determined scale and/or centre of lymph node may then be used in the next steps of object segmentation.
[0093] According to a further embodiment, if the size of the lymph node cannot be estimated within the defined processing range, it is assumed that either the processing range is too small or is incorrectly positioned. Thus, the scale estimation processing is performed again over a larger range. There are several ways in which the range may be increased and a selection of one or more of these ways may be made based on a (logged) history of other similar processes or on further user input. For example, if it is found that a bounding box is typically made smaller than expected, a way of increasing the processing range or processing neighbourhood may be selected that starts not with a half width of the bounding box, but with a 60% or 70% or other percentage width of the bounding box.
[0094] Alternatively, if it is found that a user tends to position a bounding box somewhat skewed around the lymph node, the processing neighbourhood parameter, Param_shifting, for finding the corrected centre of the lymph node may be made larger.
[0095] Yet further alternatively, the processing range may be increased by increasing Param_scale, for example by increasing the offset or by reducing the denominator or increasing the shrink rate SR in equation (6).

[0096] Returning to figure 3 and the scale estimation process, step 4300 comprises determining the parameters for z-directional scale estimation processing in the case of a 3-D image.
[0097] The bounding box that is used to estimate the scale of the lymph node in the x- and y-directions is of course only input by the user in the x-y plane and so the scale of the lymph node in the z-direction is not alluded to by the user's input.
Thus, further steps may be performed to find the scale estimation of the lymph node in the z-direction (also referred to as the third dimension).

[0098] One way to estimate the scale in the z-direction is to use the scale estimation of the x-y plane and extrapolate it to the z-direction so that a circle, for example, becomes a sphere.
[0099] A second way to estimate the scale in the z-direction is to use the processing range determined for the x-y plane and to perform the scale estimation (e.g. using the LoG filter) in the z-direction based on this processing range.
[00100] Step 4400 of figure 3 comprises the estimation of the z-directional scale of the lymph node, possibly including the modification of the centre position of the lymph node in the z-direction. The algorithm of this step is almost the same as in step 4200. In z-directional scale estimation and z-directional centre position modification, the system uses a z-directional LoG kernel in the LoG calculation. Specifically, the LoG kernel in step 4200 is a 3-dimensional kernel whereas the LoG kernel in this step is a 1-dimensional kernel.
[00101] Step 4800 of figure 3 comprises the creation of the edge-emphasized ROI which is used in the later steps. First, the system applies a median filter (a smoothing filter) to the ROI created earlier. Then a Sobel filter (a first derivative filter to emphasize an edge in an image) is applied to the median-filtered ROI.
[00102] Finally, step 4900 comprises the creation of a smoothed ROI which is used in the later steps. Here, the system applies a Gaussian filter (another smoothing filter) to the ROI created in the step 4800.
[00103] Figure 5 shows the processing flow in a rough segmentation (step 5000 in figure 1) module, specifically the "region growing" process. As mentioned above, scale estimation is useful in object segmentation because it defines starting points and boundaries for the region growing process. The segmentation using the region growing process is performed over all points in a processing range derived from the scale estimation to find points (or voxels) with the characteristics chosen to define the lymph node.
[00104] Each processing step in the rough segmentation process will now be described.
[00105] Step 5100 comprises the determination of the positions of at least one seed point which is/are the starting point(s) of a "region growing" processing. The positions of seed points are set up on the surface of a sphere located at the centre of the imaged lymph node. The radius of the "seed" sphere is determined by

Radius = Scale_in_space / 5          (10)
[00106] In other words, the sphere at the centre of the lymph node on which seed points are specified is, in this embodiment, a fifth of the size of the estimated scale of the lymph node as estimated during the scale estimation processing discussed with respect to step 4200 in figure 3. Of course, other embodiments will have other ways of determining the radius of the sphere. It may be a different fraction of the estimated 3-D scale of the lymph node or it may be an arbitrary size based on likely scales of lymph nodes or even based on the individual scale results for the different dimensions (such as their average or some other function). Alternatively, the seed points may be specified not at positions on the surface of a sphere, but at a centre point either as estimated based on the position of the bounding box or as corrected using the LoG filter. Yet alternatively, the seed points will be points having a specific parameter value, such as a specific voxel intensity value.
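The seed-sphere construction of equation (10) might be sketched as follows. The spherical sampling scheme shown is one possible choice and is not specified by the patent; all names are illustrative:

```python
import math

def seed_sphere_radius(estimated_scale_3d):
    # Equation (10): the "seed" sphere is a fifth of the estimated 3-D scale.
    return estimated_scale_3d / 5.0

def seed_points_on_sphere(centre, radius, n_lat=4, n_lon=8):
    # Sample seed positions on the sphere surface using spherical coordinates.
    cx, cy, cz = centre
    points = []
    for i in range(1, n_lat + 1):
        theta = math.pi * i / (n_lat + 1)   # polar angle, poles excluded
        for j in range(n_lon):
            phi = 2 * math.pi * j / n_lon   # azimuthal angle
            points.append((cx + radius * math.sin(theta) * math.cos(phi),
                           cy + radius * math.sin(theta) * math.sin(phi),
                           cz + radius * math.cos(theta)))
    return points
```

Every sampled point lies exactly on the sphere surface, so each seed starts at the same distance from the estimated centre.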
[00107] Once the seed points have been specified, a mask image is created in step 5200 to constrain excessive region expansion.
The mask image is created by using low/high thresholds which are hard-coded in the program. For example, the high threshold is 200 and the low threshold is -20 in the Gaussian-filtered image. These values are determined from historical data. In general, most organs and tissues in a body satisfy the above range (from 200 HU to -20 HU in CT value (Hounsfield Units)).
[00108] Alternatively, for example, the mask may be specified by referring to a histogram of voxel intensity, with intensities that fall below a certain value being a low threshold below which voxels having that intensity are outside the mask. The same may be done for high intensity. Alternatively or additionally, the mask may be a shape slightly larger than the estimated scale of the lymph nodes so that voxels with similar properties to the seed points but that lie outside the expected lymph node shape are not included in the segmented region.
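A minimal sketch of such a threshold mask, assuming CT-like voxel values and the -20/200 thresholds quoted above (the function name is illustrative):

```python
def make_mask(volume, low=-20, high=200):
    # Voxels whose value lies inside [low, high] are candidates for region
    # growing; everything outside the range is masked out.
    return [[[low <= v <= high for v in row] for row in plane] for plane in volume]
```

A voxel of -100 (e.g. fat/air mixture) or 300 (e.g. bone) would thus fall outside the mask, while soft tissue near 0 stays inside.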
[00109] In step 5300, the region-growing method proper is performed. In a preferred embodiment, the edge-emphasized ROI created in step 4800 is used as an input image to the region-growing processing. That is to say, the seed points are installed within the edge-emphasized ROI. The regions are then grown from these seed points to adjacent points depending on a region membership parameter. The parameter could be, for example, pixel intensity, grey level texture, or colour.
[00110] Starting with the seed points, the voxels around the seed points (the "target" voxels) are included in the region if they satisfy the region membership parameter and if they are not outside the thresholds or the mask.
[00111] The judgement of whether a target voxel should be included in the region or not is defined using one threshold in the equation below, though further thresholds will be derivable by the skilled person. The threshold (Threshold_edge) is computed using equation (11):

Threshold_edge = 5.0 × V_average          (11)

where V_average is the average edge value in the sphere region which is defined in step 5100. If an edge value (the voxel value of the Sobel-filtered ROI) of a target voxel is lower than the upper threshold, the target voxel is included in the segmented region.
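The membership test of equation (11) combined with growth into adjacent points can be sketched as a simple breadth-first region grower. This 2-D sketch is illustrative only (the patent operates on 3-D voxels, and the threshold would be 5.0 times the average edge value inside the seed sphere):

```python
from collections import deque

def region_grow_2d(edge_image, seeds, mask, threshold):
    # Grow from the seed pixels into 4-connected neighbours whose edge value
    # stays below the threshold and which lie inside the mask.
    h, w = len(edge_image), len(edge_image[0])
    region = set(seeds)
    queue = deque(seeds)
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and mask[ny][nx] and edge_image[ny][nx] < threshold):
                region.add((ny, nx))
                queue.append((ny, nx))
    return region
```

Growth stops at pixels with a strong (Sobel-like) edge response, so the region fills the low-edge interior and halts at the boundary.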
[00112] Step 5400 comprises the application of morphological opening to remove any noise within the segmented region.
Specifically, the morphological opening is applied to the binary ROI created by step 5300.
[00113] With the use of plural seed points, there is the possibility that more than one region in fact grows in parallel. Step 5500 comprises applying labelling processing (also known as connected-component labelling) to remove all regions except for the largest one in the binary ROI created in step 5400. Labelling processing labels each region in an image. Segmentation methods often output binary images. In the binary image, the system cannot understand how many regions exist, how many pixels (or voxels) each region has and so on, because the binary image has only two values, "foreground" or "background". That is why the labelling processing is performed. A system can then obtain the number of regions, the number of voxels of each region and so on.
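Connected-component labelling followed by retention of the largest region, as in step 5500, can be sketched in 2-D (illustrative names; 4-connectivity assumed):

```python
from collections import deque

def label_regions(binary):
    # Connected-component labelling (4-connectivity) of a 2-D binary image.
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not labels[y][x]:
                current += 1
                queue = deque([(y, x)])
                labels[y][x] = current
                while queue:
                    cy, cx = queue.popleft()
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current

def keep_largest_region(binary):
    # Remove every labelled region except the largest, as in step 5500.
    labels, n = label_regions(binary)
    if n == 0:
        return binary
    sizes = {}
    for row in labels:
        for label in row:
            if label:
                sizes[label] = sizes.get(label, 0) + 1
    biggest = max(sizes, key=sizes.get)
    return [[1 if label == biggest else 0 for label in row] for row in labels]
```

After labelling, the system knows how many regions exist and how many pixels each has, which a raw binary image cannot convey.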
[00114] Figure 6 shows the processing flow of the refinement process, which is step 6000 in figure 1. The main technology of the refinement is the level set method as described above. Each step of the refinement process is now described.
[00115] First, in step 6100, a mask image is created to constrain excessive region expansions. The mask image is created by using low and high thresholds, i.e. minimum and maximum Gaussian-filtered values (low: -20; high: 200), which are hard-coded in the program. The parameters of the mask are changed depending on the estimated scale of the lymph node. The estimated scale described with reference to figure 4 is used for defining the mask region. Specifically, the mask region is created by using the low and high thresholds (low: -20, high: 200) and the estimated scale (voxels that have a longer distance than the "estimated scale * 2" from the corrected central point are regarded as "not processing regions").

[00116] Next, in step 6200, the level set segmentation proper is performed. The details of a possible level set algorithm are described in the paper "Level Set Methods, 1st ed." published by J. Sethian in 1996. In a preferred embodiment, the rough segmentation result of step 5500 is used as the image to determine the initial zero level set (i.e. the "front") in level set processing. Specifically, the zero level sets that define the initial interface are set on the borderline of the segmented region created in step 5500. At each iteration, the interface changes according to forces defined by the segmented surface of the lymph node. A narrow band technique is employed in the iterative processing of level set, which means that at least some of the iterated processes are restricted to a thin band of active voxels immediately surrounding (or immediately adjacent) the interface. In this step, the system uses level set computation to obtain the binary image that represents the segmented region (lymph node) and the background (other tissues). For example, the voxel value "1" represents the lymph node region, and the voxel value "0" represents the other tissues in the binary image. Voxels with value "1" are thus segmented from the rest.
[00117] Finally, after the level set computation, in step 6300, a transformation method is performed to resize the ROI including the region segmented by step 6200 from its shrunken size (in step 3500) to the original resolution. This is to overlap the segmentation result on the original input image and to display it. The segmentation result obtained in step 6200 is resized by a transformation method such as nearest neighbour interpolation.
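Nearest-neighbour resizing of the kind mentioned for step 6300 can be sketched in 2-D (an illustrative sketch; the patent resizes a 3-D ROI):

```python
def resize_nearest_2d(image, factor):
    # Nearest-neighbour interpolation: each output pixel copies the value of
    # the nearest source pixel, returning the image to a larger resolution.
    h, w = len(image), len(image[0])
    new_h, new_w = int(h * factor), int(w * factor)
    return [[image[min(h - 1, int(y / factor))][min(w - 1, int(x / factor))]
             for x in range(new_w)]
            for y in range(new_h)]
```

Nearest-neighbour interpolation is a natural choice here because the segmentation result is binary: it never introduces intermediate values the way linear interpolation would.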
Modifications

[00118] Instead of using a LoG filter for the scale estimation, other filters can be used, such as a Hough transform. The use of the Hough transform enables the detection of the sphere using the intensity information at the voxels that are located within a distance r (a parameter related to radius) from the centre of the lymph node. Thus, rather than changing σ and applying the LoG filter, the Hough transform can be applied with a changing r value.

[00119] The present invention is most usefully implemented using a processor such as a personal computer (PC) and a computer program that may be stored in a non-transitory format on a storage medium.

Claims (19)

Claims

1. A method for scale estimation of an object in an image comprising: displaying the image containing the object; inputting a boundary shape around the object; defining a processing range based on the boundary shape; and performing scale estimation of the object within the processing range.
2. A method according to claim 1, wherein: inputting the boundary shape around the object is performed using a user interface by a user.
3. A method according to claim 1 or 2, further comprising: determining an outline of the boundary shape, wherein defining the processing range comprises defining the processing range based on the outline.
4. A method according to claim 1, 2 or 3, further comprising: calculating a region of interest based on the boundary shape, wherein defining the processing range comprises defining the processing range based on an outline of the region of interest.
5. A method according to claim 4, wherein: calculating the region of interest comprises calculating a circle with a radius equal to an average of at least one distance of the boundary shape outline from a central point of the boundary shape.
6. A method according to claim 4, wherein: calculating the region of interest comprises: i) obtaining a value equal to half of a longest length of the input boundary shape, referred to as HWidthBB; and ii) calculating a side length of the region of interest in voxels according to the following formula:

side length of ROI = 91    if HWidthBB < 30 voxels
                     111   else if HWidthBB < 40
                     131   else if HWidthBB < 50
                     151   else if HWidthBB < 60
                     251   otherwise
7. A method according to any one of claims 4 to 6, further comprising: reducing the size of the region of interest before using it to determine the processing range.

8. A method according to claim 7, wherein: reducing the size of the region of interest comprises multiplying the region of interest by a shrink rate SR according to the following formula:

SR = 1    if HWidthBB < 10 voxels
     3/4  else if HWidthBB < 20
     2/3  else if HWidthBB < 60
     1/2  otherwise

wherein HWidthBB is half of the longest length or diameter of the boundary shape.

9. A method according to any one of claims 4 to 8, wherein: defining the processing range comprises defining an annular shape with: an outer limit based on a radius of the region of interest plus a distance calculated based on the size of the boundary shape; and an inner limit based on the radius of the region of interest minus a distance calculated based on the size of the boundary shape.

10. A method according to any preceding claim, wherein the processing range PR is defined according to the following formula:

HWidthBB × SR − Param_scale ≤ PR ≤ HWidthBB × SR + Param_scale

wherein:

Param_scale = (HWidthBB × SR) / 20 + offset,

SR is a shrink rate determined according to

SR = 1    if HWidthBB < 10 voxels
     3/4  else if HWidthBB < 20
     2/3  else if HWidthBB < 60
     1/2  otherwise,

HWidthBB is half of the longest length of the boundary shape, and offset is a predetermined number.

11. A method according to claim 1, further comprising: determining a central point of the boundary shape, wherein defining the processing range comprises defining a range with the central point as its centre.

12. A method according to claim 1, further comprising: determining a central point of the boundary shape; performing a position estimation process of the centre of the object based on the central point of the boundary shape; and defining the processing range with the estimated centre of the object as the centre of the processing range.

13.
A method for scale estimation of an object in an image comprising: displaying the image containing the object; inputting a boundary shape around the object; determining a central point of the boundary shape; performing a position estimation process of the centre of the object based on the central point of the boundary shape; defining a processing range for scale estimation with the estimated centre of the object as the centre of the processing range; and performing scale estimation of the object within the processing range.

14. A method according to claim 12 or 13, wherein performing the position estimation process of the centre of the object comprises: obtaining the central point of the boundary shape; defining a neighbourhood around the central point; and performing analysis of points in the neighbourhood to find the centre of the object.

15. A method according to claim 14, wherein the neighbourhood around the central point (Cx, Cy, Cz) is defined by:

Cx - Param_shifting ≤ x ≤ Cx + Param_shifting
Cy - Param_shifting ≤ y ≤ Cy + Param_shifting
Cz - Param_shifting ≤ z ≤ Cz + Param_shifting

wherein:

Param_shifting = (HWidthBB · SR) / 10,

SR = 1 if HWidthBB < 10 voxels
     3/4 else if HWidthBB < 20
     2/3 else if HWidthBB < 60
     1/2 otherwise, and

HWidthBB is half of the longest length of the boundary shape.

16. A method according to claim 14 or 15, wherein the analysis comprises a Laplacian-of-Gaussian filter based on each point within the neighbourhood.

17. A method according to any preceding claim, wherein the scale estimation is performed using a Laplacian-of-Gaussian filter based on each point within the processing range.

18. A method according to any preceding claim, wherein the object is from a group containing at least a lymph node, nodule, tumour and lesion; and the image is a medical image.

19.
A method of segmenting an object from an image comprising: obtaining the image; performing scale estimation of the object according to any one of claims 1 to 18; performing rough segmentation for estimating a circular shape representing the object; performing refinement comprising a level-set method for refining the shape of the object based on the circular shape; and extracting the refined shape of the object from the image.

20. A method of estimating a centre of an object in an image comprising: displaying the image containing the object; inputting a boundary shape around the object; determining a central point of the boundary shape; and performing a position estimation process of the centre of the object based on the central point of the boundary shape.

21. A program which, when run on a computer, causes the computer to perform the method of any one of claims 1 to 20.

22. A storage medium having stored thereon a program according to claim 21.

23. An information processing apparatus for estimating a scale of an object in an image comprising: display means for displaying at least one image; input means for receiving an input of a boundary shape around the object in the at least one image; calculating means for calculating a processing range based on the input boundary shape; and processing means for performing scale estimation or central point estimation of the object within the calculated processing range.

24. An information processing apparatus for estimating a centre of an object in an image comprising: display means for displaying at least one image; input means for receiving an input of a boundary shape around the object in the at least one image; calculating means for calculating a central point of the boundary shape; and analysis means for analysing a neighbourhood around the central point of the boundary shape to find a likely centre of the object.

25.
A method for scale estimation of an object in an image substantially as herein described and as illustrated in figures 1 to 6 and 12.

26. A method for object segmentation substantially as herein described and as illustrated in figure 1.

Amendments to the claims have been filed as follows:

Claims

1. A method for scale estimation of an object in an image comprising: displaying the image containing the object; inputting a boundary shape around the object; determining a feature point of the inputted boundary shape; defining a processing range based on the inputted boundary shape and the feature point; and performing scale estimation of the object within the processing range.

2. A method according to claim 1, wherein: inputting the boundary shape around the object is performed using a user interface by a user.

3. A method according to claim 1 or 2, further comprising: determining an outline of the boundary shape, wherein defining the processing range comprises defining the processing range based on the outline.

4. A method according to claim 1, 2 or 3, further comprising: calculating a region of interest based on the boundary shape, wherein defining the processing range comprises defining the processing range based on an outline of the region of interest.

5. A method according to claim 4, wherein: calculating the region of interest comprises calculating a circle with a radius equal to an average of at least one distance of the boundary shape outline from a central point of the boundary shape.

6. A method according to claim 4, wherein: calculating the region of interest comprises: i) obtaining a value equal to half of a longest length of the input boundary shape, referred to as HWidthBB; and ii) calculating a side length of the region of interest in voxels according to the following formula:

side length of ROI = 91 voxels if HWidthBB < 30
                     111 else if HWidthBB < 40
                     131 else if HWidthBB < 50
                     151 else if HWidthBB < 60
                     251 otherwise

7.
A method according to any one of claims 4 to 6, further comprising: reducing the size of the region of interest before using it to determine the processing range.
8. A method according to claim 7, wherein: reducing the size of the region of interest comprises multiplying the region of interest by a shrink rate SR according to the following formula:

SR = 1 if HWidthBB < 10 voxels
     3/4 else if HWidthBB < 20
     2/3 else if HWidthBB < 60
     1/2 otherwise

wherein HWidthBB is half of the longest length or diameter of the boundary shape.
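The shrink rate of claim 8 is another piecewise lookup. A minimal Python sketch of one possible reading (names are illustrative, not from the patent):

```python
def shrink_rate(hwidth_bb: float) -> float:
    """Shrink rate SR applied to the region of interest, as a function of
    HWidthBB (half the longest length of the boundary shape, in voxels)."""
    if hwidth_bb < 10:
        return 1.0
    elif hwidth_bb < 20:
        return 3 / 4
    elif hwidth_bb < 60:
        return 2 / 3
    return 1 / 2

print(shrink_rate(15))  # 0.75: no shrinking only for very small shapes
```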
9. A method according to any one of claims 4 to 8, wherein: defining the processing range comprises defining an annular shape with: an outer limit based on a radius of the region of interest plus a distance calculated based on the size of the boundary shape; and an inner limit based on the radius of the region of interest minus a distance calculated based on the size of the boundary shape.
10. A method according to any preceding claim, wherein the processing range PR is defined according to the following formula:

HWidthBB · SR - Param_scale ≤ PR ≤ HWidthBB · SR + Param_scale,

wherein:

Param_scale = (HWidthBB · SR) / 20 + Offset,

SR is a shrink rate determined according to

SR = 1 if HWidthBB < 10 voxels
     3/4 else if HWidthBB < 20
     2/3 else if HWidthBB < 60
     1/2 otherwise,

HWidthBB is half of the longest length of the boundary shape, and Offset is a predetermined number.
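Read together, claim 10 centres the processing range on HWidthBB · SR and gives it a half-width of Param_scale = HWidthBB · SR / 20 + Offset. A self-contained Python sketch of this interval (an illustrative reading; names and the numeric Offset below are mine):

```python
def processing_range(hwidth_bb: float, offset: float) -> tuple[float, float]:
    """(lower, upper) bounds of the processing range PR of claim 10."""
    # Shrink rate SR: the piecewise rule repeated in claim 10
    if hwidth_bb < 10:
        sr = 1.0
    elif hwidth_bb < 20:
        sr = 3 / 4
    elif hwidth_bb < 60:
        sr = 2 / 3
    else:
        sr = 1 / 2
    centre = hwidth_bb * sr
    param_scale = centre / 20 + offset
    return centre - param_scale, centre + param_scale

# HWidthBB = 30 -> SR = 2/3 -> centre = 20, Param_scale = 20/20 + 2 = 3,
# so the range is approximately (17, 23)
lo, hi = processing_range(hwidth_bb=30.0, offset=2.0)
print(lo, hi)
```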
11. A method according to any of claims 1 to 10, wherein the feature point is a central point of the boundary shape, and wherein defining the processing range comprises defining a range with the central point as its centre.
12. A method according to claim 1, further comprising: determining a central point of the boundary shape; performing a position estimation process of a centre of the object based on the central point of the boundary shape; and defining the processing range with the estimated centre of the object as the centre of the processing range.
13. A method according to claim 12, wherein performing the position estimation process of the centre of the object comprises: obtaining the central point of the boundary shape; defining a neighbourhood around the central point; and performing analysis of points in the neighbourhood to find the centre of the object.
14. A method according to claim 13, wherein the neighbourhood around the central point (Cx, Cy, Cz) is defined by:

Cx - Param_shifting ≤ x ≤ Cx + Param_shifting
Cy - Param_shifting ≤ y ≤ Cy + Param_shifting
Cz - Param_shifting ≤ z ≤ Cz + Param_shifting

wherein:

Param_shifting = (HWidthBB · SR) / 10,

SR = 1 if HWidthBB < 10 voxels
     3/4 else if HWidthBB < 20
     2/3 else if HWidthBB < 60
     1/2 otherwise, and

HWidthBB is half of the longest length of the boundary shape.
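The neighbourhood of claim 14 is an axis-aligned cube about the boundary-shape centre, with per-axis half-width Param_shifting = HWidthBB · SR / 10. A minimal Python sketch of this construction (illustrative names, not the patent's own implementation):

```python
def neighbourhood(centre: tuple[float, float, float],
                  hwidth_bb: float) -> tuple[tuple[float, float], ...]:
    """Per-axis (min, max) bounds of the search neighbourhood of claim 14,
    around the boundary-shape central point (Cx, Cy, Cz)."""
    # Shrink rate SR: the same piecewise rule quoted in claim 14
    if hwidth_bb < 10:
        sr = 1.0
    elif hwidth_bb < 20:
        sr = 3 / 4
    elif hwidth_bb < 60:
        sr = 2 / 3
    else:
        sr = 1 / 2
    param_shifting = hwidth_bb * sr / 10
    return tuple((c - param_shifting, c + param_shifting) for c in centre)

# HWidthBB = 15 -> SR = 3/4 -> Param_shifting = 15 * 0.75 / 10 = 1.125
bounds = neighbourhood((50.0, 60.0, 70.0), hwidth_bb=15.0)
print(bounds)
```

Each point inside these bounds would then be scored (per claim 15, with a Laplacian-of-Gaussian response) to pick the most likely object centre.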
15. A method according to claim 13 or 14, wherein the analysis comprises a Laplacian-of-Gaussian filter based on each point within the neighbourhood.
16. A method according to any preceding claim, wherein the scale estimation is performed using a Laplacian-of-Gaussian filter based on each point within the processing range.
17. A method according to any preceding claim, wherein the object is from a group containing at least a lymph node, nodule, tumour and lesion; and the image is a medical image.
18. A method of segmenting an object from an image comprising: obtaining the image; performing scale estimation of the object according to any one of claims 1 to 17; performing rough segmentation for estimating a circular shape representing the object; performing refinement comprising a level-set method for refining the shape of the object based on the circular shape; and extracting the refined shape of the object from the image.

19. A program which, when run on a computer, causes the computer to perform the method of any one of claims 1 to 18.

20. A storage medium having stored thereon a program according to claim
19.

21. An information processing apparatus for estimating a scale of an object in an image comprising: display means for displaying at least one image; input means for receiving an input of a boundary shape around the object in the at least one image; calculating means for calculating a feature point and a processing range, the processing range being calculated based on the inputted boundary shape and the feature point; and processing means for performing scale estimation or central point estimation of the object within the calculated processing range.

22. A method for scale estimation of an object in an image substantially as herein described and as illustrated in figures 1 to 6 and 12.

23. A method for object segmentation substantially as herein described and as illustrated in figure 1.
GB1415252.4A 2014-08-28 2014-08-28 Scale estimation for object segmentation in a medical image Active GB2529813B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1415252.4A GB2529813B (en) 2014-08-28 2014-08-28 Scale estimation for object segmentation in a medical image


Publications (3)

Publication Number Publication Date
GB201415252D0 GB201415252D0 (en) 2014-10-15
GB2529813A true GB2529813A (en) 2016-03-09
GB2529813B GB2529813B (en) 2017-11-15

Family

ID=51752279


Country Status (1)

Country Link
GB (1) GB2529813B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021233017A1 (en) * 2020-05-18 2021-11-25 腾讯科技(深圳)有限公司 Image processing method and apparatus, and device and computer-readable storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110838125B (en) * 2019-11-08 2024-03-19 腾讯医疗健康(深圳)有限公司 Target detection method, device, equipment and storage medium for medical image

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010048753A1 (en) * 1998-04-02 2001-12-06 Ming-Chieh Lee Semantic video object segmentation and tracking




Similar Documents

Publication Publication Date Title
US9741123B2 (en) Transformation of 3-D object for object segmentation in 3-D medical image
Tareef et al. Multi-pass fast watershed for accurate segmentation of overlapping cervical cells
Maitra et al. Technique for preprocessing of digital mammogram
Mustra et al. Robust automatic breast and pectoral muscle segmentation from scanned mammograms
JP5993653B2 (en) Image processing apparatus, image processing method, and program
EP1646964B1 (en) Method and arrangement for determining an object contour
US8165376B2 (en) System and method for automatic detection of rib metastasis in computed tomography volume
GB2461558A (en) Image Segmentation
WO2007148284A2 (en) A method, a system and a computer program for determining a threshold in an image comprising image values
Wu et al. Image segmentation
Zinoveva et al. A texture-based probabilistic approach for lung nodule segmentation
Telea Feature preserving smoothing of shapes using saliency skeletons
Kaftan et al. A two-stage approach for fully automatic segmentation of venous vascular structures in liver CT images
GB2529813A (en) Scale estimation for object segmentation in a medical image
Karimov et al. Guided volume editing based on histogram dissimilarity
JP6516321B2 (en) Shape feature extraction method, shape feature extraction processing device, shape description method and shape classification method
Sachin et al. Brain tumor detection based on bilateral symmetry information
Khan et al. Segmentation of single and overlapping leaves by extracting appropriate contours
JP6397453B2 (en) Image processing apparatus, image processing method, and program
Purnama et al. Multiple Thresholding Methods for Extracting & Measuring Human Brain and 3D Reconstruction
Li et al. Object segmentation by saliency-seeded and spatial-weighted region merging
Wei et al. Research on ct image segmentation of computer-aided liver operation
Ananta Contour Extraction of Drosophila Embryos Using Active Contours in Scale Space
Mohammadi Alamdari Semi-automatic segmentation of mitochondria on electron microscopy images using kalman filtering approach
Boudier 3D Processing and Analysis with ImageJ