WO2003003303A2 - Image segmentation - Google Patents

Image segmentation

Info

Publication number
WO2003003303A2
Authority
WO
WIPO (PCT)
Prior art keywords
grey
pixel
pixel unit
image
level intensity
Prior art date
Application number
PCT/GB2002/002945
Other languages
French (fr)
Other versions
WO2003003303A3 (en)
Inventor
Keith J. Burnham
Olivier Haas
Maria Gloria Bueno
Original Assignee
Coventry University
Priority date
Filing date
Publication date
Application filed by Coventry University
Priority to EP02748982A (EP1399888A2)
Priority to US10/482,196 (US20040258305A1)
Priority to AU2002319397A (AU2002319397A1)
Priority to CA002468456A (CA2468456A1)
Publication of WO2003003303A2
Publication of WO2003003303A3

Classifications

    • G06T 7/0012 Biomedical image inspection
    • G06T 7/11 Region-based segmentation
    • G06T 7/136 Segmentation; edge detection involving thresholding
    • G06T 7/155 Segmentation; edge detection involving morphological operators
    • G06T 7/187 Segmentation; edge detection involving region growing, region merging or connected component labelling
    • G06V 10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G06T 2207/20152 Watershed segmentation
    • G06T 2207/20156 Automatic seed setting
    • G06T 2207/30008 Bone
    • G06V 2201/03 Recognition of patterns in medical or anatomical images


Abstract

In a method of segmenting an image a first, seed pixel unit is selected from a first group of pixel units in which the pixel units all have substantially the same grey-level intensity. The grey-level intensity of said first pixel unit is compared with the grey-level intensity of each of selected adjacent pixel units of said image and those pixel units with grey levels within a selected range are assigned as a pixel unit of the same region as said first pixel unit. This comparison process is repeated for each of the pixel units in the image, those already having been assigned being ignored. A further seed pixel unit is selected from a further group of pixel units in which the pixel units all have substantially the same grey-level intensity and the comparison process repeated for all of the unassigned pixel units. Further seed pixel units are selected and the comparison process repeated until all the pixel units of the image have been assigned. A watershed transform is then applied to provide the segmented image.

Description

Image Segmentation
The present invention relates to a process for segmenting images.
There are many fields in which images such as digital images need to be processed in order to enhance the image for viewing and/or further processing. One such field is in medical imaging where, in X-ray Computed Tomography (CT) for example, the images viewed by the medical specialist need to be sufficiently clear for a proper diagnosis to be made and treatment to be given.
In Computed Tomography a computer stores a large amount of data from a selected region of the scanned object, for example, a human body, making it possible to determine the spatial relationship of radiation-absorbing structures within the scanning x-ray beam. Once an image has been acquired by scanning it is then subjected to segmentation, which is a technique for delineating the various organs within the scanned area.
Segmentation can be defined as the process which partitions an input image into its relevant constituent parts or objects, using image attributes such as pixel intensity, spectral values and textural properties. The output of this process is an image represented in terms of edges, regions and their interrelationships. Segmentation is a key step in image processing and analysis, but it is one of the most difficult and intricate tasks. Many methods have been proposed to overcome image segmentation problems, but all of them are application dependent and problem specific.
The general objective of segmentation of medical images is to find regions which represent single anatomical structures. This makes feasible tasks such as interactive visualisation and automatic measurement of clinical parameters. Medical segmentation is becoming an increasingly important step for a number of clinical investigations; these include:
a) Identifying anatomical areas of interest for diagnosis, treatment or surgery planning; b) Pre-processing for multi-modal image registration and improved correlation of anatomical areas of interest; and
c) Tumour measurement for diagnosis and therapy.
Over the last decade there have been a number of advances in Radiotherapy Treatment Planning (RTP) and treatment delivery. These have resulted in the need for systems that can generate complex treatment plans that are sensitive to the patient's anatomy (the geometrical shape and the location of the organs) for placement of the radiation beams. In such systems the complete and precise segmentation or contouring of therapy-relevant structures (namely the gross tumour volume (GTV), clinical target volume (CTV) and adjacent non-target normal tissues, together termed the Planning Target Volume (PTV)) is a crucial step and one major bottleneck in the whole treatment planning process. It is estimated that 66% of all tumour patients are referred to radiation therapy. About 40% of these can be treated effectively with current methods. Another 40% are not suitable for treatment because the disease has spread too far. The remaining 20% could be treated if the planning methods were generally available.
It is only by displaying the relevant structures that the clinical oncologist can devise an optimal plan that will treat the PTV to a given prescribed radiation dose while minimising radiation of non-target tissues, thereby maximising the therapeutic gain of treatment. In common practice, the segmentation process is usually done manually slice by slice, and for a typical set of 40 slices, it can be a time consuming and tedious process.
The present invention seeks to provide an improved method of segmentation of an image.
Accordingly, the present invention provides a method of segmenting an image comprising:
selecting a pixel unit from a first group of pixel units in which the pixel units all have substantially the same grey-level intensity; comparing the grey-level intensity of said first pixel unit with the grey-level intensity of each of a plurality of selected pixel units of said image;
assigning each said selected pixel unit as a pixel unit of the same region as said first pixel unit in response to the grey-level intensity of said adjacent pixel unit falling within a preselected grey-level intensity range;
selecting a further pixel unit from a further group of pixel units in which the pixel units have substantially the same grey-level intensity;
comparing the grey-level intensity of said further pixel unit with the grey-level intensity of each of a plurality of selected pixel units of said image, wherein each selected adjacent pixel unit which is already assigned as a pixel unit of a region is ignored;
assigning each unassigned said selected pixel unit as a pixel unit of the same region as said further pixel unit in response to the grey-level intensity of said selected adjacent pixel unit falling within a preselected further grey-level intensity range;
and repeating the above steps until all of the pixel units in the image have been assigned to a region.
The present invention also provides a method of segmenting an image comprising the steps of:
(a) selecting a first pixel unit from a first group of pixel units in which the pixel units all have substantially the same grey-level intensity;
(b) selecting a first grey-level intensity range relative to the grey-level intensity of said first pixel unit;
(c) comparing the grey-level intensity of said first pixel unit with the grey-level intensity of each of selected adjacent pixel units of said image;
(d) assigning each said selected adjacent pixel unit as a pixel unit of the same region as said first pixel unit in response to the grey-level intensity of said adjacent pixel unit falling within said first grey-level intensity range;
(e) comparing the grey-level intensity of said first pixel unit with the grey-level intensity of each of selected next adjacent pixel units of said image;
(f) assigning each said selected next adjacent pixel unit as a pixel unit of the same region as said first pixel unit in response to the grey-level intensity of said next adjacent pixel unit falling within said first grey-level intensity range;
(g) repeating steps (e) and (f) for each of the pixel units in the image;
(h) selecting a further pixel unit from a further group of pixel units in which the pixel units have substantially the same grey-level intensity;
(i) selecting a further grey-level intensity range relative to the grey-level intensity of said further pixel unit;
(j) comparing the grey-level intensity of said further pixel unit with the grey-level intensity of each of selected adjacent pixel units of said image, wherein each selected adjacent pixel unit which is already assigned as a pixel unit of a region is ignored;
(k) assigning each unassigned said selected adjacent pixel unit as a pixel unit of the same region as said further pixel unit in response to the grey-level intensity of said selected adjacent pixel unit falling within said further grey-level intensity range;
(l) comparing the grey-level intensity of said further pixel unit with the grey-level intensity of each of selected next adjacent pixel units of said image;
(m) assigning each said unassigned selected next adjacent pixel unit as a pixel unit of the same region as said further pixel unit in response to the grey-level intensity of said selected next adjacent pixel unit falling within said further grey-level intensity range;
(n) repeating steps (l) and (m) for each of the pixel units in the image;
(o) and repeating steps (h) to (n) until all of the pixel units in the image have been assigned to a region.
Preferably, said first group of pixel units is the largest group of pixel units in the image and said further group of pixel units is the next largest group of pixel units.
The term "pixel unit" is used herein to refer to a single pixel or a group of adjacent pixels which are treated as a single pixel.
In a preferred form of the invention the method further comprises the steps of building a mosaic image, deriving the gradient of the mosaic image and applying a watershed transform to said gradient to provide said segmented image.
Advantageously, the method further comprises the step of applying a merging operation to said segmented image to reduce segmentation of the image.
Preferably, each said pixel unit is a single pixel.
The present invention is further described hereinafter, by way of example, with reference to the accompanying drawings, in which: Figure 1 is a view of an image produced by a CT scan;
Figure 1 a is a flow chart of an image processing technique according to the present invention which can be applied to the image of Figure 1;
Figure 2 is an image produced from the image of Figure 1 by application of a Watershed transform;
Figure 3 is a mosaic image generated from the image of Figure 1;
Figure 4 is an image produced by a Watershed transformation of the image of Figure 3;
Figures 5A and 5B are frequency histograms of two of a set of image "slices" similar to that of Figure 1;
Figure 6 is a frequency histogram showing a Gaussian distribution curve and a non-Gaussian distribution curve superimposed on one another;
Figure 7 is a simplified flowchart showing the process of operation of a preferred method according to the present invention;
Figure 8 is a detailed flowchart of part A of the process of Figure 7;
Figure 9 is a detailed flowchart of part B of the process of Figure 7; and
Figure 10 is a chart of histograms illustrating the effect of a couch and background on the histogram of Figure 9.
Referring to the drawings, Figure 1 shows an original grey scale image which is produced by a CT scan. Figure 1a is a flow chart of an image processing technique according to the present invention which can be applied to the image of Figure 1. In the process, the image is transformed into a mosaic image and the gradient image obtained. It is the magnitude of the gradient which is used, in order to avoid negative peaks. A morphological gradient operator would avoid the production of negative values and produces an image which can be used directly by a Watershed transform. The Watershed transform followed by a merging process is then applied to provide the final image of Figure 2. As can be seen, the number of discrete regions in the image of Figure 2 is considerable and would normally be of the order of several thousands; in this particular example the number of regions is 7,968. This image would then need to be processed manually by a skilled operator in order to produce a reasonable image for viewing by the medical practitioner (given the large number of regions this may become prohibitive in terms of time).
In order to reduce the number of regions produced by the Watershed transformation, in the preferred form of the process the original image is digitally coded and stored with each unit (byte) of the digitally stored image representing the grey scale level of a pixel of the original image.
As can be seen from Figure 2, when attempting to segment the image of Figure 1 the initial Watershed transform of the gradient image provides very unsatisfactory results, since many apparently homogeneous regions are fragmented into small pieces. In the preferred process according to the present invention the Watershed transformation is applied to a simplified image. In the simplified image the homogeneous regions of the original image are merged; the simplified image of Figure 3 is made of a patchwork of pieces of uniform grey-level and is referred to as a partition or mosaic image.
Although the loss of information which occurs when the original image of Figure 1 is transformed into the mosaic image of Figure 3 is significant, the main contours of the initial image of Figure 1 are preserved. In such a simplified image, regions with identical grey levels may actually include different structures due to overgrowing. To solve this problem the simplified image is further transformed. To begin the process, the pixels of the image are stored in a temporary list (the boundary list) of pixels which are to be analysed. This list contains spatial information (x and y co-ordinates) and the intensity value of the pixels (grey-level).
In order to calculate the mosaic image of Figure 3 a multi-region growing algorithm is used. This starts with a seed pixel which can be provided by the user, who selects a seed point in the original image of Figure 1. This has previously been effected manually, for example by using a pointing device such as a mouse. The seed point chosen would normally be inside a region of interest in the image.
In order to carry out this process automatically, a frequency histogram of the grey-levels of the original image is first of all determined. In this way, each grey-level is referenced to each pixel within the original image which belongs to that particular level. Figures 5A and 5B show histograms of two image slices similar to that of Figure 1, in which it can be seen that various parts of the body such as muscles, organs and bone structures are characterised by or exhibit different grey-levels and therefore different distributions in the histogram.
A predetermined grey-level in each distribution is taken as corresponding to the intensity value of a representative pixel of the region which is represented by that distribution. The pixels of each distribution which form the representative pixels are selected as the seed pixels for each growing operation. By automatically selecting these seed pixels from the histogram a step of manually pointing at the image to specify the location of the seed pixels is avoided.
Each distribution of the histogram may be a Gaussian or non-Gaussian distribution and Figure 6 shows a diagrammatic representation of two distribution curves 10, 12 of a frequency histogram. The curves represent two different regions of the histogram but are superimposed on one another to illustrate the differences between a Gaussian and a non-Gaussian distribution. Curve 10 shows a Gaussian distribution with the threshold minimum and maximum grey levels for the region represented by the curve 10 being chosen at Lmin and Lmax (points 14 and 16 on the curves). Curve 12 shows a non-Gaussian distribution superimposed on curve 10 with the minimum and maximum grey levels for the region represented by the curve also being chosen at Lmin and Lmax. In practice, because the curve 12 would be in a different part of the histogram the threshold grey levels would be different values, but they are shown here having the same values for ease of explanation.
In the preferred method, the predetermined grey level used to define the representative pixel (seed pixel) for each region is the average grey level in each distribution.
Where a Gaussian distribution of the grey levels in a region occurs or is assumed (curve 10), since the threshold grey levels for the region are equidistant from the distribution peak, the average grey level in the distribution is equal to the grey level corresponding to the peak of the distribution and is Lave = (Lmin + Lmax)/2.
Where, however, a non-Gaussian distribution of the grey levels in a region occurs, the average grey level in the distribution will not be equal to the peak of the distribution (curve 12).
It will be appreciated that in such a non-Gaussian distribution the predetermined grey level used to define the representative pixel (seed pixel) for each region could be the average grey level, the grey level corresponding to the peak of the distribution or the grey level corresponding to the central position between the thresholds Lmin and Lmax.
Once the histogram has been created the grey level values of the pixels are sorted according to frequency in descending order, ie the pixels having an intensity value which occurs most frequently are placed first in the sorting order. The effect of this is that the representative pixels will occur at the beginning of the ordered boundary list. It will be appreciated, therefore, that the region that occupies the largest portion of the image is grown first, the region occupying the second largest portion is grown second and so on.
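By way of illustration, the ordering step can be sketched in a few lines of Python; the function name and the NumPy-based representation below are illustrative choices, not the patent's own implementation.

```python
import numpy as np

def order_seed_levels(image):
    """Sort grey levels by frequency, most frequent first, so that the
    region occupying the largest portion of the image is grown first.
    A sketch; the patent's boundary-list data structure differs."""
    levels, counts = np.unique(image, return_counts=True)
    order = np.argsort(counts)[::-1]  # descending frequency
    return levels[order], counts[order]
```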
The growing process for the first region begins with the first pixel at the head of the ordered boundary list.
The first pixel in the list is scanned in order to determine whether or not the grey-level of the pixel lies within a certain intensity range. If the scanned pixel meets the requirement it is transferred to a further store in a new list (the region list). If the pixel does not meet the requirement then it is ignored.
If the scanned pixel meets the requirement then the eight immediately adjacent, surrounding pixels (which may or may not belong to distributions other than the one currently being created) of the image are tested to determine if they also meet the requirement and can therefore be included in the region being grown. If a neighbour pixel being tested has already been assigned to a region then it is ignored. If the neighbour pixel has not already been assigned to a region and passes a statistical test for homogeneity criteria (ie if the pixel grey-level lies within a certain intensity range) it is inserted in the region list and its identifier value in the original image is changed to the region value. This procedure is repeated until all the pixels in the image belong to one of the regions. It will be appreciated that whilst the scanning refers to eight adjacent pixels, the scan may be effected using other connectivities, e.g. four or six.
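A minimal sketch of this growing step, assuming a NumPy image and a label array initialised to zero for unassigned pixels, is given below; the queue-based traversal and all names are illustrative, and the intensity-window test it applies is the one set out in the following paragraphs.

```python
import numpy as np
from collections import deque

def grow_region(image, labels, seed, region_id, l_ave, t_w):
    """Grow one region from a seed pixel using 8-connectivity: a pixel
    joins the region if it is unassigned (labels == 0) and its grey
    level lies within the window |L(x,y) - Lave| <= Tw."""
    h, w = image.shape
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if labels[y, x] != 0:
            continue  # already assigned to a region: ignore
        if abs(int(image[y, x]) - l_ave) > t_w:
            continue  # fails the homogeneity criterion
        labels[y, x] = region_id  # insert the pixel in the region list
        for dy in (-1, 0, 1):  # test the eight surrounding pixels
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (dy or dx) and 0 <= ny < h and 0 <= nx < w:
                    queue.append((ny, nx))
```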
The following test is used as a basis for including a pixel in a region and applies for Gaussian distributions. It also applies for non-Gaussian distributions where the average grey level intensity Lave is used to determine the seed pixel.
Here a pixel p(x,y) of intensity L(x,y) is included in the region list if it passes the similarity criteria, i.e., if the following condition is satisfied:
|L(x,y) - Lave| ≤ Tw
where Lave is the average grey intensity level and Tw is a threshold "window" control parameter. In the case of curve 10 (Gaussian) of Figure 6, Lave is equal to the peak value grey level and is midway between Lmax and Lmin. Thus Tw is equal to (Lmax - Lmin)/2. The parameter Lave acts as a central value for growing the region, and the parameter Tw acts as a thresholding distance in pixel intensity units from the central value.
In a non-Gaussian distribution, where the average grey level intensity Lave is not equal to the peak value grey level and therefore is not midway between Lmin and Lmax, two thresholds Tw1 and Tw2 are needed, where:
Tw1 + Tw2 = Lmax - Lmin
Thus:
L(x,y) - Lave ≤ Tw1 for L(x,y) > Lave
Lave - L(x,y) ≤ Tw2 for L(x,y) < Lave
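Written out directly from these two conditions, the inclusion test for a non-Gaussian region might look as follows; a sketch, and setting Tw1 = Tw2 = Tw recovers the Gaussian case.

```python
def passes_similarity(l_xy, l_ave, t_w1, t_w2):
    """Two-sided similarity test for a non-Gaussian distribution, with
    one threshold above the central grey level Lave and another below
    it (Tw1 + Tw2 = Lmax - Lmin)."""
    if l_xy > l_ave:
        return l_xy - l_ave <= t_w1
    return l_ave - l_xy <= t_w2
```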
Before region growing is started, the values of the level parameter Lave and window control parameter Tw must be set appropriately. The value of Lave may be set to the intensity value of the seed pixel, which in turn represents the central value of the region to be grown. Alternatively, it may be obtained from a previous processing step which includes a statistical analysis of pixels around the region of interest. In this case Lave can be set equal to the mean of the sample region. Usually, a 20 x 20 pixel matrix is taken for the sample, but larger samples introduce a degree of data smoothing and may give more accurate calculation of the region statistics. However, if the sample area is too large then the computational time can become too long.
The values of the parameter Tw can be set interactively or automatically.
To set the value of Tw interactively the user can specify the value in a window which forms part of the GUI (graphical user interface) control panel for the algorithm. A range of results can be quickly observed simply by setting the threshold value Tw at different levels in order to extract different regions from the original image. As will be appreciated, if the seed pixel remains the same, a higher value for the threshold Tw will normally result in larger regions being grown. Changing the seed pixel for the same threshold value Tw will also produce a different grown region pattern.
If the same value is used for the threshold value parameter Tw then the process produces good results with high contrasting objects within the image, such as pelvic bones and body contour. However, this is not the case when segmenting soft tissues such as the bladder and seminal vesicles, where the contrasts between objects are relatively low. Using a high threshold value Tw results in a relatively small number of regions being produced (typically several hundred), which results in a loss of structures. With a high value of Tw it is possible to obtain segmentation of just the bones and the body contour.
If a low threshold value Tw is used this results in over-segmentation, with a relatively large number of regions (typically several thousand) being produced.
The results are therefore dependent on the threshold value Tw, so in the growing process an adaptive threshold value Tw is applied to each region instead of a single threshold value Tw for the whole image.
To set the threshold value Tw automatically, it can be computed by the region growing algorithm, which examines the statistics of the pixels within a sample region R of about 20 pixels in size (the figure of 20 may, of course, be varied as required). This sample region R is located centrally over the seed point of the region. The window threshold parameter Tw is computed by multiplying the standard deviation of the sample region by a scaling factor K which is dependent on the signal to noise ratio in the image. A scaling factor K of value 2.0 has been found to give reasonable results for CT and Magnetic Resonance (MR) images. The threshold value Tw for each region is calculated automatically by taking into account the histogram information. The threshold value Tw for each region is calculated prior to and independently of the growing process, firstly by looking for sequences of pixels in the histogram that follow a "peak like" pattern. To avoid identifying false peaks because of noise, the process ignores peaks which have a pixel width less than a preselected number, typically seven pixels. If the grey-level spacing between adjacent peaks is relatively large then the threshold value Tw for the region being grown can also be large. Where the adjacent peaks are close together on the grey-level scale then the threshold value Tw will need to be relatively small.
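The statistics-based variant of this computation can be sketched as follows; the 20 x 20 sample window and the scaling factor K = 2.0 come from the text, while the clipping at image borders and the function name are added assumptions.

```python
import numpy as np

def adaptive_threshold(image, seed, half_size=10, k=2.0):
    """Compute Tw = K * std(sample region R), where R is a window of
    about 20 pixels centred on the seed point; K of about 2.0 is
    reported to give reasonable results for CT and MR images."""
    y, x = seed
    sample = image[max(0, y - half_size):y + half_size,
                   max(0, x - half_size):x + half_size]
    return k * float(np.std(sample))
```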
The segmented image may still contain some false regions that are produced as a result of CT artifacts. These are undesired regions which are not wanted by the clinicians and are removed through a merging process.
The merging process looks at adjacent regions and will merge a first region into an adjacent second region if the number of elements of the first region is:
(a) considerably fewer (by a preselected amount) than the number of elements of the second region, and
(b) less than a threshold number E which represents a minimum number of elements in a region above which a merge is not allowed.
An element is a preselected area of a region and is typically a single pixel.
When the first region is merged into the second region the intensity level of each of the pixels is adjusted to that of the pixels of the second region.
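A sketch of such a merging pass follows. The text does not fix how the absorbing neighbour is chosen, so merging each qualifying region into its largest adjacent region, and the ratio standing in for "considerably fewer", are assumptions.

```python
import numpy as np
from scipy import ndimage

def merge_small_regions(labels, mosaic, max_elements, ratio=0.1):
    """Merge a first region into an adjacent second region when the
    first has considerably fewer elements than the second and fewer
    than max_elements; its pixels then take the grey level of the
    absorbing region. Region sizes are not refreshed between merges
    in this simple sketch."""
    out = mosaic.astype(float).copy()
    sizes = np.bincount(labels.ravel())
    for region in range(1, sizes.size):
        if sizes[region] == 0 or sizes[region] >= max_elements:
            continue
        mask = labels == region
        ring = ndimage.binary_dilation(mask) & ~mask  # adjacent pixels
        neighbours = np.unique(labels[ring])
        neighbours = neighbours[neighbours > 0]
        if neighbours.size == 0:
            continue
        target = neighbours[np.argmax(sizes[neighbours])]  # largest neighbour
        if sizes[region] < ratio * sizes[target]:
            out[mask] = out[labels == target].mean()  # adopt its grey level
            labels[mask] = target
    return labels, out
```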
The resulting image is the mosaic image shown in Figure 3. It is a simplified image made of a mosaic of homogeneous pieces of constant grey-levels and is a homotopy modification of the original image.
The boundaries of the grey scale areas in the image are differentiated to provide boundary ridges to which a Watershed transform can be applied.
If a Watershed transform is used on this gradient image, the number of Watershed lines is reduced and the computational process is optimised in terms of time and memory requirements.
The above process can be applied in different domains without previous knowledge of the regions of interest within the original image. The preferred method is based on homotopy modification of the original image prior to applying the Watershed transformation. The homotopy modification of the original image produces a mosaic image.
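As a rough illustration, the pipeline of mosaic gradient plus watershed can be sketched with scipy and scikit-image; this is an illustrative tool choice, not the software used in the patent.

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def watershed_of_mosaic(mosaic):
    """Derive the morphological gradient of the mosaic image (dilation
    minus erosion, so no negative values arise) and apply the watershed
    transform; markers default to the local minima of the gradient."""
    gradient = (ndimage.grey_dilation(mosaic, size=(3, 3))
                - ndimage.grey_erosion(mosaic, size=(3, 3)))
    return watershed(gradient)
```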
Using the above process over-segmentation is considerably reduced and satisfactory results in terms of accuracy, computational time and memory are obtained.
Figure 7 illustrates a flow chart showing the steps which are carried out in order to obtain the image of Figure 4. Figure 8 is a flow chart showing in more detail the steps for region growing of Figure 7, and Figure 9 shows in more detail the steps for obtaining the gradient of the mosaic image of Figure 3 with Gaussian smoothing. It will be appreciated that other ways of obtaining the gradient used by the Watershed transform can be used, for example morphological gradient operators.
Analysing Histograms
The technique of analysing histograms aims to determine a seed pixel and a threshold.
Figure 10 shows three different histograms 20, 22 and 24, similar to those of Figures 5A and 5B, of a pelvic CT image. Graph 20 is from the original CT image, graph 22 is graph 20 with the couch removed and graph 24 is graph 20 without the couch and background.
Referring to graph 20, this contains four distinct peaks 30, 32, 34 and 36. These have been found automatically using relational operators to define peaks in the histogram and a minimum height to allow small peaks to be disregarded. The first peak 30 is by far the largest, typically being composed of about half of all the image pixels. It is located at the low intensity end of the histogram and analysis of the image shows that this represents mainly air with some background counts.
The second peak 32, very close to the first, is much smaller, with only about 1.5% of pixels at the peak grey-level. This represents much of the image of the couch on which the patient lies, although this will vary between couches.
The final two peaks 34, 36 are located further along the histogram and very close together. This indicates a degree of overlap in intensities between regions. These are separated by finding the local minimum between the peaks, using a similar method to that used to find peaks automatically. The darker peak 34 represents fat and soft tissue. The brighter peak 36 represents muscle and organs. These pixels include the bladder and prostate.
Note that the bones and rectum region, which include a wide range of grey-levels, are not represented by peaks but by valleys or plateaux. The interior of the rectum is located at the grey-levels between peaks 32 and 34, as depicted in the top left image in Figure 10. Finally, the bones can be found at grey-levels above the fourth peak 36.
It has been observed that the removal of the couch from the CT by pre-processing or the removal of the background can affect the histogram; indeed the first two peaks 30, 32 may disappear, as shown in graph 24. Note that the number of pixels in the region A between 0 and 120 is much reduced compared to graph 22.
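A comparable automatic peak search can be sketched with scipy's find_peaks; the minimum width of about seven grey levels follows the text, while the height cut-off here is an illustrative value.

```python
import numpy as np
from scipy.signal import find_peaks

def histogram_peaks(image, min_width=7, min_height=100):
    """Find peaks in the grey-level histogram of an 8-bit image,
    discarding peaks narrower than about seven grey levels or below a
    minimum height, as when locating peaks 30, 32, 34 and 36 above."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    peaks, _ = find_peaks(hist, height=min_height, width=min_width)
    return peaks  # grey levels at which qualifying peaks occur
```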
The threshold and seed points for various parts of the histograms are set out below.

rectum
The threshold value Tw = (Lmax,A - Lmin,A)/2
The seed point = (Lmax,A + Lmin,A)/2

bones
The threshold value Tw = (Lmax,D - Lmin,D)/2
The seed point = (Lmax,D + Lmin,D)/2

OAR type 1
The threshold value Tw = (Lmax,B - Lmin,B)/2
The seed point = (Lmax,B + Lmin,B)/2

OAR type 2
The threshold value Tw = (Lmax,C - Lmin,C)/2
The seed point = (Lmax,C + Lmin,C)/2
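All four pairs follow one pattern, the interval midpoint for the seed and its half-width for the threshold, so a single helper covers them; a trivial sketch with assumed names.

```python
def seed_and_threshold(l_min, l_max):
    """For a histogram interval [Lmin, Lmax]: seed = (Lmax + Lmin)/2 and
    Tw = (Lmax - Lmin)/2, as tabulated above for the rectum, bones and
    the two OAR types."""
    return (l_max + l_min) / 2.0, (l_max - l_min) / 2.0
```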
To overcome this loss of information in the histogram, the original code was modified such that the rectum can be identified from the sharp cut-off, below which no pixels are found. This cut-off grey-level has been used to define the start of the lowest threshold region in a modified image.
The result of applying the multi-region growing gives a simplified image made of a mosaic of homogeneous pieces of constant grey-levels (the mean grey-level of the grown region) with the same properties as the mosaic image. This produces a homotopy modification of the original image and consequently of the gradient image. Using the watershed transform on this simplified image, the number of watershed lines and the computational process in terms of time and memory requirements are optimised. Compared to a standard, multithresholding region growing process without a mosaic image, the method of the present invention produces a segmented image with less overgrowing of regions while reducing the number of regions which would be produced by watershed alone.
It will be appreciated that the invention has application outside of the medical field, such as military applications, robotics or any application which involves pattern recognition schemes.

Claims

1 A method of segmenting an image comprising:
(a) selecting a first pixel unit from a first group of pixel units in which the pixel units all have substantially the same grey-level intensity;
(b) selecting a first grey-level intensity range relative to the grey-level intensity of said first pixel unit;
(c) comparing the grey-level intensity of said first pixel unit with the grey-level intensity of each of selected adjacent pixel units of said image;
(d) assigning each said selected adjacent pixel unit as a pixel unit of the same region as said first pixel unit in response to the grey-level intensity of said adjacent pixel unit falling within said first grey-level intensity range;
(e) comparing the grey-level intensity of said first pixel unit with the grey-level intensity of each of selected next adjacent pixel units of said image;
(f) assigning each said selected next adjacent pixel unit as a pixel unit of the same region as said first pixel unit in response to the grey-level intensity of said next adjacent pixel unit falling within said first grey-level intensity range;
(g) repeating steps (e) and (f) for each of the pixel units in the image;
(h) selecting a further pixel unit from a further group of pixel units in which the pixel units have substantially the same grey-level intensity;
(i) selecting a further grey-level intensity range relative to the grey-level intensity of said further pixel unit;
(j) comparing the grey-level intensity of said further pixel unit with the grey-level intensity of each of selected adjacent pixel units of said image, wherein each selected adjacent pixel unit which is already assigned as a pixel unit of a region is ignored;
(k) assigning each unassigned said selected adjacent pixel unit as a pixel unit of the same region as said further pixel unit in response to the grey-level intensity of said selected adjacent pixel unit falling within said further grey-level intensity range;
(l) comparing the grey-level intensity of said further pixel unit with the grey-level intensity of each of selected next adjacent pixel units of said image;
(m) assigning each said unassigned selected next adjacent pixel unit as a pixel unit of the same region as said further pixel unit in response to the grey-level intensity of said selected next adjacent pixel unit falling within said further grey-level intensity range;
(n) repeating steps (l) and (m) for each of the pixel units in the image;
(o) and repeating steps (h) to (n) until all of the pixel units in the image have been assigned to a region.
2 A method as claimed in claim 1 wherein:
said first group of pixel units is the largest group of pixel units in the image;
and said further group of pixel units is the next largest group of pixel units.
3 A method as claimed in claim 1 or 2 further comprising the steps of:
(p) building a mosaic image;
(q) deriving the gradient of the mosaic image; and
(r) applying a watershed transform to said gradient to provide said segmented image.
4 A method as claimed in claim 3 further comprising the step of applying a merging operation to said segmented image to reduce segmentation of the image.
5 A method as claimed in claim 4 wherein a region is merged into an adjacent region if the number of pixel units in said region is less than a preselected number.
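For illustration only: the claims do not say which adjacent region should absorb an undersized one, so the sketch below simply takes the first adjacent label it encounters:

    import numpy as np

    def merge_small_regions(labels, min_size):
        # Claims 4 and 5: any region with fewer than `min_size` pixel
        # units is merged into an adjacent region. Counts are computed
        # once, so the result depends on the visiting order.
        h, w = labels.shape
        counts = np.bincount(labels.ravel())
        for small in np.where(counts < min_size)[0]:
            if small == 0 or counts[small] == 0:
                continue  # skip the unlabelled value and empty bins
            rows, cols = np.nonzero(labels == small)
            neighbour = None
            for r, c in zip(rows, cols):
                for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if 0 <= rr < h and 0 <= cc < w and labels[rr, cc] != small:
                        neighbour = labels[rr, cc]
                        break
                if neighbour is not None:
                    break
            if neighbour is not None:
                labels[labels == small] = neighbour
        return labels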
6 A method as claimed in any of claims 1 to 5 wherein each said pixel unit is a single pixel.
7 A method as claimed in any of claims 1 to 6 wherein the step of selecting said first and further pixel units comprises creating a frequency histogram of the grey level values of said image and selecting a predetermined grey level value in each distribution of said histogram to define said first and further pixel units.
8 A method as claimed in claim 7 wherein the predetermined grey level value for said first pixel unit is chosen from the largest distribution in the histogram, and for each successive further pixel unit is chosen from the next successive largest distribution in the histogram.
9 A method as claimed in claim 7 or 8 wherein said predetermined grey level is the average grey level of the distribution.
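One possible reading of claims 7 and 8, sketched with scipy (an 8-bit image is assumed, the `prominence` value is an illustrative tuning parameter the claims do not specify, and the peak value is used as the representative grey level per claim 11; claim 9 would take the average of the distribution instead):

    import numpy as np
    from scipy.signal import find_peaks

    def seed_grey_levels(image, prominence=100):
        # Build the frequency histogram of the grey level values
        # (claim 7), locate one representative value per distribution,
        # and return them ordered from the largest distribution to the
        # smallest (claim 8).
        hist, _ = np.histogram(image, bins=256, range=(0, 256))
        peaks, _ = find_peaks(hist, prominence=prominence)
        order = np.argsort(hist[peaks])[::-1]
        return peaks[order]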
10 A method as claimed in any of claims 1 to 9 wherein the distribution is a Gaussian distribution and each adjacent pixel unit is assigned to a region when the following condition is met:
| L_ave - L(x,y) | < T_w
where:
L_ave = the average grey level intensity of the distribution;
L(x,y) = the grey level intensity of the selected pixel unit in the distribution; and
T_w = a preselected threshold parameter value in the distribution.
11 A method as claimed in claim 10 wherein L_ave is the peak value grey level and T_w = (L_max - L_min)/2, where L_max and L_min are preselected upper and lower grey level values for the distribution.
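Claims 10 and 11 reduce to a one-line membership test; a sketch, reading the reconstructed condition as an absolute difference, which the symmetry of a Gaussian distribution suggests:

    def in_gaussian_window(l_xy, l_ave, l_max, l_min):
        # Claim 11: T_w = (L_max - L_min) / 2; claim 10: the pixel joins
        # the region when |L_ave - L(x,y)| < T_w.
        t_w = (l_max - l_min) / 2.0
        return abs(l_ave - l_xy) < t_w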
12 A method as claimed in any of claims 1 to 9 wherein the distribution is a non-Gaussian distribution and each adjacent pixel unit is assigned to a region when the following conditions are met:
L(x,y) - L_ave < T_w1 for L(x,y) > L_ave
L_ave - L(x,y) < T_w2 for L(x,y) < L_ave
where:
L_ave = a preselected grey level intensity within the distribution;
L(x,y) = the grey level intensity of the selected pixel unit;
T_w1 = a preselected lower threshold parameter value in the distribution; and
T_w2 = a preselected upper threshold parameter value in the distribution.
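The non-Gaussian case of claim 12 splits the window into separate thresholds above and below L_ave; a direct transcription:

    def in_asymmetric_window(l_xy, l_ave, t_w1, t_w2):
        # Claim 12: one threshold for pixels brighter than L_ave,
        # another for pixels darker than it.
        if l_xy > l_ave:
            return l_xy - l_ave < t_w1
        return l_ave - l_xy < t_w2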
13 A method as claimed in claim 12 wherein the value of L_ave is obtained from a statistical analysis of at least a portion of the distribution.
14 A method as claimed in claim 13 wherein the value of L_ave is equal to the mean of a selected sample region within the distribution.
15 A method as claimed in claim 13 wherein said selected sample region comprises a 20 x 20 pixel matrix.
16 A method of segmenting an image comprising:
selecting a pixel unit from a first group of pixel units in which the pixel units all have substantially the same grey-level intensity;
comparing the grey-level intensity of said first pixel unit with the grey-level intensity of each of a plurality of selected pixel units of said image;
assigning each said selected pixel unit as a pixel unit of the same region as said first pixel unit in response to the grey-level intensity of said adjacent pixel unit falling within a preselected grey-level intensity range;
selecting a further pixel unit from a further group of pixel units in which the pixel units have substantially the same grey-level intensity;
comparing the grey-level intensity of said further pixel unit with the grey-level intensity of each of a plurality of selected pixel units of said image, wherein each selected adjacent pixel unit which is already assigned as a pixel unit of a region is ignored;
assigning each unassigned said selected pixel unit as a pixel unit of the same region as said further pixel unit in response to the grey-level intensity of said selected adjacent pixel unit falling within a preselected further grey-level intensity range;
and repeating the above steps until all of the pixel units in the image have been assigned to a region.
PCT/GB2002/002945 2001-06-27 2002-06-27 Image segmentation WO2003003303A2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP02748982A EP1399888A2 (en) 2001-06-27 2002-06-27 Image segmentation
US10/482,196 US20040258305A1 (en) 2001-06-27 2002-06-27 Image segmentation
AU2002319397A AU2002319397A1 (en) 2001-06-27 2002-06-27 Image segmentation
CA002468456A CA2468456A1 (en) 2001-06-27 2002-06-27 Image segmentation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0115615.7 2001-06-27
GBGB0115615.7A GB0115615D0 (en) 2001-06-27 2001-06-27 Image segmentation

Publications (2)

Publication Number Publication Date
WO2003003303A2 true WO2003003303A2 (en) 2003-01-09
WO2003003303A3 WO2003003303A3 (en) 2003-09-18

Family

ID=9917385

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2002/002945 WO2003003303A2 (en) 2001-06-27 2002-06-27 Image segmentation

Country Status (7)

Country Link
US (1) US20040258305A1 (en)
EP (1) EP1399888A2 (en)
AU (1) AU2002319397A1 (en)
CA (1) CA2468456A1 (en)
GB (1) GB0115615D0 (en)
PL (1) PL367727A1 (en)
WO (1) WO2003003303A2 (en)

Families Citing this family (56)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6985612B2 (en) * 2001-10-05 2006-01-10 Mevis - Centrum Fur Medizinische Diagnosesysteme Und Visualisierung Gmbh Computer system and a method for segmentation of a digital image
US8275091B2 (en) 2002-07-23 2012-09-25 Rapiscan Systems, Inc. Compact mobile cargo scanning system
US7963695B2 (en) 2002-07-23 2011-06-21 Rapiscan Systems, Inc. Rotatable boom cargo scanning system
US8804899B2 (en) 2003-04-25 2014-08-12 Rapiscan Systems, Inc. Imaging, data acquisition, data transmission, and data distribution methods and systems for high data rate tomographic X-ray scanners
GB0309371D0 (en) * 2003-04-25 2003-06-04 Cxr Ltd X-Ray tubes
US10483077B2 (en) 2003-04-25 2019-11-19 Rapiscan Systems, Inc. X-ray sources having reduced electron scattering
US8451974B2 (en) 2003-04-25 2013-05-28 Rapiscan Systems, Inc. X-ray tomographic inspection system for the identification of specific target items
GB0812864D0 2008-07-15 2008-08-20 Cxr Ltd Cooling anode
US9208988B2 (en) 2005-10-25 2015-12-08 Rapiscan Systems, Inc. Graphite backscattered electron shield for use in an X-ray tube
US8094784B2 (en) 2003-04-25 2012-01-10 Rapiscan Systems, Inc. X-ray sources
US9113839B2 2003-04-25 2015-08-25 Rapiscan Systems, Inc. X-ray inspection system and method
GB0309385D0 (en) 2003-04-25 2003-06-04 Cxr Ltd X-ray monitoring
US8837669B2 (en) 2003-04-25 2014-09-16 Rapiscan Systems, Inc. X-ray scanning system
GB0309387D0 (en) * 2003-04-25 2003-06-04 Cxr Ltd X-Ray scanning
US8243876B2 (en) 2003-04-25 2012-08-14 Rapiscan Systems, Inc. X-ray scanners
GB0309379D0 (en) * 2003-04-25 2003-06-04 Cxr Ltd X-ray scanning
GB0309374D0 (en) * 2003-04-25 2003-06-04 Cxr Ltd X-ray sources
US8223919B2 (en) 2003-04-25 2012-07-17 Rapiscan Systems, Inc. X-ray tomographic inspection systems for the identification of specific target items
US7949101B2 (en) 2005-12-16 2011-05-24 Rapiscan Systems, Inc. X-ray scanners and X-ray sources therefor
GB0309383D0 (en) 2003-04-25 2003-06-04 Cxr Ltd X-ray tube electron sources
GB0525593D0 (en) 2005-12-16 2006-01-25 Cxr Ltd X-ray tomography inspection systems
US6928141B2 (en) 2003-06-20 2005-08-09 Rapiscan, Inc. Relocatable X-ray imaging system and method for inspecting commercial vehicles and cargo containers
US7327880B2 (en) * 2004-03-12 2008-02-05 Siemens Medical Solutions Usa, Inc. Local watershed operators for image segmentation
US7394933B2 (en) * 2004-11-08 2008-07-01 Siemens Medical Solutions Usa, Inc. Region competition via local watershed operators
US7689038B2 (en) * 2005-01-10 2010-03-30 Cytyc Corporation Method for improved image segmentation
US7894568B2 (en) * 2005-04-14 2011-02-22 Koninklijke Philips Electronics N.V. Energy distribution reconstruction in CT
US7471764B2 (en) 2005-04-15 2008-12-30 Rapiscan Security Products, Inc. X-ray imaging system having improved weather resistance
US8184119B2 (en) * 2005-07-13 2012-05-22 Siemens Medical Solutions Usa, Inc. Fast ambient occlusion for direct volume rendering
EP1945596B8 (en) 2005-09-15 2015-11-04 Anuvia Plant Nutrients Holdings LLC Organic containing sludge to fertilizer alkaline conversion process
US9046465B2 (en) 2011-02-24 2015-06-02 Rapiscan Systems, Inc. Optimization of the source firing pattern for X-ray scanning systems
US8036423B2 (en) * 2006-10-11 2011-10-11 Avago Technologies General Ip (Singapore) Pte. Ltd. Contrast-based technique to reduce artifacts in wavelength-encoded images
US8068668B2 (en) 2007-07-19 2011-11-29 Nikon Corporation Device and method for estimating if an image is blurred
US8260048B2 (en) * 2007-11-14 2012-09-04 Exelis Inc. Segmentation-based image processing system
GB0803641D0 (en) 2008-02-28 2008-04-02 Rapiscan Security Products Inc Scanning systems
GB0803644D0 (en) 2008-02-28 2008-04-02 Rapiscan Security Products Inc Scanning systems
GB0809110D0 (en) 2008-05-20 2008-06-25 Rapiscan Security Products Inc Gantry scanner systems
GB0816823D0 (en) 2008-09-13 2008-10-22 Cxr Ltd X-ray tubes
US9013596B2 (en) * 2008-09-24 2015-04-21 Nikon Corporation Automatic illuminant estimation that incorporates apparatus setting and intrinsic color casting information
WO2010036249A1 (en) * 2008-09-24 2010-04-01 Nikon Corporation Autofocus technique utilizing gradient histogram distribution characteristics
US8860838B2 (en) 2008-09-24 2014-10-14 Nikon Corporation Automatic illuminant estimation and white balance adjustment based on color gamut unions
WO2010036240A1 (en) * 2008-09-24 2010-04-01 Nikon Corporation Image segmentation from focus varied images using graph cuts
WO2010036247A1 (en) * 2008-09-24 2010-04-01 Nikon Corporation Principal components analysis based illuminant estimation
GB0901338D0 (en) 2009-01-28 2009-03-11 Cxr Ltd X-Ray tube electron sources
WO2012160511A1 (en) 2011-05-24 2012-11-29 Koninklijke Philips Electronics N.V. Apparatus and method for generating an attenuation correction map
WO2012160520A1 (en) 2011-05-24 2012-11-29 Koninklijke Philips Electronics N.V. Apparatus for generating assignments between image regions of an image and element classes
US9008372B2 (en) * 2011-05-31 2015-04-14 Schlumberger Technology Corporation Method for determination of spatial distribution and concentration of contrast components in a porous and/or heterogeneous sample
US9218933B2 2011-06-09 2015-12-22 Rapiscan Systems, Inc. Low-dose radiographic imaging system
US8781187B2 (en) * 2011-07-13 2014-07-15 Mckesson Financial Holdings Methods, apparatuses, and computer program products for identifying a region of interest within a mammogram image
MX350070B (en) 2013-01-31 2017-08-25 Rapiscan Systems Inc Portable security inspection system.
CN112950747A (en) 2013-09-13 2021-06-11 斯特拉克斯私人有限公司 Method and system for assigning color to image, computer readable storage medium
US9626476B2 (en) 2014-03-27 2017-04-18 Change Healthcare Llc Apparatus, method and computer-readable storage medium for transforming digital images
US9235903B2 (en) * 2014-04-03 2016-01-12 Sony Corporation Image processing system with automatic segmentation and method of operation thereof
US9773325B2 (en) * 2015-04-02 2017-09-26 Toshiba Medical Systems Corporation Medical imaging data processing apparatus and method
JP2018126389A (en) * 2017-02-09 2018-08-16 キヤノン株式会社 Information processing apparatus, information processing method, and program
CN109840914B (en) * 2019-02-28 2022-12-16 华南理工大学 Texture segmentation method based on user interaction
US11551903B2 (en) 2020-06-25 2023-01-10 American Science And Engineering, Inc. Devices and methods for dissipating heat from an anode of an x-ray tube assembly

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2827060B1 (en) * 2001-07-05 2003-09-19 Eastman Kodak Co METHOD FOR IDENTIFYING THE SKY IN AN IMAGE AND IMAGE OBTAINED THANKS TO THIS PROCESS

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
BUENO G ET AL: "Watershed transform for segmenting medical images" SYSTEMS SCIENCE, 1997, WROCLAW TECH. UNIV. PRESS, POLAND, vol. 23, no. 3, pages 95-106, XP008020084 ISSN: 0137-1223 *
GONZALEZ R C ET AL: "Digital Image Processing" , DIGITAL IMAGE PROCESSING, XX, XX, PAGE(S) 458-465 , READING, MASSACHUSETTS XP002248957 sections 7.4.1, 7.4.2 *
VICKERS J P ET AL: "Histogram-based segmentation of pelvic computed tomographic images" WORKSHOP ON EUROPEAN SCIENTIFIC AND INDUSTRIAL COLLABORATION. WESIC '99. PROMOTING: ADVANCED TECHNOLOGIES IN MANUFACTURING, WORKSHOP ON EUROPEAN SCIENTIFIC AND INDUSTRIAL COLLABORATION. WESIC '99. PROMOTING: ADVANCED TECHNOLOGIES IN MANUFACTURING, NE, pages 291-298, XP008020078 1999, Newport, South Wales, UK, Univ. Wales College, Newport, UK ISBN: 1-899274-23-5 *
ZEUGE, W.: "Skripte zur Mathematik - Wahrscheinlichkeitsrechnung, Statistik, Ausgleichsrechnung" 1998, UNIVERSITÄT HAMBURG, HAMBURG WANDSBEK, PAGES 49-53, XP002248958 page 53, section 2.3.4 figures 2.1,2.2 *
ZUCKER S W: "REGION GROWING: CHILDHOOD AND ADOLESCENCE" COMPUTER GRAPHICS AND IMAGE PROCESSING, ACADEMIC PRESS. NEW YORK, US, vol. 5, no. 3, September 1976 (1976-09), pages 382-399, XP001149042 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7091974B2 (en) * 2001-11-30 2006-08-15 Eastman Kodak Company Method for selecting and displaying a subject or interest in a still digital image
WO2005057493A1 (en) * 2003-12-10 2005-06-23 Agency For Science, Technology And Research Methods and apparatus for binarising images
GB2463141A (en) * 2008-09-05 2010-03-10 Siemens Medical Solutions Medical image segmentation
GB2463141B (en) * 2008-09-05 2010-12-08 Siemens Medical Solutions Methods and apparatus for identifying regions of interest in a medical image
US9349184B2 (en) 2008-09-05 2016-05-24 Siemens Medical Solutions Usa, Inc. Method and apparatus for identifying regions of interest in a medical image
CN106651885A (en) * 2016-12-31 2017-05-10 中国农业大学 Image segmentation method and apparatus

Also Published As

Publication number Publication date
CA2468456A1 (en) 2003-01-09
US20040258305A1 (en) 2004-12-23
WO2003003303A3 (en) 2003-09-18
EP1399888A2 (en) 2004-03-24
GB0115615D0 (en) 2001-08-15
AU2002319397A1 (en) 2003-03-03
PL367727A1 (en) 2005-03-07

Similar Documents

Publication Publication Date Title
WO2003003303A2 (en) Image segmentation
US7536041B2 (en) 3D image segmentation
EP0965104B1 (en) Autosegmentation/autocontouring methods for use with three-dimensional radiation therapy treatment planning
US7796790B2 (en) Manual tools for model based image segmentation
US8577115B2 (en) Method and system for improved image segmentation
EP2252204B1 (en) Ct surrogate by auto-segmentation of magnetic resonance images
US7388973B2 (en) Systems and methods for segmenting an organ in a plurality of images
RU2589292C2 (en) Device and method for formation of attenuation correction map
US8527244B2 (en) Generating model data representing a biological body section
WO2012072129A1 (en) Longitudinal monitoring of pathology
Sivewright et al. Interactive region and volume growing for segmenting volumes in MR and CT images
CN106537452A (en) Device, system and method for segmenting an image of a subject.
US8094895B2 (en) Point subselection for fast deformable point-based imaging
Tan et al. An approach to extraction midsagittal plane of skull from brain CT images for oral and maxillofacial surgery
CN105678711B (en) A kind of attenuation correction method based on image segmentation
US20080285822A1 (en) Automated Stool Removal Method For Medical Imaging
CN106780492A (en) A kind of extraction method of key frame of CT pelvises image
CN114187293B (en) Oral cavity palate part soft and hard tissue segmentation method based on attention mechanism and integrated registration
Krawczyk et al. YOLO and morphing-based method for 3D individualised bone model creation
Sun et al. Stepwise local synthetic pseudo-CT imaging based on anatomical semantic guidance
US20220180525A1 (en) Organ segmentation method and system
Stough et al. Clustering on local appearance for deformable model segmentation
Bhise et al. Lung Segmentation and Nodule Detection based on CT Images using Image Processing Method
Liamsuwan et al. CTScanTool, a semi-automated organ segmentation tool for radiotherapy treatment planning
Bacher et al. Model-based segmentation of anatomical structures in MR images of the head and neck area

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG US UZ VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG US

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 2002748982

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 2002748982

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

ENP Entry into the national phase

Ref document number: 2004115104

Country of ref document: RU

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 2468456

Country of ref document: CA

WWE Wipo information: entry into national phase

Ref document number: 10482196

Country of ref document: US

WWW Wipo information: withdrawn in national office

Ref document number: 2002748982

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP