WO2006121410A1 - Method, apparatus and computer software for segmenting the brain from mr data - Google Patents


Info

Publication number
WO2006121410A1
Authority
WO
WIPO (PCT)
Prior art keywords
brain
mask
image data
determining
threshold
Prior art date
Application number
PCT/SG2005/000147
Other languages
French (fr)
Inventor
Qingmao Hu
Wieslaw Lucjan Nowinski
Original Assignee
Agency For Science, Technology And Research
Priority date
Filing date
Publication date
Application filed by Agency For Science, Technology And Research filed Critical Agency For Science, Technology And Research
Priority to PCT/SG2005/000147 priority Critical patent/WO2006121410A1/en
Publication of WO2006121410A1 publication Critical patent/WO2006121410A1/en

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/155 Segmentation; Edge detection involving morphological operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30016 Brain

Definitions

  • the present invention relates to segmenting brain tissue from non-brain tissue within image data, in particular from T1-weighted, Spoiled Gradient-Recalled (SPGR), and Fluid Attenuation Inversion Recovery (FLAIR) Magnetic Resonance (MR) data.
  • SPGR Spoiled Gradient-Recalled
  • FLAIR Fluid Attenuation Inversion Recovery
  • MR Magnetic Resonance
  • T1-weighted volume scans with thin slices have become a central component of scanning protocols for structural (morphological) imaging as they provide excellent anatomical detail.
  • multi-contrast multi-spectral
  • automated segmentation techniques are available based on voxel classification schemes in a feature space. It is also possible to image different types of tissue directly, based on the selective suppression of signals from specific tissues. In principle, these types of tissue images can be classified into a much larger number of tissue types than for single-contrast data.
  • these approaches have the disadvantages of relatively long acquisition times or low spatial resolution (> 3 mm slice thickness) and may require the registration of data from multiple acquisitions.
  • Brain segmentation from MR head volumes is challenging due to the inherent nature of MR neuro-images: noise, grey level inhomogeneity, partial volume effects, artefacts, and the closeness of brain tissue to non-brain tissue both spatially and in grey levels.
  • the first problem is the time cost.
  • the second is the requirement for sufficient training, and care during segmentation, that subjectivity is reduced to an acceptable level.
  • i) the algorithm assumes intensity inhomogeneity correction based on a correction suitable for a phantom and thus cannot handle serious inhomogeneity; ii) the threshold determination based on a Gaussian curve fitting may fail if there is serious noise or intensity inhomogeneity; iii) excessive end slices may be removed due to the 3D distance transform and distance threshold (steps 3 and 4); and iv) mask propagation in 2D is error prone.
  • the algorithm is a 2D method without 3D connected component labelling.
  • i) the contour is very rough and cannot follow the folds of the cortex; ii) like Brummer et al. 1993, the determination of thresholds based on Gaussian curve fitting may fail when extreme imaging conditions occur; iii) non-linear anisotropic diffusion filtering helps in noise removal but not against inhomogeneity, thus the algorithm is not able to handle serious intensity inhomogeneity;
  • iv) the 10 x 10 SE for erosion, removal and dilation is good at separating non-brain tissues but may be too big to maintain small brain fragments; v) the image may be over-smoothed so that spatial resolution is decreased and details are lost; and vi) three PD/T2 volumes were used for validation to reach a similarity index around 0.95.
  • "Magnetic resonance image tissue classification using a partial volume model", NeuroImage 2001; 13: 856-876, proposed segmenting the brain from T1-weighted MR images using a combination of 3D anisotropic diffusion filtering, edge detection (Marr-Hildreth edge detector), and mathematical morphology (erosion with a rhombus SE of size 1 voxel, finding the largest connected component, and dilation with the same SE).
  • edge detection: Marr-Hildreth edge detector
  • mathematical morphology: erosion with a rhombus SE of size 1 voxel, finding the largest connected component, and dilation with the same SE.
  • the problem with this method is the presence of spurious edges caused by various artefacts and intensity inhomogeneity, as well as the demand for users to tune parameters (diffusion constant and size of SE for erosion).
  • the big size for morphological opening may discard small brain fragments.
  • the curve fitting of the intensity histogram by Gaussian mixtures may fail when extreme imaging conditions occur.
  • the formula to calculate the threshold for separating brain and non-brain tissues may be ad hoc and not applicable to other data.
  • the brain mask needs to be registered with T1-weighted images for high-resolution brain tissue segmentation.
  • Smith S. M. "Fast robust automated brain extraction", Human Brain Mapping 2002; 17: 143-155, proposed finding the brain surface through a deformable model surface.
  • This approach can segment Tl-, T2-, and PD-weighted brain images.
  • inherent to the nature of the active contour method used, it may converge to a local minimum, and it is vulnerable to weak edges and complicated edges (e.g. multiple edges due to artefacts). Also, when brain and non-brain tissue are close in both space and intensity, it is hard to distinguish them.
  • a method of segmenting a brain in image data comprises: determining a second brain mask for the brain, segmenting the brain in the image data; determining a third brain mask for the brain, segmenting the brain in the image data; merging one or more corresponding portions of the second and third brain masks; and generating a fourth brain mask, segmenting the brain in the image data, the fourth brain mask comprising the merged one or more corresponding portions of the second and third brain masks.
  • a method of processing brain masks for superior image slices of a brain to exclude sagittal sinuses close to the midsagittal plane comprises: determining a first image slice, being the most superior image slice to have a first predetermined brain area; determining a second image slice, being the next image slice that is at least a predetermined distance inferior to the first image slice; for image slices between the first and second image slices determining the average and standard deviation of the grey levels along the midsagittal plane of foreground voxels of the brain mask; using the sum of the average and standard deviation to determine a grey level threshold; comparing the grey levels of the foreground voxels with the grey level threshold; and adjusting the brain masks by setting those foreground voxels to background for which the grey level is below the threshold.
  • a method of processing brain masks for image slices inferior to eyeball level to remove connections between brain tissue and non-brain tissue comprises: determining connected foreground components for a plurality of slices below eyeball level; for individual determined connected foreground components, determining an overlap index between that connected foreground component and the corresponding connected foreground component in an adjacent slice; for individual determined connected foreground components, determining minimum and maximum distances to background voxels of the head mask of the same slice; and for individual determined connected foreground components, comparing the combination of the determined overlap index and the determined minimum and maximum distances with a set of predetermined values and setting the determined connected foreground component as background or brain tissue based on the result.
  • a method of segmenting a brain in image data representing individual image slices of a brain comprises: calculating a first threshold for the image data; first binarising the image data using the first threshold; determining a first binary mask for the brain; calculating a second binarising threshold for the image data, using a portion of the first binary mask corresponding to at least one slice; re-binarising the image data using at least one threshold pair determined using the first and second thresholds; and determining a second binary mask for the brain, segmenting the brain in the image data.
  • apparatus for segmenting a brain in image data representing individual image slices of a brain comprises: second brain mask calculating means for determining a second brain mask for the brain, segmenting the brain in the image data; third brain mask calculating means for determining a third brain mask for the brain, segmenting the brain in the image data; and mask merging means for merging a plurality of corresponding slices of the second and third brain masks, to generate a fourth brain mask, segmenting the brain in the image data and comprising the merged corresponding slices of the second and third brain masks.
  • apparatus for segmenting a brain in image data representing individual image slices of a brain.
  • the apparatus comprises: first threshold calculating means for calculating a first threshold for the image data; first binarising means for first binarising the image data; first brain mask calculating means for determining a first binary mask for the brain; second threshold calculating means for calculating a second threshold for the image data, using a portion of the first binary mask corresponding to at least one slice; second binarising means, for re-binarising the image data using at least one threshold pair determined using the first and second thresholds; and second brain mask calculating means for determining a second binary mask for the brain, segmenting the brain in the image data.
  • a computer program product for segmenting a brain in image data representing individual image slices of a brain.
  • the computer program product comprises: computer readable program code for determining a second brain mask for the brain, segmenting the brain in the image data; computer readable program code for determining a third brain mask for the brain, segmenting the brain in the image data; and computer readable program code for merging a plurality of corresponding slices of the second and third brain masks, to generate a fourth brain mask, segmenting the brain in the image data and comprising the merged corresponding slices of the second and third brain masks.
  • a computer program product for segmenting a brain in image data representing individual image slices of a brain.
  • the computer program product comprises: computer readable program code for calculating a first threshold for the image data; computer readable program code for first binarising the image data; computer readable program code for determining a first binary mask for the brain; computer readable program code for calculating a second threshold for the image data, using a portion of the first binary mask corresponding to at least one slice; computer readable program code for re-binarising the image data using at least one threshold pair determined using the first and second thresholds; and computer readable program code for determining a second binary mask for the brain, segmenting the brain in the image data.
  • Embodiments of a method, apparatus and computer software segment a brain in image data representing individual image slices of the brain.
  • the preferred method involves noise filtering of the data, determining a first binary mask for volume data of the brain, determining second and third binary masks, using the first binary mask as a first iteration, merging the second and third binary masks as appropriate and performing further processing for certain superior and inferior slices.
  • Figure 2 is a schematic diagram of apparatus for performing brain segmentation in an image
  • Figure 3A is an image showing a superior axial slice without eyes
  • Figure 3B is an image showing an axial slice in which eyes are present.
  • Figure 4 is a schematic representation of a computer system suitable for performing the techniques described with reference to Figures 1 to 3.
  • the brain is the sum of grey matter (GM) and white matter (WM).
  • GM grey matter
  • WM white matter
  • CSF cerebro-spinal fluid
  • in FLAIR images, GM is the brightest, followed by WM, CSF, bone and air.
  • the brain is surrounded by a dark rim of 'background' with occasional thin bright connections to the major sinuses (namely, the sinus transversus, sigmoidus, confluens sinuum, and sagittalis superior), other blood vessels, dura, marrow, scalp, and soft tissue of the neck.
  • a preferred embodiment of the present invention uses a mixture of thresholding and morphological operations to segment the tissues in such images.
  • Automatic determination of thresholds is based on anatomical knowledge instead of ad hoc empirical formulae.
  • Noise is removed by non-linear anisotropic diffusion filtering.
  • Intensity inhomogeneity over the whole volume is divided into two kinds: intra-slice inhomogeneity and inter-slice inhomogeneity. Intra-slice inhomogeneity is tackled with range-constrained thresholding, while inter-slice inhomogeneity is handled by local thresholding.
  • Figure 1 is a flowchart showing steps involved in a preferred method of brain segmentation in an image.
  • Figure 2 is a schematic diagram of apparatus 10 for performing brain segmentation in an image, for example according to the method of the flowchart of Figure 1.
  • the input data is 3D volumetric data, which is composed of axial slices. Some of the processing may be based on the volume as a whole (e.g. finding connected components, erosion, dilation), while other processing (e.g. refining steps) may be local and partial, and typically is only applied to some and not all slices.
  • Step S102 - Input Data: Volumetric MRI data for a brain volume scan is input, with the data representing images of sequential slices.
  • the data may, for example, be provided straight from a scanner 12 or from a memory 14 (e.g. a disc or computer memory).
  • Step S104 - Filter Noise: The input volumetric data is filtered, by way of noise filter means 16, in this embodiment by 3D anisotropic non-linear diffusion filtering, to remove noise. As such filtering encourages intra-region smoothing while inhibiting inter-region smoothing, noise is efficiently removed. Other approaches to filtering are possible, such as, inter alia, wavelet transform or spatial filtering.
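  • By way of illustration, this filtering step can be sketched as a minimal 2D Perona-Malik diffusion scheme (the embodiment filters in 3D; the conduction constant kappa, time step dt, and iteration count below are assumptions, not values from the patent):

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=10, kappa=20.0, dt=0.2):
    """Simple 2D Perona-Malik diffusion: smooths within regions while
    inhibiting smoothing across strong edges (illustrative sketch)."""
    img = img.astype(float).copy()
    for _ in range(n_iter):
        # nearest-neighbour differences (zero flux at the borders)
        dn = np.zeros_like(img); dn[1:, :] = img[:-1, :] - img[1:, :]
        ds = np.zeros_like(img); ds[:-1, :] = img[1:, :] - img[:-1, :]
        de = np.zeros_like(img); de[:, :-1] = img[:, 1:] - img[:, :-1]
        dw = np.zeros_like(img); dw[:, 1:] = img[:, :-1] - img[:, 1:]
        # conduction coefficient: close to 0 across strong edges,
        # close to 1 inside homogeneous regions
        c = lambda d: np.exp(-(d / kappa) ** 2)
        img += dt * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
    return img
```

The same per-voxel update extends to 3D by adding the two neighbours along z.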
  • Step S106 - Calculate First Threshold (Th1): The first brain threshold, Th1, around an axial slice is calculated by range-constrained thresholding, in a threshold determining means 18.
  • the preferred axial slice chosen for thresholding passes through the anterior commissure (AC) and/or the posterior commissure (PC). This approach explicitly incorporates knowledge about the images to be segmented into the segmentation scheme.
  • This preferred approach to thresholding, of range-constrained thresholding involves 3 main steps: i) determining a region of interest (ROI), being the space enclosed by the skull in an axial slice; ii) within the ROI, estimating a range in the intensity histogram of the ROI, based on empirical data, which represents the minimum and maximum bounds that the background proportion can take, and which are determined by training or knowledge; and iii) selecting the threshold to maximize the between-class variance, as indicated below.
  • ROI region of interest
  • the axial slice passing through the AC or PC is derived as the ROI and used for range-constrained thresholding, otherwise an axial slice slightly superior to the eyes is obtained for range-constrained thresholding.
  • the axial slice that is used for thresholding is denoted axS1.
  • r_high = min{ i | H(i) ≥ H_b^h };
  • This third step can be performed a number of different ways.
  • σ_B²(r_t) = Pr(C1) · Pr(C2) · (μ1(r_t) − μ2(r_t))²   (1), with class C1 = {r_low, ..., r_t} and class C2 = {r_t+1, ..., r_high};
  • the optimum threshold is the r_t maximizing formula (1) for given H_b^l and H_b^h.
  • This preferred approach may be varied using weighted variances, e.g.
  • If the frequency range derived in step i) is correctly estimated then it will include a valley in the frequency distribution of intensities. This valley separates the background and the object. Thus, valley detection can be exploited to select the threshold. This has the following steps:
  • i) A frequency interval Δh is specified (e.g. 1%, or typically any other value between 1% and 5%).
  • ii) The grey level range J is partitioned into K+1 intervals with an equal frequency range Δh.
  • For the interval with integer index j, the lower end of its intensity range is denoted r_j^l and the upper end is denoted r_j^u.
  • Let A_b / A_o be the fuzzy sets of the fuzzy events "background"/"object" (which denote a fuzzy partition of the set, with membership functions μ_Ab and μ_Ao respectively).
  • the probability of these fuzzy events is given by:
  • Other approaches to threshold determination may be used, as desired, including standard approaches.
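  • The range-constrained selection of the threshold can be sketched as an Otsu-style between-class-variance search restricted to the grey-level range implied by the background-proportion bounds (a simplified sketch; the function name, the 8-bit histogram, and passing the range explicitly as r_low/r_high are assumptions):

```python
import numpy as np

def range_constrained_threshold(gray, r_low, r_high):
    """Search only thresholds in [r_low, r_high) for the grey level
    maximising the between-class variance Pr(C1)*Pr(C2)*(mu1 - mu2)**2."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                     # normalised histogram
    levels = np.arange(256)
    best_t, best_var = r_low, -1.0
    for t in range(r_low, r_high):
        w1 = p[: t + 1].sum()                 # class C1: levels <= t
        w2 = 1.0 - w1                         # class C2: levels > t
        if w1 == 0 or w2 == 0:
            continue
        mu1 = (levels[: t + 1] * p[: t + 1]).sum() / w1
        mu2 = (levels[t + 1:] * p[t + 1:]).sum() / w2
        var = w1 * w2 * (mu1 - mu2) ** 2      # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

Restricting the search range is what distinguishes this from plain Otsu thresholding: thresholds outside the plausible background proportion are never considered.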
  • Step S108 - Binarise Data: The noise-reduced volumetric data is binarised using threshold Th1, in binarising means 20.
  • If the noise-reduced data after step S104 is denoted inputF(x, y, z), where (x, y, z) are voxel coordinates and inputF(x, y, z) is the grey level at voxel (x, y, z), then binarisation of inputF(x, y, z) with Th1 yields a first binary volume bin1(x, y, z) as follows: bin1(x, y, z) = 1 if inputF(x, y, z) ≥ Th1, and 0 otherwise.
  • SE structuring element
  • the connections between brain and non-brain tissue are broken by morphological erosion of the binary volume bin1(x, y, z) with a bigger structuring element (SE) (e.g. a cuboid SE of size at least 5x5x5 voxels).
  • Cuboids are preferred to enable fast implementation.
  • Such morphological erosion removes those foreground voxels within whose neighbourhood, defined by (2s_x+1)×(2s_y+1)×(2s_z+1), there is at least one background voxel (that is, an object that is not of interest).
  • the SE size in the x, y, and z directions is denoted (2s_x+1), (2s_y+1) and (2s_z+1), respectively.
  • the largest connected component is found and is deemed to be the brain mask.
  • This mask is then dilated with the same SE to restore those voxels removed by the previous erosion, that is, to restore its original shape and yield a first brain mask Br1(x, y, z).
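  • The erosion / largest-connected-component / dilation sequence of this step can be sketched with standard morphology routines (a sketch assuming an isotropic 5x5x5 cuboid SE and scipy's default 6-connectivity for component labelling):

```python
import numpy as np
from scipy import ndimage

def extract_brain_mask(bin_vol, se_size=(5, 5, 5)):
    """Erode with a cuboid SE to break brain/non-brain bridges, keep the
    largest 3D connected component, then dilate with the same SE to
    restore the eroded boundary."""
    se = np.ones(se_size, dtype=bool)            # cuboid SE, e.g. 5x5x5
    eroded = ndimage.binary_erosion(bin_vol, structure=se)
    labels, n = ndimage.label(eroded)            # 3D connected components
    if n == 0:
        return np.zeros_like(bin_vol, dtype=bool)
    sizes = ndimage.sum(eroded, labels, index=np.arange(1, n + 1))
    largest = labels == (np.argmax(sizes) + 1)   # deemed to be the brain
    return ndimage.binary_dilation(largest, structure=se)
```

Small fragments removed entirely by the erosion never return, which is why the patent later combines masks from two SE sizes.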
  • Both bin volumes and Br masks are binary masks, typically with the bin volumes having foreground other than the brain while the Br masks have brain as foreground.
  • Step 1 Initialisation of object voxels and background voxels
  • An initialisation procedure is performed to standardise the labels of the object voxels and the background voxels to values of 1 and 0 respectively.
  • the input is a binary image and these two values can, of course, be different from 1 and 0.
  • the labels of all object voxels are initialised to a value of 1
  • the labels of all background voxels are initialised to a value of 0, in preparation of subsequent processing. All object voxels are thus labelled in preparation of subsequent steps.
  • a current label m which is used in subsequent steps, is initialised to an arbitrary value of 4.
  • the current label m is assigned to the non-marked object voxels first encountered in the raster scanning order, described below in relation to Step 2.
  • Step 2 Determining unmarked object voxels
  • the binary volume b(x, y, z) is scanned for unmarked (that is, unlabelled) object voxels, starting from voxel (0, 0, 0) in a predetermined raster scanning order. To accord with raster scanning convention, one may scan with x incremented first, followed by y and finally by z. A systematic scanning procedure avoids missing any unmarked object voxels. Other predetermined scanning procedures can be adopted if desired, provided that each object voxel is addressed in due course.
  • Step 3 Labelling object voxels
  • This step labels a connected component through enhanced recursion.
  • the numbers of voxels of the binary volume b(x, y, z) in the x, y, and z directions are L_x, L_y, and L_z respectively. Any voxel within b(x, y, z) has integer coordinates (x, y, z) satisfying 0 ≤ x ≤ (L_x−1), 0 ≤ y ≤ (L_y−1), and 0 ≤ z ≤ (L_z−1).
  • a voxel (x, y, z) is said to be outside the volume b(x, y, z) if x < 0, x > L_x−1, y < 0, y > L_y−1, z < 0, or z > L_z−1.
  • the (2n_x+1)(2n_y+1)(2n_z+1) neighbourhood of a voxel (x', y', z') is the collection of all those voxels (x, y, z) satisfying |x − x'| ≤ n_x, |y − y'| ≤ n_y, and |z − z'| ≤ n_z.
  • the sub-volume has its own coordinate system, and the relationship between the original volume b(x, y, z) and the sub-volume is a translation: the centre voxel of the sub-volume in its own coordinate system is (n_x, n_y, n_z), which corresponds to (x', y', z') of the original volume b(x, y, z) in the original coordinate system.
  • This sub-volume is also called the (2n_x+1)(2n_y+1)(2n_z+1) sub-volume of (x', y', z'), or the (2n_x+1)(2n_y+1)(2n_z+1) sub-volume from (x', y', z'). Labelling a connected component starting from the first unmarked object voxel (x_0, y_0, z_0) found in step 2 can be done in the following sub-steps.
  • Sub-step 1: set the coordinates of the working voxel w to (x_0, y_0, z_0);
  • Sub-step 2: formulate the sub-volume from the working voxel w;
  • Sub-step 3: in the sub-volume, label all internal object voxels 26-connected to the centre voxel w with the current label m, and change the corresponding voxels in b(x, y, z) to m;
  • Sub-step 4: for any voxel that is a non-internal object voxel 26-connected to w in the sub-volume, set its coordinates to be the working voxel w and go to sub-step 2.
  • Sub-step 4 is a recursion, while sub-step 3 is an iteration, and the combination of sub- steps 3 and 4 is the enhanced recursion. Further explanation of sub-step 3 is given below.
  • any object voxels are assigned the intermediate label m′ (such as −10, as noted above), while voxel (x, y, z) is itself assigned label m.
  • if any voxel (x_3, y_3, z_3)'s label is m′, check the 3×3×3 neighbourhood of (x_3, y_3, z_3). If any voxel (x_4, y_4, z_4)'s label is 1, change its label to m′. Obviously (x_4, y_4, z_4) is 26-connected to (x, y, z).
  • This procedure also ensures that the sub-volume does not block the recursion from reaching out to other parts of the image to be labelled by initiating the enhanced recursion from those non-internal object voxels that are 26-connected to (x, y, z).
  • This step prepares for labelling the next connected component, and finds the first unmarked object voxel in the raster scan order.
  • the step of updating includes incrementing the current label m (from an initial value of 4) by 1, and incrementing x, followed by y and finally z, to find the next unmarked object voxel (that is, a voxel with a label of 1).
  • the enhanced recursion of step 3 is initiated. The procedure stops when no further unmarked object voxels are found.
  • the output of the component labelling exercise is a unique label for each of the object voxels, with those voxels connected to each other having the same label, while those voxels that are non-connected have different labels.
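  • The labelling described above can be reproduced with an explicit queue in place of the enhanced recursion (an equivalent sketch: connected object voxels receive the same label, scanning in raster order; label values here start at 1 rather than 4):

```python
import numpy as np
from collections import deque

def label_26_connected(b):
    """Label each 26-connected component of a binary volume with a unique
    integer. Uses an explicit queue instead of recursion, but yields the
    same grouping: connected object voxels share a label."""
    labels = np.zeros(b.shape, dtype=int)
    # the 26 neighbour offsets of a voxel
    offsets = [(dx, dy, dz)
               for dz in (-1, 0, 1) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dx, dy, dz) != (0, 0, 0)]
    Lz, Ly, Lx = b.shape
    m = 0
    for z in range(Lz):                       # raster scan for seeds
        for y in range(Ly):
            for x in range(Lx):
                if b[z, y, x] and labels[z, y, x] == 0:
                    m += 1                    # new component label
                    labels[z, y, x] = m
                    q = deque([(z, y, x)])
                    while q:                  # flood fill the component
                        cz, cy, cx = q.popleft()
                        for dx, dy, dz in offsets:
                            nz, ny, nx = cz + dz, cy + dy, cx + dx
                            if (0 <= nz < Lz and 0 <= ny < Ly
                                    and 0 <= nx < Lx and b[nz, ny, nx]
                                    and labels[nz, ny, nx] == 0):
                                labels[nz, ny, nx] = m
                                q.append((nz, ny, nx))
    return labels, m
```

An explicit queue avoids the stack-depth limits that make naive per-voxel recursion impractical for large volumes, which is the problem the patent's enhanced recursion also addresses.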
  • Step S112 - Calculate Second Threshold (Th2): A second range-constrained threshold, Th2, in the superior region is determined, in the second threshold calculating means 24.
  • the most superior axial slice (axS2) with a foreground area of at least a first predetermined size (e.g. 100 mm²) is found. This is to determine an axial slice in the superior region.
  • the axial slice is in the inferior region.
  • the combination of two axial slices, from the superior and inferior regions can be used to determine the local thresholds along the superior/inferior direction.
  • the first binary mask could be considered to be a first iteration.
  • the area is based on a count of '1' pixels/voxels times the voxel sizes in the x, y and z directions.
  • the slice number of the axial slice that is a first predetermined distance (e.g. 20 mm) inferior to the axial slice axS2 is denoted as axS3.
  • the first predetermined distance is chosen based on experience, so that the proportion of the GM and WM is relatively stable.
  • Axial slice axS3 is used to calculate the brain threshold in the superior region of the brain.
  • the same approach to determining the threshold may be used as is used in step S106, described above.
  • the range-constrained thresholding specifies the ROI, being the space enclosed by the skull, and suitable lower and upper bounds are used (e.g. 20%, and 60%, respectively, for the second threshold) to determine the second threshold Th2.
  • Step S114 - Determine Re-Binarisation Thresholds: Low [lTh(z)] and high [hTh(z)] thresholds are determined for different axial slices z, in the low and high threshold calculating means 26.
  • if the first threshold Th1 and the second threshold Th2 are very different, different axial slices should be binarised with different thresholds.
  • the two thresholds Th1, Th2 are considered significantly different if |Th1 − Th2| / Th1 > r, where r is a fractional constant.
  • r is preferably in the range (0.2, 0.3) and may be chosen based on validation and experience.
  • Morphological dilation e.g. with cuboid SE of 5mm x 5mm x 5mm
  • Morphological dilation is performed on the first brain mask Br1 twice to yield a second binary volume bin2. For each foreground voxel in bin2, if its distance to the nearest background voxel of the head mask (the head mask is the space enclosed by the skull) is smaller than a second predetermined value, e.g. 10 mm, which is typically chosen by experience and validation, then that foreground voxel in the second binary volume bin2 is set to background.
  • a second predetermined value e.g.10 mm
  • Multiplying the second binary volume bin2 with the original volume of the noise removed MRI data provides a grey level histogram.
  • Linear interpolation is used to define thresholds for different axial slices when Th1 and Th2 are significantly different. Without loss of generality, suppose axS3 is smaller than axS1, i.e., the axial slices are counted from superior to inferior; then the threshold for different axial slices is a function of the slice number. For axial volumes, an axial slice number is the same as the z coordinate.
  • the low (lTh(z)) and high (hTh(z)) thresholds at different axial slice z are defined as follows:
  • lTh(z) is bounded below by 5, to avoid the threshold decreasing so far that the most superior axial slices are excluded due to too-small thresholds.
  • the high threshold hTh(z) is defined as highTh.
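  • The per-slice low threshold obtained by linear interpolation between Th2 (at axS3) and Th1 (at axS1) might be sketched as follows (the behaviour outside [axS3, axS1] and the clamp value min_low are assumptions for illustration):

```python
def slice_thresholds(th1, ax_s1, th2, ax_s3, n_slices, min_low=5):
    """Linearly interpolate a per-slice low threshold between Th2 at
    superior slice axS3 and Th1 at inferior slice axS1, clamped below
    by min_low so superior slices are not lost to tiny thresholds."""
    low = []
    for z in range(n_slices):
        if z <= ax_s3:                     # at or above the superior slice
            t = th2
        elif z >= ax_s1:                   # at or below the inferior slice
            t = th1
        else:                              # linear ramp in between
            frac = (z - ax_s3) / (ax_s1 - ax_s3)
            t = th2 + frac * (th1 - th2)
        low.append(max(min_low, t))
    return low
```

The interpolation gives each axial slice its own threshold, which is how the method compensates for inter-slice intensity inhomogeneity.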
  • Step S116 - Refine Binarisation: Using the different thresholds lTh(z) and hTh(z), the noise-reduced slices from step S104 are binarised, in a second binarising means 28.
  • Binarisation based on the refined thresholds is performed as follows to yield a new, third binary volume bin3(x, y, z): bin3(x, y, z) = 1 if lTh(z) ≤ inputF(x, y, z) ≤ hTh(z), and 0 otherwise.
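  • This re-binarisation with a per-slice threshold pair can be sketched as (assuming slices are indexed by the first array axis):

```python
import numpy as np

def rebinarise(vol, low_th, high_th):
    """Keep a voxel as foreground only if its grey level lies inside the
    per-slice threshold pair [lTh(z), hTh(z)]."""
    out = np.zeros(vol.shape, dtype=np.uint8)
    for z in range(vol.shape[0]):
        sl = vol[z]
        out[z] = ((sl >= low_th[z]) & (sl <= high_th[z])).astype(np.uint8)
    return out
```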
  • As in step S110, the connections between brain and non-brain tissue are broken by morphological erosion with a bigger SE. Then the largest connected component is found and is deemed to be the brain mask. This mask is then dilated with the same SE to restore its original shape.
  • the approach can be the same as is taken in step S110 described above.
  • the third binary volume bin3(x, y, z) is first eroded with a bigger size SE (e.g. a cuboid SE of at least 5x5x5 voxels).
  • SE e.g. a cuboid SE of at least 5x5x5 voxels.
  • the SE size in the x, y, and z directions is denoted (2s_x+1), (2s_y+1), (2s_z+1) respectively.
  • the largest connected component of the eroded binary volume is found, and the largest connected component is dilated with the cuboid SE (2s_x+1)×(2s_y+1)×(2s_z+1) to yield a second brain mask Br2(x, y, z).
  • the superior region above the eyeballs (that is, above z_ee) uses a decreased SE ((2s_x−1)×(2s_y−1)×(2s_z−1)) to keep the details of the convoluted brain surface, while the inferior region with z bigger than z_ee uses the bigger cuboid SE (2s_x+1)×(2s_y+1)×(2s_z+1) to break the connections between non-brain and brain tissues.
  • dilating the largest component to yield the second brain mask Br2(x, y, z) uses the same SE, (2s_x+1)×(2s_y+1)×(2s_z+1) or (2s_x−1)×(2s_y−1)×(2s_z−1) as appropriate.
  • the axial slice with the eyeballs is identified using the positions of the AC and the PC.
  • z_e0 is the axial slice at least a third predetermined distance, e.g. 10 mm, superior to the axial slices z_AC and z_PC.
  • z_e1 is the axial slice at least a fourth predetermined distance, e.g. 20 mm, inferior to the axial slices z_AC and z_PC.
  • For each axial slice between z_e0 and z_e1, find its ROI (the space enclosed by the skull) and divide the y extension into 4 equal parts. This number of parts is chosen because empirical data shows that the eyes generally fall in the first part. In the first part (where the eyes lie), calculate the ratio of the number of foreground (GM+WM) voxels to the number of voxels of the ROI.
  • GM+WM foreground
  • Figure 3A is an image with 3 values (dark for background, grey for head, and white for brain) showing a superior axial slice without eyes, and Figure 3B an axial slice in which eyes are present.
  • Both Figures 3A and 3B have front quarter lines 40, 42.
  • the ROI area in front of the front quarter line 40 of Figure 3A comprises the light area 50 and the dark area 52, while the light area 50 comprises the GM+WM.
  • the ROI area in front of the front quarter line 42 comprises the light area 54 and the dark area 56, while the light area 54 comprises the GM+WM.
  • the first axial slice between z_e0 and z_e1 which has a ratio smaller than half of the maximum ratio of all these axial slices (between z_e0 and z_e1) is the axial slice with the eyes present; this axial slice is denoted z_ee.
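The eye-slice detection rule above can be sketched in Python (a sketch under assumed conventions: volumes are indexed [z][y][x] with anterior at small y, and `fg`/`roi` are boolean arrays; all names are illustrative):

```python
import numpy as np

def find_eye_slice(fg, roi, z0, z1):
    """For each axial slice z in [z0, z1], compute the ratio of foreground
    (GM+WM) voxels to ROI voxels within the anterior quarter of the ROI's
    y extent; return the first slice whose ratio drops below half of the
    maximum ratio over the range (taken as the slice where eyes appear)."""
    ratios = []
    for z in range(z0, z1 + 1):
        ys = np.where(roi[z].any(axis=1))[0]    # rows (y) covered by the ROI
        if ys.size == 0:
            ratios.append(0.0)
            continue
        q = ys[0] + (ys[-1] - ys[0] + 1) // 4   # front quarter line
        part_roi = roi[z][:q]
        part_fg = fg[z][:q] & part_roi
        ratios.append(part_fg.sum() / max(part_roi.sum(), 1))
    half_max = max(ratios) / 2.0
    for i, r in enumerate(ratios):
        if r < half_max:
            return z0 + i
    return None
```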
  • In step S110 the connections between brain and non-brain tissue are broken by morphological erosion with a smaller size of SE. Then the largest connected component is found and is deemed to be the brain mask. This mask is then dilated with the same SE to restore its original shape.
  • the approach can be similar to that taken in step S110 described above (with the main difference being in the SE size).
  • the third binary volume bin3(x, y, z) is first eroded with a smaller size SE (e.g. a cuboid SE of size 2mm x 2mm x 2mm), followed by finding the largest connected component.
  • the largest connected component is dilated with the same SE to yield a third brain mask Br3(x, y, z).
  • Step S122 - Merge Second and Third Binary Masks Br2(x, y, z) and Br3(x, y, z): The second brain mask Br2(x, y, z) and the third brain mask Br3(x, y, z) are merged in a mask merging means 34.
  • the second brain mask Br2(x, y, z) is good at breaking the connection between the brain and non-brain tissues, while the third brain mask Br3(x, y, z) is good at maintaining small brain fragments.
  • This step finds the foreground connected components of the difference between Br3(x, y, z) and Br2(x, y, z) and adds those foreground components to Br2(x, y, z), to form a fourth brain mask Br4(x, y, z).
  • the foreground components are only added when the minimum distance of the foreground component, of the difference image between Br3 and Br2, to the nearest background voxels of the head mask (the ROI determined in step S106) is bigger than a fifth predetermined value, e.g. 10 mm, as may be determined from experience.
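The merge rule can be sketched as follows (a sketch assuming scipy.ndimage; the distance to the head-mask background is taken as a Euclidean distance transform, and all names and the isotropic 1 mm spacing are illustrative):

```python
import numpy as np
from scipy import ndimage

def merge_masks(br2, br3, head_mask, min_dist_mm=10.0, spacing=(1.0, 1.0, 1.0)):
    """Sketch of step S122: components present in Br3 but not in Br2 are
    added to Br2 only when their minimum distance to the background of the
    head mask exceeds min_dist_mm (i.e. they lie deep inside the head)."""
    # distance of every voxel to the nearest head-mask background voxel
    dist = ndimage.distance_transform_edt(head_mask, sampling=spacing)
    diff = br3 & ~br2
    labels, n = ndimage.label(diff)
    br4 = br2.copy()
    for i in range(1, n + 1):
        comp = labels == i
        if dist[comp].min() > min_dist_mm:
            br4 |= comp
    return br4
```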
  • Step S124 - Process Superior and Inferior Axial Slices: Special processing of certain superior axial slices and certain axial slices inferior to the eyes occurs in the superior and inferior slices processing means 36.
  • the inputs include both the fourth binary mask Br4 and the volume data after noise removal.
  • an overlap index is computed as follows: for all the foreground voxels of this component (the number of voxels is denoted n_cur), count the number n_pr of object voxels with the same (x, y) coordinates in the previous axial slice z_ee; the overlap index of this component is then n_pr/n_cur.
  • For a foreground connected component of the current axial slice z_ee+1: if its minimum distance to the nearest background voxels of the head mask is smaller than a seventh predetermined distance, e.g. 7 mm, and its maximum distance is smaller than an eighth predetermined distance, e.g. 15 mm, and the overlap index of this component is smaller than a first predetermined constant, e.g. 0.4, then this foreground component is discarded (set as background).
  • the component is discarded (it is highly probably non-brain tissue); otherwise this component is set as an object component. The minimum and maximum y coordinates of the remaining object voxels of the current axial slice are then found and assigned to yminP and ymaxP respectively.
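The overlap index itself is a simple per-component count (the subscript names in the extracted text are garbled; `n_cur`/`n_pr` below follow the current-slice/previous-slice reading and are illustrative):

```python
import numpy as np

def overlap_index(component, prev_slice_mask):
    """Overlap index of a 2D foreground component against the previous
    axial slice: n_pr / n_cur, where n_cur is the component's voxel count
    and n_pr counts object voxels at the same (x, y) in the previous slice."""
    n_cur = component.sum()
    n_pr = (component & prev_slice_mask).sum()
    return n_pr / n_cur if n_cur else 0.0
```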
  • Step S126 - Output Brain Mask: The final brain mask slices from the mask merging means 34 and the superior and inferior slices processing means 36 are collated and output from the collating and output means 38 as a sequence of binarised images.
  • a third example is in brain atrophy estimation in diseased subjects; after brain/non-brain segmentation, brain volume is measured at a single time point with respect to some normalizing volume such as skull or head size; alternatively, images from two or more time points are compared, to estimate how the brain has changed over time.
  • a fourth application is the removal of strong ghosting effects that can occur in fMRI. These artefacts can confound motion correction, global intensity normalization, and registration to a high-resolution image.
  • the above-described embodiment can handle serious intensity inhomogeneity and/or noise exhibited in clinical MR images robustly, due to local thresholding and breaking the connections between brain and non-brain tissues while maintaining small brain fragments through the combination of morphological processing with different sizes of structuring elements.
  • the various values used and exemplified in the above-described method are based on a brain image of a typical adult human, and may be generated by experiment and experience. Of course, other values may be used as appropriate. For children or atypical adult humans, different values may be used.
  • the invention can also be used on non-human subjects, for example mammals, especially primates, again using some or all different values, as may be determined empirically.
  • a module and in particular the module's functionality, can be implemented in either hardware or software.
  • a module is a process, program, or portion thereof, that usually performs a particular function or related functions.
  • a module is a functional hardware unit designed for use with other components or modules.
  • a module may be implemented using discrete electronic components, or it can form a portion of an entire electronic circuit such as an Application Specific Integrated Circuit (ASIC). Numerous other possibilities exist.
  • Figure 4 is a schematic representation of a computer system 200 suitable for performing the techniques described with reference to Figures 1 to 3.
  • a computer 202 is loaded with suitable software in a memory, which software can be used to perform steps in a process that implement the techniques described herein (e.g. the steps of Figure 1).
  • MRI data can be input, and segmentation results obtained using such a computer system 200.
  • This computer software executes under a suitable operating system installed on the computer system 200.
  • the computer software involves a set of programmed logic instructions that are able to be interpreted by a processor, such as a CPU, for instructing the computer system 200 to perform predetermined functions specified by those instructions.
  • the computer software can be an expression recorded in any language, code or notation, comprising a set of instructions intended to cause a compatible information processing system to perform particular functions, either directly or after conversion to another language, code or notation.
  • the computer software is programmed by a computer program comprising statements in an appropriate computer language.
  • the computer program is processed using a compiler into computer software that has a binary format suitable for execution by the operating system.
  • the computer software is programmed in a manner that involves various software components, or code means, that perform particular steps in the process of the described techniques.
  • the components of the computer system 200 include: the computer 202, input and output devices such as a keyboard 204, a mouse 206 and an external memory device 208 (e.g. one or more of a floppy disc drive, a CD drive, a DVD drive and a flash memory drive) and a display 210, as well as network connections for connecting to the Internet 212.
  • the computer 202 includes: a processor 222, a first memory such as a ROM 224, a second memory such as a RAM 226, a network interface 228 for connecting to external networks, an input/output (I/O) interface 230 for connecting to the input and output devices, a video interface 232 for connecting to the display, a storage device such as a hard disc 234, and a bus 236.
  • the processor 222 executes the operating system and the computer software executing under the operating system.
  • the random access memory (RAM) 226, the read-only memory (ROM) 224 and the hard disc 234 are used under direction of the processor 222.
  • the video interface 232 is connected to the display 210 and provides video signals for display on the display 210. User input to operate the computer 202 is provided from the keyboard 204 and the mouse 206.
  • the internal storage device is exemplified here by a hard disc 234 but can include any other suitable non-volatile storage medium.
  • Each of the components of the computer 202 is connected to the bus 236 that includes data, address, and control buses, to allow these components to communicate with each other.
  • the computer system 200 can be connected to one or more other similar computers via the Internet, LANs or other networks.
  • the computer software program may be provided as a computer program product. During normal use, the program may be stored on the hard disc 234. However, the computer software program may be provided recorded on a portable storage medium, e.g. a CD-ROM read by the external memory device 208. Alternatively, the computer software can be accessed directly from the network 212.
  • a user can interact with the computer system 200 using the keyboard 204 and the mouse 206 to operate the programmed computer software executing on the computer 202.
  • the computer system 200 is described for illustrative purposes: other configurations or types of computer systems can be equally well used to implement the described techniques.
  • the foregoing is only an example of a particular type of computer system suitable for implementing the described techniques.

Abstract

A method, apparatus and computer software are disclosed which segment a brain in MR image data [S102] representing individual image slices of the brain. The method involves noise filtering [S104] the data, determining a first binary mask [S110] for volume data of the brain, determining second [S118] and third binary masks [S120], merging the masks [S122] as appropriate and performing further processing for certain superior and inferior slices [S124].

Description

Method, Apparatus and Computer Software For Segmenting the Brain From MR Data
Field of the Invention
The present invention relates to segmenting brain tissue from non-brain tissue within image data, in particular from T1-weighted, Spoiled Gradient-Recalled (SPGR), and Fluid Attenuation Inversion Recovery (FLAIR) Magnetic Resonance (MR) data.
Background
There are many applications related to brain imaging that either require, or benefit from, the ability to segment brain tissue from non-brain tissue accurately within an image. T1-weighted volume scans with thin slices have become a central component of scanning protocols for structural (morphological) imaging as they provide excellent anatomical detail.
For multi-contrast (multi-spectral) data, automated segmentation techniques are available based on voxel classification schemes in a feature space. It is also possible to image different types of tissue directly, based on the selective suppression of signals from specific tissues. In principle, these types of tissue images can be classified into a much larger number of tissue types than for single-contrast data. However, these approaches have the disadvantages of relatively long acquisition times or low spatial resolution (> 3 mm slice thickness) and may require the registration of data from multiple acquisitions.
Brain segmentation from MR head volumes is challenging due to the inherent nature of MR neuro-images: noise, grey level inhomogeneity, partial volume effects, artefacts, and the closeness of brain tissue to non-brain tissue both spatially and in grey levels.
To date, there have been three main methods proposed for achieving brain/non-brain segmentation: manual, threshold-with-morphology, and surface-model-based. Manual brain/non-brain segmentation methods are, as a result of the complex information understanding involved, probably more accurate than fully automated methods are ever likely to achieve. This is the level in image processing where human expertise is superior. At the lowest or most localized level (for example, noise reduction or tissue-type segmentation), human beings often cannot improve on the numerical accuracy and objectivity of a computational approach. The same also often holds at the highest or most global level, for example in image registration; human beings cannot in general take in enough of the whole-image information to improve on the overall fit that a good registration programme can achieve. With brain segmentation, however, the proper size of the image 'neighbourhood' that is considered when outlining the brain surface is ideally suited to manual processing. However, there are serious enough problems with manual segmentation to prevent it from being a viable solution in most applications. The first problem is the time cost. The second is the requirement for sufficient training, and care during segmentation, that subjectivity is reduced to an acceptable level.
Most of the automatic and semi-automatic methods belong to the second type, threshold-with-morphology.
Brummer M.E., Mersereau R.M., Eisner R.L., Lewine R.R.J., "Automatic detection of brain contours in MRI dataset". IEEE Transactions on Medical Imaging 1993; 12(2): 153-166, proposed segmenting grey matter (GM) plus white matter (WM) through: 1) generating a head mask by modelling the background noise as a Rayleigh distribution and curve fitting;
2) finding two grey value thresholds via Gaussian curve fitting of a noise- removed histogram of the head image and determining initial brain segmentation through thresholding; 3) breaking connections between brain and non-brain tissues via a four-step morphological region-splitting algorithm: erosion with a circular structuring-element (SE) of radius 2 pixels, labelling connected components in the eroded binary mask, dilation of the labelled mask with circular SE of radius 3 pixels, and finding the intersection between the dilated mask and the original binary image; and
4) propagating brain masks in 2D slices based on an overlap index to get rid of clutter objects.
There are various problems with this approach: i) the algorithm assumes intensity inhomogeneity correction based on a correction suitable for a phantom and thus cannot handle serious inhomogeneity; ii) the threshold determination based on a Gaussian curve fitting may fail if there is serious noise or intensity inhomogeneity; iii) excessive end slices may be removed due to the 3D distance transform and distance threshold (steps 3 and 4); and iv) mask propagation in 2D is error prone.
Atkins M.S., Mackiewich B.T., "Fully automatic segmentation of the brain in MRI", IEEE Transactions on Medical Imaging 1998; 17(1): 98-107, proposed segmenting the brain (WM, GM and internal Cerebro-Spinal Fluid [CSF]) in T2-weighted or proton density (PD) MR images. The brain is segmented in three steps:
1) generating a head mask in a similar way as Brummer et al 1993 ;
2) generating an initial brain mask through 2D non-linear anisotropic diffusion filtering, determination of two thresholds based on Gaussian curve fitting of the grey level histogram, refinement of the binary mask through morphological erosion followed by removing object components whose centroids fall outside a bounding box defined by the head mask and morphological dilation; and
3) refining the brain mask through an active contour algorithm to locate the boundary between the brain and the intra-cranial cavity.
Basically the algorithm is a 2D method without 3D connected component labelling. There are various problems with this approach: i) the contour is very rough and cannot follow the fold of the cortex; ii) like Brummer et al. 1993, the determination of thresholds based on Gaussian curve fitting may fail when extreme imaging conditions occur; iii) non-linear anisotropic diffusion filtering helps in noise removal but not against inhomogeneity, thus the algorithm is not able to handle serious intensity inhomogeneity; iv) the 10 x 10 SE for erosion, removal and dilation is good to separate non-brain tissues but may be too big to maintain small brain fragments; v) the image may be over-smoothed so that spatial resolution is decreased and details are lost; and vi) three PD/T2 volumes were used for validation to reach a similarity index around 0.95.
Lemieux L., Hagemann G., Krakow K., Woermann F.G., "Fast, accurate, and reproducible automatic segmentation of the brain in T1-weighted volume MRI data", Magnetic Resonance in Medicine 1999; 42: 127-135, proposed segmenting brain image data based on thresholding, morphological opening, and connected component analysis. It assumes that the intensity inhomogeneity has been corrected and thus is not able to handle serious intensity inhomogeneity. The threshold for the initial brain mask is purely based on heuristics and may fail in the case of gradual grey level change. Even if the threshold can be found, it may take several rounds of binarisation, opening (time-consuming), and connected component analysis. The big size of the SE (sphere of radius 3 mm) for breaking connections between brain and non-brain tissues may discard small brain fragments.
Stokking R., Vincken K.L., Viergever M.A., "Automatic morphology-based brain segmentation (MBRASE) from MRI-T1 data", NeuroImage 2000; 12: 726-738, proposed a heuristic procedure to determine thresholds for brain segmentation. The procedure to find thresholds may fail as reported (2 out of 30 cases). The procedure to do segmentation is very time consuming, as it iterates over binarisation and morphological operations. The small SE may fail to break connections between brain and non-brain tissues. It seems the algorithm is very sensitive to serious noise and/or intensity inhomogeneity.
Shattuck D.W., Sandor-Leahy S.R., Schaper K.A., Rottenberg D.A., Leahy R.M.,
"Magnetic resonance image tissue classification using a partial volume model", NeuroImage 2001; 13: 856-876, proposed segmenting the brain from T1-weighted MR images using a combination of 3D anisotropic diffusion filtering, edge detection (Marr-Hildreth edge detector), and mathematical morphology (erosion with a rhombus SE of size 1 voxel, finding the largest connected component, and dilation with the same SE). The problem with this method is the presence of spurious edges caused by various artefacts and intensity inhomogeneity, as well as the demand for users to tune parameters (diffusion constant and size of SE for erosion). Shan Z.Y., Yue G.H., Liu J.Z., "Automated histogram-based brain segmentation in T1-weighted three-dimensional magnetic resonance head images", NeuroImage 2002; 17: 1587-1598, presented brain segmentation of T1-weighted images in three steps: 1) foreground/background thresholding using a threshold derived from Otsu's method;
2) disconnection of brain from skull and other head tissue using a threshold derived from Gaussian fitting of the intensity histogram and morphological opening with a spherical SE of radius 3 voxels; and 3) removal of non-brain tissues through thresholding by getting thresholds from
Gaussian curve fitting of the intensity histogram and a morphological opening using a 3D spherical SE of a radius of five voxels.
This approach cannot handle serious noise and inhomogeneity. The big size for morphological opening may discard small brain fragments. The curve fitting of the intensity histogram by Gaussian mixtures may fail when extreme imaging conditions occur. The formula to calculate the threshold for separating brain and non-brain tissues may be ad hoc and not applicable to other data.
Kovacevic N., Lobaugh N.J., Bronskill M.J., Levine B., Feinstein A., Black S.E.,
"A robust method for extraction and automatic segmentation of brain images", NeuroImage 2002; 17: 1087-1100, proposed obtaining a complete brain mask (including all the CSF, GM, and WM) from two volumes (T2-weighted and PD-weighted) of the same subject acquired at the same session without the need for registration in a semi-automatic way. First, a binarisation is performed based on an empirical ellipse that defines the 2D (T2- and PD-weighted) threshold curve. Then the largest connected component for each slice is identified as the initial brain mask. This seed-growing step is accurate on all but a few slices, where eyes or other extra-cerebral tissues will remain. For such circumstances, the image is edited manually to correct these errors. This is not an automatic solution. The brain mask needs to be registered with T1-weighted images for high-resolution brain tissue segmentation. Smith S.M., "Fast robust automated brain extraction", Human Brain Mapping 2002; 17: 143-155, proposed finding the brain surface through a deformable surface model. This approach can segment T1-, T2-, and PD-weighted brain images. However, inherent to the nature of the active method that is used, it may converge to a local minimum, and is vulnerable to weak edges and complicated edges (e.g. multiple edges due to artefacts). Also when brain and non-brain tissue are close in both space and intensities, it is hard to distinguish them.
It is an aim of the present invention to provide a useful approach to brain segmentation in images. Usefully, it may avoid or at least partially alleviate some of the problems encountered in at least some of the known systems.
Summary
According to one aspect of the present invention, there is provided a method of segmenting a brain in image data. The method comprises: determining a second brain mask for the brain, segmenting the brain in the image data; determining a third brain mask for the brain, segmenting the brain in the image data; merging one or more corresponding portions of the second and third brain masks; and generating a fourth brain mask, segmenting the brain in the image data, the fourth brain mask comprising the merged one or more corresponding portions of the second and third brain masks.
According to a second aspect of the present invention, there is provided a method of processing brain masks for superior image slices of a brain to exclude sagittal sinuses close to the midsagittal plane. The method comprises: determining a first image slice, being the most superior image slice to have a first predetermined brain area; determining a second image slice, being the next image slice that is at least a predetermined distance inferior to the first image slice; for image slices between the first and second image slices determining the average and standard deviation of the grey levels along the midsagittal plane of foreground voxels of the brain mask; using the sum of the average and standard deviation to determine a grey level threshold; comparing the grey levels of the foreground voxels with the grey level threshold; and adjusting the brain masks by setting those foreground voxels to background for which the grey level is below the threshold. According to a third aspect of the present invention, there is provided a method of processing brain masks for image slices inferior to eyeball level to remove connections between brain tissue and non-brain tissue. The method comprises: determining connected foreground components for a plurality of slices below eyeball level; for individual determined connected foreground components, determining an overlap index between that connected foreground component and the corresponding connected foreground component in an adjacent slice; for individual determined connected foreground components, determining minimum and maximum distances to background voxels of the head mask of the same slice; and for individual determined connected foreground components, comparing the combination of the determined overlap index and the determined minimum and maximum distances with a set of predetermined values and setting the determined connected foreground component as background or brain tissue based on the result.
According to a fourth aspect of the present invention, there is provided a method of segmenting a brain in image data representing individual image slices of a brain. The method comprises: calculating a first threshold for the image data; first binarising the image data using the first threshold; determining a first binary mask for the brain; calculating a second binarising threshold for the image data, using a portion of the first binary mask corresponding to at least one slice; re-binarising the image data using at least one threshold pair determined using the first and second thresholds; and determining a second binary mask for the brain, segmenting the brain in the image data.
According to a fifth aspect of the present invention, there is provided apparatus operable according to any one of the above methods.
According to a sixth aspect of the present invention, there is provided a computer program product operable according to any one of the above methods.
According to a seventh aspect of the present invention, there is provided apparatus for segmenting a brain in image data representing individual image slices of a brain. The apparatus comprises: second brain mask calculating means for determining a second brain mask for the brain, segmenting the brain in the image data; third brain mask calculating means for determining a third brain mask for the brain, segmenting the brain in the image data; and mask merging means for merging a plurality of corresponding slices of the second and third brain masks, to generate a fourth brain mask, segmenting the brain in the image data and comprising the merged corresponding slices of the second and third brain masks.
According to an eighth aspect of the present invention, there is provided apparatus for segmenting a brain in image data representing individual image slices of a brain. The apparatus comprises: first threshold calculating means for calculating a first threshold for the image data; first binarising means for first binarising the image data; first brain mask calculating means for determining a first binary mask for the brain; second threshold calculating means for calculating a second threshold for the image data, using a portion of the first binary mask corresponding to at least one slice; second binarising means, for re-binarising the image data using at least one threshold pair determined using the first and second thresholds; and second brain mask calculating means for determining a second binary mask for the brain, segmenting the brain in the image data.
According to a ninth aspect of the present invention, there is provided a computer program product for segmenting a brain in image data representing individual image slices of a brain. The computer program product comprises: computer readable program code for determining a second brain mask for the brain, segmenting the brain in the image data; computer readable program code for determining a third brain mask for the brain, segmenting the brain in the image data; and computer readable program code for merging a plurality of corresponding slices of the second and third brain masks, to generate a fourth brain mask, segmenting the brain in the image data and comprising the merged corresponding slices of the second and third brain masks.
According to a tenth aspect of the present invention, there is provided a computer program product for segmenting a brain in image data representing individual image slices of a brain. The computer program product comprises: computer readable program code for calculating a first threshold for the image data; computer readable program code for first binarising the image data; computer readable program code for determining a first binary mask for the brain; computer readable program code for calculating a second threshold for the image data, using a portion of the first binary mask corresponding to at least one slice; computer readable program code for re-binarising the image data using at least one threshold pair determined using the first and second thresholds; and computer readable program code for determining a second binary mask for the brain, segmenting the brain in the image data.
Embodiments of a method, apparatus and computer software segment a brain in image data representing individual image slices of the brain. The preferred method involves noise filtering of the data, determining a first binary mask for volume data of the brain, determining second and third binary masks, using the first binary mask as a first iteration, merging the second and third binary masks as appropriate and performing further processing for certain superior and inferior slices.
Introduction to the Drawings
The invention is further described by way of non-limitative example with reference to the accompanying drawings, in which:-
Figure 1 is a flowchart showing steps involved in a preferred method of brain segmentation in an image;
Figure 2 is a schematic diagram of apparatus for performing brain segmentation in an image;
Figure 3A is an image showing a superior axial slice without eyes; Figure 3B is an image showing an axial slice in which eyes are present; and
Figure 4 is a schematic representation of a computer system suitable for performing the techniques described with reference to Figures 1 to 3.
Description
The brain is the sum of grey matter (GM) and white matter (WM). In T1-weighted or SPGR MR images, WM is brighter than GM, and GM is brighter than cerebro-spinal fluid (CSF), bone and air. In FLAIR images, GM is the brightest, followed by WM, CSF, bone and air. In such images, the brain is surrounded by a dark rim of 'background' with occasional thin bright connections to the major sinuses (namely, the sinus transversus, sigmoidus, confluens sinuum, and sagittalis superior), other blood vessels, dura, marrow, scalp, and soft tissue of the neck.
A preferred embodiment of the present invention uses a mixture of thresholding and morphological operations to segment the tissues in such images. Automatic determination of thresholds is based on anatomical knowledge instead of ad hoc empirical formulae. Noise is removed by non-linear anisotropic diffusion filtering. Intensity inhomogeneity over the whole volume is divided into two kinds: intra-slice inhomogeneity and inter-slice inhomogeneity. Intra-slice inhomogeneity is tackled with range-constrained thresholding, while inter-slice inhomogeneity is handled by local thresholding. Connections in the images between brain tissues and non-brain tissues are broken by morphological operations with a bigger size of structuring element (SE), while small brain fragments are maintained by morphological operations with a smaller size of SE. Regions prone to error, like axial slices inferior to the eyes and the most superior axial slices with GM/WM, are specially processed by incorporating anatomical knowledge.
Figure 1 is a flowchart showing steps involved in a preferred method of brain segmentation in an image. Figure 2 is a schematic diagram of apparatus 10 for performing brain segmentation in an image, for example according to the method of the flowchart of Figure 1.
The input data is 3D volumetric data, composed of axial slices. Some of the processing may be based on the volume as a whole (e.g. finding connected components, erosion, dilation), while other processing (e.g. refining steps) may be local and partial, and typically is only applied to some and not all slices.
Step S102 - Input Data: Volumetric MRI data for a brain volume scan is input, with the data representing images of sequential slices. The data may, for example, be provided straight from a scanner 12 or from a memory 14 (e.g. a disc or computer memory). Step S104 - Filter Noise: The input volumetric data is filtered, by way of noise filter means 16, in this embodiment by 3D anisotropic non-linear diffusion filtering, to remove noise. As such filtering encourages intra-region smoothing while inhibiting inter-region smoothing, noise is efficiently smoothed. Other approaches to filtering are possible, such as, inter alia, wavelet transform or spatial filtering.
Step S106 - Calculate First Threshold (Th1): The first brain threshold, Th1, for an axial slice is calculated by range-constrained thresholding, in a threshold determining means 18.
The preferred axial slice chosen for thresholding passes through the anterior commissure (AC) and/or the posterior commissure (PC). This approach explicitly incorporates knowledge about the images to be segmented into the segmentation scheme.
The preferred method for thresholding is described in more detail in International Patent Application PCT/SG2004/000403, filed 9 December 2004, entitled "Methods and Apparatus for Binarising Images", designating all states including the US, the whole contents of which are herein incorporated by reference, and which method is outlined below.
This preferred approach to thresholding, range-constrained thresholding, involves 3 main steps: i) determining a region of interest (ROI), being the space enclosed by the skull in an axial slice; ii) within the ROI, estimating a range in the intensity histogram of the ROI, based on empirical data, which represents the minimum and maximum bounds that the background proportion can take, and which are determined by training or knowledge; and iii) selecting the threshold to maximize the between-class variance, as indicated below.
When the AC or PC is available, the axial slice passing through the AC or PC is derived as the ROI and used for range-constrained thresholding; otherwise an axial slice slightly superior to the eyes is obtained for range-constrained thresholding. The axial slice that is used for thresholding is denoted axS1. Let h(i) denote the frequency of grey level i; the accumulative frequency H(i) is

H(i) = h(0) + h(1) + ... + h(i).
The following steps yield the optimum threshold. 1) Specify two percentages Hlb and Hhb, corresponding, respectively, to lower and upper frequency bounds of the background in the ROI (e.g. 12% and 30% or, more restrictively, 14% and 28%, respectively), the percentages being based on prior knowledge or tests conducted on ACs or PCs or whatever else is used for the ROI; 2) Calculate rlow and rhigh, which are the grey levels corresponding to the background lower and upper bounds Hlb and Hhb respectively:

rlow = min{i | H(i) ≥ Hlb};

rhigh = min{i | H(i) ≥ Hhb};
3) Determine the threshold using an algorithm which operates on the frequencies within the selected range from rlow to rhigh.
This third step can be performed a number of different ways.
First (Preferred) Approach - the range-constrained variance method.
Calculate the between-class variance with respect to a variable rk:

Pr(C1) x D(C1) + Pr(C2) x D(C2)    (1)

where rk falls within (rlow, rhigh),

Pr(C1) = h(rlow) + ... + h(rk), Pr(C2) = h(rk+1) + ... + h(rhigh),

D(C1) = (μ0 − μT)², D(C2) = (μ1 − μT)²,

where μT is the frequency-weighted mean grey level ∑ i×h(i) over the whole range rlow to rhigh, normalised by the total frequency in that range; μ0 is the corresponding mean over grey levels rlow to rk, normalised by Pr(C1); and μ1 is the corresponding mean over grey levels rk+1 to rhigh, normalised by Pr(C2).

The optimum threshold is the rk maximizing formula (1) for given Hlb and Hhb.
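The range-constrained variance maximisation above can be sketched as follows. This is an illustrative implementation, not the patented one; the function name and histogram conventions are assumptions.

```python
import numpy as np

def range_constrained_threshold(h, r_low, r_high):
    """Pick rk in (r_low, r_high) maximising the between-class variance
    Pr(C1)*D(C1) + Pr(C2)*D(C2) of formula (1), with class C1 covering
    grey levels r_low..rk and class C2 covering rk+1..r_high."""
    h = np.asarray(h, dtype=float)
    ii = np.arange(r_low, r_high + 1)
    hh = h[r_low:r_high + 1]
    mu_t = (ii * hh).sum() / hh.sum()          # overall mean within the range
    best_rk, best_var = r_low + 1, -1.0
    for rk in range(r_low + 1, r_high):
        left = ii <= rk
        p1, p2 = hh[left].sum(), hh[~left].sum()
        if p1 == 0 or p2 == 0:
            continue
        mu0 = (ii[left] * hh[left]).sum() / p1     # class-1 mean
        mu1 = (ii[~left] * hh[~left]).sum() / p2   # class-2 mean
        var = p1 * (mu0 - mu_t) ** 2 + p2 * (mu1 - mu_t) ** 2
        if var > best_var:
            best_var, best_rk = var, rk
    return best_rk
```

On a bimodal histogram with the background bounds bracketing the valley, the maximiser falls near the valley between the two modes.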
This preferred approach may be varied using weighted variances, e.g.
Pr(C1) x D(C1) x W1 + Pr(C2) x D(C2) x W2, where W1, W2 are two positive constants selected by the user and representing the weights of the two respective class variances.
Second Approach - range-constrained least valley detection (RCLVD)
If the frequency range derived in step 1) is correctly estimated then it will include a valley in the frequency distribution of intensities. This valley separates the background and the object. Thus, valley detection can be exploited to select the threshold. This has the following steps:
i) A frequency interval δh is specified (e.g. 1%, or typically any other value between 1% and 5%). ii) The grey level range [rlow, rhigh] is partitioned into K+1 intervals, each with an equal frequency range δh. For an interval labelled by integer index j, the lower end of its intensity range is denoted r1^j and the upper end is denoted r2^j. Thus:

r1^1 = rlow, r2^1 = min{i | H(i) ≥ H(rlow) + δh};

r1^j = r2^(j-1), r2^j = min{i | H(i) ≥ H(r1^j) + δh}, for j = 2, ..., K+1;

with the partition ending when the accumulated frequency reaches that at rhigh. iii) The average frequency h̄^j for each of the intervals j is calculated as the sum of h(i) from i = r1^j to r2^j, divided by (r2^j − r1^j + 1). iv) Let J denote the interval for which h̄^j is a minimum. The threshold of this RCLVD method may be selected to be any value in the range r1^J to r2^J, such as Th1 = (r1^J + r2^J)/2.
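The RCLVD partition and valley search can be sketched as below. The function name and the handling of the final partial interval are assumptions; intervals share endpoints (r1^(j+1) = r2^j) as in the text.

```python
import numpy as np

def rclvd_threshold(h, r_low, r_high, delta_h=0.01):
    """Range-constrained least valley detection: split [r_low, r_high]
    into intervals each holding a fraction delta_h of the total
    frequency, then return the midpoint of the interval with the
    lowest average frequency per grey level (the 'valley')."""
    h = np.asarray(h, dtype=float)
    H = np.cumsum(h) / h.sum()                    # accumulative frequency H(i)
    intervals = []
    r1 = r_low
    while r1 < r_high:
        above = np.nonzero(H >= H[r1] + delta_h)[0]
        r2 = int(above[0]) if above.size else r_high
        r2 = max(min(r2, r_high), r1 + 1)         # keep the partition advancing
        intervals.append((r1, r2))
        r1 = r2                                   # r1^(j+1) = r2^j
    # average frequency per grey level in each interval
    avg = [h[a:b + 1].sum() / (b - a + 1) for a, b in intervals]
    a, b = intervals[int(np.argmin(avg))]
    return (a + b) // 2                           # Th1 = (r1^J + r2^J)/2
```

A correctly estimated range yields one wide, sparse interval spanning the valley, whose midpoint becomes the threshold.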
Third Approach - range-constrained fuzzy c-partition thresholding (RCFCP)
In general terms, let Ab / Ao be the fuzzy sets of the fuzzy events "background"/"object", which denote a fuzzy partition of the grey level set {rlow, rlow+1, ..., rhigh}, with membership functions μAb and μAo respectively. The probability of these fuzzy events is given by:

P(Ai) = ∑ μAi(j) x h(j), summed over j from rlow to rhigh,

where Ai ∈ {Ab, Ao}, and the weighted entropy with this fuzzy partition can be calculated as: S(W1, W2) = W1 x P(Ab) x log P(Ab) + W2 x P(Ao) x log P(Ao), where W1 and W2 are two positive constants, and log(.) is the natural logarithm.
Let rlow ≤ a < c ≤ rhigh. The membership functions can be defined as follows:

μAb(x) = 1 for rlow ≤ x ≤ a; (c − x)/(c − a) for a < x < c; 0 for c ≤ x ≤ rhigh,

and

μAo(x) = 0 for rlow ≤ x ≤ a; (x − a)/(c − a) for a < x < c; 1 for c ≤ x ≤ rhigh.

The optimum parameters a* and c* are chosen to maximise the entropy S(W1, W2), and the optimum threshold is Th1 = (a* + c*)/2. Other approaches to threshold determination may be used, as desired, including standard approaches.
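An exhaustive search over (a, c) for the fuzzy c-partition criterion can be sketched as follows. This is an illustrative sketch with assumed names; it implements the entropy expression exactly as the text states it (W1·P·log P terms without a minus sign), since sign conventions for fuzzy entropy vary in the literature.

```python
import numpy as np

def rcfcp_threshold(h, r_low, r_high, w1=1.0, w2=1.0):
    """Search a < c in [r_low, r_high] maximising
    S = w1*P(Ab)*log P(Ab) + w2*P(Ao)*log P(Ao)
    and return Th1 = (a* + c*)/2."""
    h = np.asarray(h, dtype=float)
    x = np.arange(r_low, r_high + 1)
    p = h[r_low:r_high + 1] / h[r_low:r_high + 1].sum()  # normalised frequencies
    best, best_ac = -np.inf, (r_low, r_high)
    for a in range(r_low, r_high):
        for c in range(a + 1, r_high + 1):
            # piecewise-linear memberships: 1 below a, ramp on (a, c), 0 above c
            mu_b = np.clip((c - x) / (c - a), 0.0, 1.0)   # background membership
            mu_o = 1.0 - mu_b                              # object membership
            pb, po = (mu_b * p).sum(), (mu_o * p).sum()
            if pb <= 0 or po <= 0:
                continue
            s = w1 * pb * np.log(pb) + w2 * po * np.log(po)
            if s > best:
                best, best_ac = s, (a, c)
    return sum(best_ac) / 2.0
```

The search space is small (grey levels within the constrained range), so brute force is adequate.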
Step S108 - Binarise Data: The noise reduced volumetric data is binarised using threshold Th1, in binarising means 20.
If the noise reduced data after step S104 is denoted as inputF(x, y, z), where (x, y, z) are voxel coordinates, inputF(x, y, z) is the grey level at voxel (x, y, z). Binarisation of inputF(x, y, z) with Th1 yields a first binary volume bin1(x, y, z) as follows:

bin1(x, y, z) = 1 if inputF(x, y, z) ≥ Th1; 0 if inputF(x, y, z) < Th1.
Step S110 - Determine First Brain Mask (Br1): The largest connected area of object voxels (that is, of objects of interest - in this case the brain) within the volumetric data is determined in the 3D data, based on the binarised data component, using a larger size structuring element (SE), in a first brain mask calculating means 22.
In this step, the connections between brain and non-brain tissue are broken by morphological erosion of the binary volume, bin1(x, y, z), with a bigger size of structuring element (SE) (e.g. a cuboid SE of size at least 5x5x5 voxels). Cuboids are preferred to enable fast implementation. Such morphological erosion removes those foreground voxels within whose neighbourhood, defined by (2sx+1)*(2sy+1)*(2sz+1), there is at least one background voxel (that is, a voxel that is not of interest). The SE size in the x, y, and z directions is denoted as (2sx+1), (2sy+1) and (2sz+1), respectively. The largest connected component is found and is deemed to be the brain mask. This mask is then dilated with the same SE to restore those voxels eroded by the previous erosion, that is, to restore its original shape and yield a first brain mask Br1(x, y, z). Both bin volumes and Br masks are binary masks, typically with the bin volumes having foreground other than the brain while the Br masks have the brain as foreground.
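The erode / keep-largest-component / dilate sequence can be sketched as below, using scipy.ndimage as a stand-in for the patent's own connected-component routine; the 5x5x5 SE follows the example size in the text, and the function name is an assumption.

```python
import numpy as np
from scipy import ndimage

def first_brain_mask(bin1, se=(5, 5, 5)):
    """Erode with a larger cuboid SE to break brain/non-brain
    connections, keep the largest 26-connected component, then
    dilate with the same SE to restore the original shape (Br1)."""
    se3 = np.ones(se, dtype=bool)                  # cuboid structuring element
    eroded = ndimage.binary_erosion(bin1, structure=se3)
    # 26-connectivity: a full 3x3x3 neighbourhood structure
    labels, n = ndimage.label(eroded, structure=np.ones((3, 3, 3)))
    if n == 0:
        return np.zeros_like(bin1)
    sizes = ndimage.sum(eroded, labels, index=range(1, n + 1))
    largest = labels == (int(np.argmax(sizes)) + 1)
    return ndimage.binary_dilation(largest, structure=se3)
```

Thin bridges and small non-brain blobs vanish under the erosion, so only the brain survives as the largest component.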
The preferred method of finding the largest connected component uses foreground 26-connectivity and is described in more detail in Singapore Patent Application No. 2005006820, filed 14 March 2005, the whole contents of which are herein incorporated by reference. One approach described in that patent application is described below.
Step 1 Initialisation of object voxels and background voxels
An initialisation procedure is performed to standardise the labels of the object voxels and the background voxels to values of 1 and 0 respectively. The input is a binary image whose two values can, of course, be different from 1 and 0. For convenience, the labels of all object voxels are initialised to a value of 1, and the labels of all background voxels are initialised to a value of 0, in preparation for subsequent processing.
Also, a current label m, which is used in subsequent steps, is initialised to an arbitrary value of 4. The current label m is assigned to the non-marked object voxels first encountered in the raster scanning order, described below in relation to Step 2.
Step 2 Determining unmarked object voxels
The binary volume b(x, y, z) is scanned for unmarked (that is, unlabelled) object voxels, starting from voxel (0, 0, 0) in a predetermined raster scanning order. To accord with raster scanning convention, one may scan with x incremented first, followed by y and finally by z. A systematic scanning procedure avoids missing any unmarked object voxels. Other predetermined scanning procedures can be adopted if desired, provided that each object voxel is addressed in due course.
Step 3 Labelling object voxels
This step labels a connected component through enhanced recursion. Starting from the first unmarked object voxel (x0, y0, z0) found in step 2, all the unmarked object voxels 26-connected to (x0, y0, z0) are marked with the same label m. This is achieved via the following sub-steps:
1) setting the working voxel w to (x0, y0, z0); 2) formulating a (2nx+1)(2ny+1)(2nz+1) sub-volume of w, which is a copy of the original label volume b(x, y, z) around the voxel w;
3) labelling all internal object voxels 26-connected to the centre voxel of the sub-volume with the current label m and changing the corresponding voxels in b(x, y, z) to m; and
4) for any voxel that is a non-internal object voxel 26-connected to w in the sub-volume, setting its coordinates to be the working voxel and going to 2).
(Sub-step 4 is a recursion, while sub-step 3 is an iteration, and the combination of sub-steps 3 and 4 is the enhanced recursion.)
Suppose the numbers of voxels of the binary volume b(x, y, z) in the x, y, and z directions are Lx, Ly, and Lz respectively. Any voxel within b(x, y, z) will have its integer coordinates (x, y, z) satisfying 0 ≤ x ≤ (Lx−1), 0 ≤ y ≤ (Ly−1), and 0 ≤ z ≤ (Lz−1). When this is not satisfied, i.e. when any one of the following inequalities is satisfied, the voxel (x, y, z) is said to be outside the volume b(x, y, z): x < 0, x > Lx−1, y < 0, y > Ly−1, z < 0, or z > Lz−1.
The (2nx+1)(2ny+1)(2nz+1) neighbourhood of a voxel (x', y', z') is the collection of all those voxels (x, y, z) satisfying |x−x'| ≤ nx, |y−y'| ≤ ny, |z−z'| ≤ nz, and (x, y, z) ≠ (x', y', z').
The derivation of a sub-volume [a cuboid with (2nx+1)(2ny+1)(2nz+1) voxels] from (x', y', z') is done this way: from (x', y', z'), find its (2nx+1)(2ny+1)(2nz+1) neighbourhood in b(x, y, z) and copy this neighbourhood to the sub-volume; those neighbourhood voxels outside b(x, y, z) are set to 0 in the formed sub-volume. The sub-volume has its own coordinate system, and the relationship between the original volume b(x, y, z) and the sub-volume is a translation: the centre voxel of the sub-volume in its own coordinate system is (nx, ny, nz), which corresponds to (x', y', z') of the original volume b(x, y, z) in the original coordinate system. Through this translation the correspondence between the two volumes is completely built. This sub-volume is also called the (2nx+1)(2ny+1)(2nz+1) sub-volume of (x', y', z') or the (2nx+1)(2ny+1)(2nz+1) sub-volume from (x', y', z'). Labelling a connected component starting from the first unmarked object voxel (x0, y0, z0) found in step 2 can be done in the following sub-steps.
Sub-step 1: set the coordinates of the working voxel w as (x0, y0, z0); Sub-step 2: formulate the sub-volume from the working voxel w;
Sub-step 3: in the sub-volume, label all internal object voxels 26-connected to its centre voxel w with the current label m and change the corresponding voxels in b(x, y, z) to m;
Sub-step 4: for any voxel that is a non-internal object voxel 26-connected to w in the sub-volume, its coordinates are set to be the working voxel w and go to sub-step 2.
(Sub-step 4 is a recursion, while sub-step 3 is an iteration, and the combination of sub-steps 3 and 4 is the enhanced recursion. Further explanation of sub-step 3 is given below.)
An iteration scheme for sub-step 3 is now described. Set an intermediate label m' = −10. Any negative number can be used, to avoid confusion with existing labelled object voxels.
Within a 3x3x3 neighbourhood of (x, y, z), any object voxels are assigned the intermediate label m' (such as −10, as noted above) while voxel (x, y, z) is itself assigned label m. In this 3x3x3 neighbourhood of (x, y, z), if any voxel (x1, y1, z1) has label m', check (x1, y1, z1)'s 3x3x3 neighbourhood. If any voxel (x2, y2, z2)'s label is 1, also change (x2, y2, z2)'s label to m'. Obviously (x2, y2, z2) is 26-connected to (x, y, z), since (x2, y2, z2) is in the 3x3x3 neighbourhood of (x1, y1, z1), which in turn is 26-connected to (x, y, z). Change (x1, y1, z1)'s label from m' to m, in any event.
Within a 5x5x5 neighbourhood of (x, y, z), if any voxel (x3, y3, z3)'s label is m', check the 3x3x3 neighbourhood of (x3, y3, z3). If any voxel (x4, y4, z4)'s label is 1, change its label to m'. Obviously (x4, y4, z4) is 26-connected to (x, y, z): as (x3, y3, z3) is 26-connected to (x, y, z) and (x4, y4, z4) is 26-connected to (x3, y3, z3), (x4, y4, z4) is also 26-connected to (x, y, z). This increase in window sizes from 3x3x3 to 5x5x5, and so on, continues until the window size reaches the predetermined maximum of (2nx+1) x (2ny+1) x (2nz+1). Consequently, all internal object voxels 26-connected to (x, y, z) within the sub-volume are labelled m, while all non-internal object voxels 26-connected to (x, y, z) within the sub-volume are set back to label 1, and the enhanced recursions are initiated from them. This ensures that the internal object voxels 26-connected to (x, y, z) are labelled layer-by-layer, with voxels closer to (x, y, z) being labelled first, while maintaining the 26-connectivity. This procedure also ensures that the sub-volume does not block the recursion from reaching out to other parts of the image to be labelled, by initiating the enhanced recursion from those non-internal object voxels that are 26-connected to (x, y, z).
Step 4 Updating labels
This step prepares for labelling the next connected component, and finds the first unmarked object voxel in the raster scan order.
The step of updating includes incrementing the current label m (from an initial value of 4) by 1, and incrementing x, followed by y and finally z, to find the next unmarked object voxel (that is, a voxel with a label of 1). Once an unmarked object voxel is found, the enhanced recursion of step 3 is initiated. The procedure stops when no further unmarked object voxels are found.
The output of the component labelling exercise is a unique label for each of the object voxels, with those voxels connected to each other having the same label, while those voxels that are non-connected have different labels.
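The labelling of steps 1-4 can be sketched with an explicit queue in place of the enhanced recursion (a standard equivalent transformation); this is a hypothetical illustration, not the patented sub-volume implementation, and keeps the text's conventions of labels starting at 4 and a raster scan with x incremented first.

```python
from collections import deque
import numpy as np

def label_26_connected(b):
    """Assign the same label to all 26-connected object voxels; labels
    start at 4 and increment per component, as in steps 1-4."""
    b = np.asarray(b)
    labels = np.where(b != 0, 1, 0).astype(int)   # step 1: standardise to 1/0
    m = 4                                          # step 1: current label m
    Lx, Ly, Lz = b.shape
    for z in range(Lz):                            # step 2: raster scan,
        for y in range(Ly):                        # x incremented first
            for x in range(Lx):
                if labels[x, y, z] != 1:
                    continue
                q = deque([(x, y, z)])             # step 3: flood the component
                labels[x, y, z] = m
                while q:
                    cx, cy, cz = q.popleft()
                    for dx in (-1, 0, 1):          # the 26-neighbourhood
                        for dy in (-1, 0, 1):
                            for dz in (-1, 0, 1):
                                nx, ny, nz = cx + dx, cy + dy, cz + dz
                                if (0 <= nx < Lx and 0 <= ny < Ly
                                        and 0 <= nz < Lz
                                        and labels[nx, ny, nz] == 1):
                                    labels[nx, ny, nz] = m
                                    q.append((nx, ny, nz))
                m += 1                             # step 4: next label
    return labels
```

Diagonally touching voxels receive the same label, which is the point of using 26-connectivity rather than 6-connectivity.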
Step S112 - Calculate Second Threshold (Th2): A second range-constrained threshold, Th2, in the superior region is determined, in the second threshold calculating means 24. Using the first brain mask Br1(x, y, z), the most superior axial slice (axS2) with a foreground area of at least a first predetermined size (e.g. 100 mm2) is found. This is to determine an axial slice in the superior region. In step S106 the axial slice is in the inferior region. The combination of two axial slices, from the superior and inferior regions, can be used to determine the local thresholds along the superior/inferior direction. For this purpose, the first binary mask could be considered to be a first iteration. The area is based on a count of '1' voxels times the voxel sizes in the x, y and z directions. The slice number of the axial slice that is a first predetermined distance (e.g. 20 mm) inferior to the axial slice axS2 is denoted as axS3. The first predetermined distance is chosen based on experience, so that the proportion of the GM and WM is relatively stable. Axial slice axS3 is used to calculate the brain threshold in the superior region of the brain.
The same approach to determining the threshold may be used as is used in step S106, described above. Again the range-constrained thresholding specifies the ROI, being the space enclosed by the skull, and suitable lower and upper bounds are used (e.g. 20% and 60%, respectively, for the second threshold) to determine the second threshold Th2.
Step S114 - Determine Re-Binarisation Thresholds: Low [lTh(z)] and high [hTh(z)] thresholds are determined for different axial slices z, in the low and high threshold calculating means 26.
When there exists significant intensity inhomogeneity along the superior-inferior direction of the volume, the first threshold Th1 and the second threshold Th2 will be very different, and different axial slices should be binarised with different thresholds. The two thresholds Th1, Th2 are considered significantly different if

|Th1 − Th2| / Th1 > r,

where r is a fractional constant. In practice r is preferably in the range (0.2, 0.3) and may be chosen based on validation and experience. Morphological dilation (e.g. with a cuboid SE of 5 mm x 5 mm x 5 mm) is performed on the first brain mask Br1 twice to yield a second binary volume bin2. For each foreground voxel in bin2, if its distance to the nearest background voxel of the head mask (the head mask is the space enclosed by the skull) is smaller than a second predetermined value, e.g. 10 mm, which is typically chosen by experience and validation, then that foreground voxel in the second binary volume bin2 is set to background.
Multiplying the second binary volume bin2 with the original volume of the noise removed MRI data provides a grey level histogram. Standard fuzzy C-means is used to categorize the histogram into 3 clusters: CSF, GM, and WM. Denoting the maximum grey level of cluster CSF as maxCSF, and the mean and standard deviation of the WM cluster as meanWM and sdWM respectively, the high threshold for re-binarisation is determined as highTh = meanWM + α1*sdWM + α2, where α1 is a constant, for example around 3, and α2 is another constant, around 5; and the low threshold lowTh is determined as lowTh = (Th1 + maxCSF + 1)/2.
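The clustering and threshold derivation can be sketched with a minimal histogram-weighted fuzzy C-means. The function name, initialisation, and iteration count are assumptions; reading the high threshold as meanWM + α1·sdWM + α2 (with sdWM, the WM standard deviation defined in the text) is an interpretation of a garbled passage, and α1 = 3, α2 = 5 follow the example constants given.

```python
import numpy as np

def rebinarisation_thresholds(h, th1, a1=3.0, a2=5.0, n_iter=100, m=2.0):
    """Split the masked-brain histogram h into 3 fuzzy clusters
    (CSF < GM < WM), then compute highTh = meanWM + a1*sdWM + a2
    and lowTh = (Th1 + maxCSF + 1)/2."""
    h = np.asarray(h, dtype=float)
    g = np.nonzero(h)[0].astype(float)           # grey levels present
    w = h[g.astype(int)]                         # their frequencies
    centres = np.linspace(g.min(), g.max(), 3)   # initial cluster centres
    for _ in range(n_iter):                      # standard FCM updates
        d = np.abs(g[:, None] - centres[None, :]) + 1e-9
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)        # memberships per grey level
        um = (u ** m) * w[:, None]
        centres = (um * g[:, None]).sum(axis=0) / um.sum(axis=0)
    order = np.argsort(centres)                  # ascending: CSF, GM, WM
    hard = np.argmax(u[:, order], axis=1)        # hard cluster per grey level
    csf, wm, wm_w = g[hard == 0], g[hard == 2], w[hard == 2]
    mean_wm = (wm * wm_w).sum() / wm_w.sum()
    sd_wm = np.sqrt(((wm - mean_wm) ** 2 * wm_w).sum() / wm_w.sum())
    return (th1 + csf.max() + 1) / 2.0, mean_wm + a1 * sd_wm + a2
```

Working on the histogram rather than the voxels keeps the FCM iteration cheap regardless of volume size.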
Linear interpolation is used to define thresholds for different axial slices when Th1 and Th2 are significantly different. Without losing generality, suppose axS3 is smaller than axS1, i.e. suppose the axial slices are counted from superior to inferior; then the threshold for different axial slices will be a function of the slice number. For axial volumes, an axial slice number is the same as the z coordinate. The low [lTh(z)] and high [hTh(z)] thresholds at different axial slices z are defined as follows:
When z is smaller than axS3, lTh(z) is

Th2 + (Th2 − lowTh)(axS3 − z) / (axS3 − axS1).

When the absolute value of the correction term (Th2 − lowTh)(axS3 − z)/(axS3 − axS1) is bigger than 5 (a value in the range of 5 to 10 may be used), the correction term is limited to a magnitude of 5, to avoid decreasing the threshold too far, which would result in too small thresholds for the most superior axial slices.
When z is in the range [axS3, axS1), lTh(z) is

(Th2 − lowTh)(z − axS1) / (axS3 − axS1) + lowTh.

When z is equal to or bigger than axS1, lTh(z) is lowTh.
When Th1 and Th2 are not significantly different, the low threshold at each axial slice z is again defined as lTh(z) = lowTh.
The high threshold hTh(z) is defined as highTh.
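The piecewise definition of lTh(z) above can be sketched as a small function. Clamping the extrapolation term to a magnitude of 5 reflects one reading of a garbled passage in the text; the function name and parameter names are assumptions.

```python
def low_threshold(z, th2, low_th, ax_s1, ax_s3, cap=5.0):
    """Slice-dependent low threshold lTh(z), assuming slices are
    numbered superior-to-inferior so that axS3 < axS1."""
    if z >= ax_s1:
        return low_th                            # inferior to axS1: lowTh
    if z >= ax_s3:                               # interpolate Th2 -> lowTh
        return (th2 - low_th) * (z - ax_s1) / (ax_s3 - ax_s1) + low_th
    # superior to axS3: extrapolate, with the correction term clamped
    corr = (th2 - low_th) * (ax_s3 - z) / (ax_s3 - ax_s1)
    if abs(corr) > cap:
        corr = cap if corr > 0 else -cap
    return th2 + corr
```

The interpolating branch reduces to Th2 at z = axS3 and to lowTh at z = axS1, so the three branches join continuously.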
Step S116 - Refine Binarisation: Using the different thresholds lTh(z) and hTh(z), the noise-reduced slices from step S104 are binarised, in a second binarising means 28.
Binarisation based on the refined thresholds is performed as follows to yield a new, third binary volume bin3(x, y, z):

bin3(x, y, z) = 1 if lTh(z) ≤ inputF(x, y, z) ≤ hTh(z); 0 otherwise.
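The band binarisation with per-slice thresholds can be written directly with NumPy broadcasting; this sketch assumes the volume is stored with z as the last axis.

```python
import numpy as np

def rebinarise(input_f, l_th, h_th):
    """bin3 = 1 where lTh(z) <= inputF(x, y, z) <= hTh(z), 0 otherwise.
    l_th and h_th are per-slice threshold arrays indexed by z."""
    l = np.asarray(l_th)[None, None, :]   # broadcast across x and y
    h = np.asarray(h_th)[None, None, :]
    return ((input_f >= l) & (input_f <= h)).astype(np.uint8)
```

Broadcasting applies each slice's own threshold pair without an explicit loop over z.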
Step S118 - Determine Second Brain Mask (Br2): The largest connected area in the volumetric data is determined for a second time in the 3D third binary volume bin3(x, y, z), based on the refined binarised data component, using a larger size SE, in a second brain mask calculating means 30.
In this step, the connections between brain and non-brain tissue are broken by morphological erosion with a bigger size of SE. Then the largest connected component is found and is deemed to be the brain mask. This mask is then dilated with the same SE to restore its original shape. The approach can be the same as is taken in step S110 described above.
The third binary volume bin3(x, y, z) is first eroded with a bigger size SE (e.g. a cuboid SE of at least 5x5x5 voxels). The SE size in the x, y, and z directions is denoted as (2sx+1), (2sy+1) and (2sz+1) respectively. Once the largest connected component of the eroded binary volume is found, the largest connected component is dilated with the cuboid SE (2sx+1)*(2sy+1)*(2sz+1) to yield a second brain mask Br2(x, y, z).
Preferably, however, when performing the erosion, the superior region above the eyeballs (that is, above zee) uses a decreased SE of (2sx−1)*(2sy−1)*(2sz−1) to keep the details of the convoluted brain surface, while the inferior region, which has z bigger than zee, uses the bigger cuboid SE (2sx+1)*(2sy+1)*(2sz+1) to break the connections between non-brain and brain tissues. Dilating the largest component to yield the second brain mask Br2(x, y, z) uses the same SE, (2sx+1)*(2sy+1)*(2sz+1) or (2sx−1)*(2sy−1)*(2sz−1) as appropriate.
The axial slice with the eyeballs is identified using the positions of the AC and the PC. Denote the z coordinates of the AC and the PC as zAC and zPC, respectively; then ze0 is the axial slice at least a third predetermined distance, e.g. 10 mm, superior to the axial slices zAC and zPC, while ze1 is the axial slice at least a fourth predetermined distance, e.g. 20 mm, inferior to the axial slices zAC and zPC. For each axial slice between the axial slices ze0 and ze1, find its ROI, which is the space enclosed by the skull, and divide the y extension into 4 equal parts. This number of parts is chosen as empirical data shows that the eyes generally fall in the first part. In the first part (where the eyes lie), calculate the ratio of the number of foreground (GM+WM) voxels to the number of voxels of the ROI.
Figure 3A is an image with 3 values (dark for background, grey for head, and white for brain) showing a superior axial slice without eyes, and Figure 3B an axial slice in which eyes are present. Both Figures 3A and 3B have front quarter lines 40, 42. The ROI area in front of the front quarter line 40 of Figure 3A comprises the light area 50 and the dark area 52, while the light area 50 comprises the GM+WM. Similarly, in Figure 3B, the ROI area in front of the front quarter line 42 comprises the light area 54 and the dark area 56, while the light area 54 comprises the GM+WM.
The first axial slice between ze0 and ze1 which has a ratio smaller than half of the maximum ratio of all these axial slices (between ze0 and ze1) is the axial slice with the eyes present, and this axial slice is denoted zee.
Step S120 - Determine Third Brain Mask (Br3): The largest connected area in the volumetric data is determined for a third time in the third binary volume bin3(x, y, z), based on the refined binarised data component, using a smaller size SE, in a third brain mask calculating means 32.
In this step, the connections between brain and non-brain tissue are broken by morphological erosion with a smaller size of SE. Then the largest connected component is found and is deemed to be the brain mask. This mask is then dilated with the same SE to restore its original shape. The approach can be similar to that taken in step S110 described above (with the main difference being in the SE size).
The third binary volume bin3(x, y, z) is first eroded with a smaller size SE (e.g. a cuboid SE of size 2 mm x 2 mm x 2 mm), followed by finding the largest connected component. The largest connected component is dilated with the same SE to yield a third brain mask Br3(x, y, z).
Step S122 - Merge Second and Third Brain Masks Br2(x, y, z) and Br3(x, y, z): The second brain mask Br2(x, y, z) and the third brain mask Br3(x, y, z) are merged in a mask merging means 34.
As a result of the different sized SEs used, the second brain mask Br2(x, y, z) is good at breaking the connection between the brain and non-brain tissues, while the third brain mask Br3(x, y, z) is good at maintaining small brain fragments. This step finds the foreground connected components of the difference between Br3(x, y, z) and Br2(x, y, z) and adds those foreground components to Br2(x, y, z), to form a fourth brain mask Br4(x, y, z). Preferably, the foreground components are only added when the minimum distance of the foreground component, of the difference image between Br3 and Br2, to the nearest background voxels of the head mask (the ROI determined in step S106) is bigger than a fifth predetermined value, e.g. 10 mm, as may be determined from experience.
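The merge of the two masks can be sketched as follows, assuming a precomputed per-voxel distance map (e.g. from a Euclidean distance transform) holds each voxel's distance to the nearest background voxel of the head mask; the 10 mm figure follows the fifth predetermined value in the text, and the function name is an assumption.

```python
import numpy as np
from scipy import ndimage

def merge_masks(br2, br3, head_background_dist, min_dist=10.0):
    """Add components present in Br3 but not Br2 back into Br2 (giving
    Br4), but only those lying deep enough inside the head mask."""
    diff = br3 & ~br2                        # fragments kept only by the small SE
    labels, n = ndimage.label(diff, structure=np.ones((3, 3, 3)))
    br4 = br2.copy()
    for i in range(1, n + 1):
        comp = labels == i
        if head_background_dist[comp].min() > min_dist:  # deep inside the head
            br4 |= comp
    return br4
```

The distance test keeps back small fragments of scalp or dura near the skull while restoring genuine brain fragments.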
Step S124 - Process Superior and Inferior Axial Slices: Special processing of certain superior axial slices and certain axial slices inferior to the eyes occurs in the superior and inferior slices processing means 36. The inputs include both the fourth binary mask Br4 and the volume data after noise removal.
In superior axial slices, attention is paid to excluding sagittal sinuses that are close to the midsagittal plane (MSP). Let z0 be the z coordinate of the most superior axial slice with at least a second predetermined area, e.g. 100 mm2 (as may be determined by experience, and by considerations of noise exclusion and robustness), of GM and WM, and let z1 be the z coordinate of the first superior axial slice that is at least a sixth predetermined distance, e.g. 10 mm, inferior to z0. From axial slices z0 to z1, for each axial slice, find the average and standard deviation of the grey levels (in the noise removed volume) along the MSP of the foreground voxels in Br4. The average plus the standard deviation is deemed the new grey level threshold, and those foreground voxels within the axial slice with a grey level smaller than the new threshold are set to background (0).
In the inferior region, especially the axial slices inferior to the eyes, non-brain tissues are likely to be connected to brain tissue and special care has been taken to remove the undesirable connections.
For axial slices inferior to axial slice zee (determined as described within step S118), an overlap index and distances are used to remove non-brain tissues. All foreground connected components in the axial slice zee are taken as object components. For the axial slice zee, its minimum and maximum y coordinates can be found and denoted as yminP and ymaxP respectively. In the axial slice below the slice at eyeball level, zee+1, all the foreground connected components are found, for instance using foreground 8-connectivity. The preferred method of finding the foreground connected components uses the approach described in more detail in Singapore Patent Application No. 2005006820, filed 14 March 2005, mentioned and described in brief above (although modified to 2D). For each connected component, an overlap index is computed as follows: for all the foreground voxels of this component (the number of voxels is denoted as ncur), count the number npr of object voxels with the same (x, y) coordinates in the previous axial slice zee; the overlap index of this component is npr/ncur.
For a foreground connected component of the current axial slice zee+1, if its minimum distance to the nearest background voxels of the head mask is smaller than a seventh predetermined distance, e.g. 7 mm, and its maximum distance is smaller than an eighth predetermined distance, e.g. 15 mm, and the overlap index of this component is smaller than a first predetermined constant, e.g. 0.4, then this foreground component is discarded (set as background).
Likewise, when the minimum distance is bigger than the seventh predetermined distance or the maximum distance is bigger than a ninth predetermined distance, e.g. 25 mm, while the overlap index is smaller than a second constant, e.g. 0.2, then this component is set to background.
If the number of voxels of the component is smaller than a value (say 100), and its minimum y coordinate is smaller than (yminP+5) or its maximum y coordinate is bigger than (ymaxP+5), then the component is discarded (it is highly probable to be non-brain tissue); otherwise this component is set as an object component. The current axial slice's minimum and maximum y coordinates of the remaining object voxels are then found and are assigned to yminP and ymaxP respectively.
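The distance and overlap-index rules for a single slice can be sketched as below. This covers the two distance/overlap tests only (not the yminP/ymaxP extent test); distances are assumed to be supplied in the same units as the predetermined values, via a per-pixel map of distance to the nearest background pixel of the head mask. Names and the use of scipy for 8-connected labelling are assumptions.

```python
import numpy as np
from scipy import ndimage

def filter_slice_below_eyes(cur, prev, head_dist, d7=7.0, d8=15.0, d9=25.0,
                            c1=0.4, c2=0.2):
    """Discard 2-D foreground components of the current slice that
    overlap too little with the slice above (overlap index npr/ncur)
    and sit suspiciously close to, or far from, the skull."""
    labels, n = ndimage.label(cur, structure=np.ones((3, 3)))  # 8-connectivity
    out = np.zeros_like(cur)
    for i in range(1, n + 1):
        comp = labels == i
        n_cur = comp.sum()
        n_pr = (comp & prev).sum()          # object pixels shared with slice above
        overlap = n_pr / n_cur
        dmin, dmax = head_dist[comp].min(), head_dist[comp].max()
        if dmin < d7 and dmax < d8 and overlap < c1:
            continue                        # near-skull, poorly overlapping: drop
        if (dmin > d7 or dmax > d9) and overlap < c2:
            continue                        # oddly placed and barely overlapping
        out |= comp
    return out
```

Repeating this slice by slice, with each filtered slice becoming `prev` for the next, matches the zee+1, zee+2, ... progression described in the text.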
This process is then repeated for slice zee+2, based on values for zee+1, and so on for slices zee+3 onwards. Step S126 - Output Brain Mask: The final brain mask slices from the mask merging means 34 and the superior and inferior slices processing means 36 are collated and output from collating and output means 38, as a sequence of binarised images.
The techniques described above are applicable for any image sequences where both white matter and grey matter are brighter than cerebrospinal fluid, in particular, but not exclusively, T1-weighted, SPGR and FLAIR images.
Although the above description suggests different components do each task, the same component can, of course, do several tasks, e.g. the same binariser and same brain mask calculator can be used for all three masks, etc.
There are many applications related to brain imaging that either require, or benefit from, the ability to accurately segment brain from non-brain tissues. For example, in the registration of functional images to high resolution MR images, both functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) functional images often contain little non-brain tissue because of the nature of the imaging, whereas the high resolution MR image will probably contain a considerable amount of non-brain tissues such as eyeballs, skin, fat, and muscle, and thus registration robustness is improved if the non-brain parts of the image can be automatically removed before registration. A second example application of brain/non-brain segmentation is as the first stage in cortical flattening procedures. A third example is in brain atrophy estimation in diseased subjects; after brain/non-brain segmentation, brain volume is measured at a single time point with respect to some normalizing volume such as skull or head size; alternatively, images from two or more time points are compared, to estimate how the brain has changed over time. A fourth application is the removal of strong ghosting effects that can occur in fMRI. These artefacts can confound motion correction, global intensity normalization, and registration to a high-resolution image.
The above-described embodiment can handle serious intensity inhomogeneity and/or noise exhibited in clinical MR images robustly, due to local thresholding and breaking the connections between brain and non-brain tissues while maintaining small brain fragments through the combination of morphological processing with different sizes of structuring elements.
The various values used and exemplified in the above-described method are based on a brain image of a typical adult human, and may be generated by experiment and experience. Of course, other values may be used as appropriate. For children or atypical adult humans, different values may be used. The invention can also be used on non-human subjects, for example mammals, especially primates, again using some or all different values, as may be determined empirically.
The various components in Figure 2 are referred to as "means", and will usually be embodied in circuits, constructed in hardware to perform a single operation or several operations, or programmed using software modules to perform those one or more operations. Possible embodiments may be made up of dedicated hardware alone, a combination of some dedicated hardware and some software-programmed hardware, or software-programmed hardware alone. Embodiments also include a conventional or other computer programmed to perform the relevant tasks.
A module, and in particular the module's functionality, can be implemented in either hardware or software. In the software sense, a module is a process, program, or portion thereof, that usually performs a particular function or related functions. In the hardware sense, a module is a functional hardware unit designed for use with other components or modules. For example, a module may be implemented using discrete electronic components, or it can form a portion of an entire electronic circuit such as an Application Specific Integrated Circuit (ASIC). Numerous other possibilities exist.
Figure 4 is a schematic representation of a computer system 200 suitable for performing the techniques described with reference to Figures 1 to 3. A computer 202 is loaded with suitable software in a memory, which software can be used to perform steps in a process that implement the techniques described herein (e.g. the steps of Figure 1). MRI data can be input, and segmentation results obtained using such a computer system 200. This computer software executes under a suitable operating system installed on the computer system 200. The computer software involves a set of programmed logic instructions that are able to be interpreted by a processor, such as a CPU, for instructing the computer system 200 to perform predetermined functions specified by those instructions. The computer software can be an expression recorded in any language, code or notation, comprising a set of instructions intended to cause a compatible information processing system to perform particular functions, either directly or after conversion to another language, code or notation.
The computer software is programmed by a computer program comprising statements in an appropriate computer language. The computer program is processed using a compiler into computer software that has a binary format suitable for execution by the operating system. The computer software is programmed in a manner that involves various software components, or code means, that perform particular steps in the process of the described techniques.
The components of the computer system 200 include: the computer 202, input and output devices such as a keyboard 204, a mouse 206 and an external memory device 208 (e.g. one or more of a floppy disc drive, a CD drive, a DVD drive and a flash memory drive) and a display 210, as well as network connexions for connecting to the Internet 212. The computer 202 includes: a processor 222, a first memory such as a ROM 224, a second memory such as a RAM 226, a network interface 228 for connecting to external networks, an input/output (I/O) interface 230 for connecting to the input and output devices, a video interface 232 for connecting to the display, a storage device such as a hard disc 234, and a bus 236.
The processor 222 executes the operating system and the computer software executing under the operating system. The random access memory (RAM) 226, the read-only memory (ROM) 224 and the hard disc 234 are used under direction of the processor 222. The video interface 232 is connected to the display 210 and provides video signals for display on the display 210. User input to operate the computer 202 is provided from the keyboard 204 and the mouse 206.
The internal storage device is exemplified here by a hard disc 234 but can include any other suitable non-volatile storage medium.
Each of the components of the computer 202 is connected to the bus 236 that includes data, address, and control buses, to allow these components to communicate with each other.
The computer system 200 can be connected to one or more other similar computers via the Internet, LANs or other networks.
The computer software program may be provided as a computer program product. During normal use, the program may be stored on the hard disc 234. However, the computer software program may be provided recorded on a portable storage medium, e.g. a CD-ROM read by the external memory device 208. Alternatively, the computer software can be accessed directly from the network 212.
In either case, a user can interact with the computer system 200 using the keyboard 204 and the mouse 206 to operate the programmed computer software executing on the computer 202.
The computer system 200 is described for illustrative purposes: other configurations or types of computer systems can be equally well used to implement the described techniques. The foregoing is only an example of a particular type of computer system suitable for implementing the described techniques.

Claims

1. A method of segmenting a brain in image data, comprising:
determining a second brain mask for the brain, segmenting the brain in the image data;
determining a third brain mask for the brain, segmenting the brain in the image data;
merging one or more corresponding portions of the second and third brain masks; and
generating a fourth brain mask, segmenting the brain in the image data, the fourth brain mask comprising the merged one or more corresponding portions of the second and third brain masks.
2. A method according to claim 1, wherein the image data represents individual image slices of the brain.
3. A method according to claim 2, wherein a portion of a brain mask comprises a slice of the brain mask.
4. A method according to any one of the preceding claims, further comprising determining a first brain mask for the brain, prior to determining the second and third brain masks.
5. A method according to any one of the preceding claims, wherein determining a brain mask is based on a structuring element size.
6. A method according to claim 5, wherein the structuring element size for the second brain mask is different from the structuring element size for the third brain mask.
7. A method according to claim 5 or 6, wherein the structuring element size for the second brain mask is smaller for a plurality of superior portions of the brain, above the eyeball level, than for a plurality of portions of the brain in the inferior region of the brain below the eyeball level.
8. A method according to any one of claims 5 to 7, wherein the structuring element size for the second brain mask is larger than for the third brain mask for corresponding portions of the brain.
9. A method according to any one of claims 5 to 8, wherein determining a brain mask for a brain comprises morphological erosion of volumetric image data of the brain based on a structuring element size.
10. A method according to any one of claims 5 to 9, wherein the structuring elements are cuboids.
11. A method according to any one of the preceding claims, wherein determining a brain mask for a brain comprises determining a largest connected component for binarised volumetric data of the brain.
12. A method according to claim 11 when dependent on at least claim 9, wherein determining a brain mask for a brain further comprises dilating the largest connected component based on the same structuring element size used for the morphological erosion.
13. A method according to claim 11 or 12, wherein determining the largest connected component for the volumetric data comprises:
labelling with a current label all voxels that are internal to a predetermined sub-volume oriented with respect to an unlabelled voxel, and directly connected to the unlabelled voxel;
repeating the labelling step for all voxels that are not internal to the predetermined sub-volume, but which are labelled with a current label;
increasing a window size to a predetermined maximum;
incrementing the current label; and
repeating the preceding steps for remaining unlabelled object voxels.
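The labelling loop of claim 13 amounts to connected-component labelling with an incrementing label. The sketch below is a simplified 2-D, 4-connected flood-fill variant, not the windowed sub-volume scheme the claim recites; the function name and the return of the largest component's label (used by claim 11) are illustrative assumptions.

```python
from collections import deque
import numpy as np

def label_components(binary):
    """Label 4-connected foreground components by breadth-first flood fill.
    Returns the label image and the label of the largest component."""
    labels = np.zeros(binary.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(binary)):
        if labels[seed]:
            continue                      # already labelled by an earlier seed
        current += 1                      # "incrementing the current label"
        labels[seed] = current
        queue = deque([seed])
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1]
                        and binary[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    if current == 0:
        return labels, 0
    sizes = np.bincount(labels.ravel())   # sizes[0] counts background
    return labels, int(sizes[1:].argmax()) + 1

# Two components: one of three pixels, one of four pixels.
binary = np.array([[1, 1, 0, 0, 1],
                   [0, 1, 0, 0, 1],
                   [0, 0, 0, 1, 1]], dtype=bool)
labels, largest = label_components(binary)
```

The largest connected component, selected here by voxel count, is what claim 11 retains as the brain candidate before dilation.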
14. A method according to any one of the preceding claims, wherein the second and third brain masks for a portion of the brain are merged if the minimum distance between the second brain mask and background voxels of the head mask for that portion is bigger than a predetermined value.
15. A method according to any one of the preceding claims, wherein the fourth brain mask further comprises portions of one of the second and third brain masks, the fourth brain mask thereby comprising portions of one of the second and third brain masks corresponding to some portions of the brain and merged portions of the second and third brain masks for other portions of the brain.
16. A method according to any one of the preceding claims, further comprising outputting the fourth brain mask.
17. A method according to any one of the preceding claims, further comprising processing one or more portions of the fourth brain mask for superior portions of the brain, to exclude sagittal sinuses close to the midsagittal plane.
18. A method according to claims 16 and 17, wherein the output brain mask includes one or more of the brain mask portions processed to exclude sagittal sinuses close to the midsagittal plane.
19. A method according to any one of the preceding claims, further comprising processing one or more portions of the fourth brain mask for inferior portions of the brain, inferior to eyeball level, to remove connexions between brain tissue and non-brain tissue.
20. A method according to claim 19 when dependent on at least claim 16, wherein the output brain mask includes one or more of the brain mask portions processed for slices to remove connexions between brain tissue and non-brain tissue.
21. A method according to any one of the preceding claims, wherein the masks are determined from noise reduced image data.
22. A method according to claim 21, further comprising noise-reducing the image data prior to determining the brain masks.
23. A method according to claim 21 or 22, wherein the image data is noise-reduced by diffusion filtering.
24. A method according to claim 23, wherein the diffusion filtering comprises 3D anisotropic non-linear diffusion filtering.
25. A method according to any one of the preceding claims, wherein the brain masks are determined for binarised image data.
26. A method according to claim 25 when dependent on at least claim 21, wherein the binarised image data is noise-reduced binarised image data.
27. A method according to claim 25 or 26, further comprising binarising the image data prior to determining the brain masks.
28. A method according to any one of claims 25 to 27 when dependent on at least claim 4, wherein a first binarisation of the image data binarises the image data for determining the first brain mask.
29. A method according to claim 28, wherein the first binarisation uses a first binarisation threshold.
30. A method according to claim 29, further comprising determining the first binarisation threshold.
31. A method according to claim 30, wherein the first binarisation threshold is determined by range-constrained thresholding.
32. A method according to any one of claims 25 to 31, wherein a second binarisation of the image data binarises the image data for determining the second and third brain masks.
33. A method according to claim 32, wherein the second binarisation uses one or more pairs of binarisation thresholds.
34. A method according to claim 33, wherein the image data of a portion of the brain is set to one level if it falls between the two binarisation thresholds for the portion and to another level if it falls outside the two binarisation thresholds.
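The per-portion pair binarisation of claim 34 can be sketched directly: intensities falling between the two thresholds map to one level (foreground), everything else to the other. Inclusive boundary handling and the example intensities below are assumptions; the claim does not fix either.

```python
import numpy as np

def binarise_with_pair(portion, low, high):
    """Set image data to one level (True) if it falls between the pair of
    binarisation thresholds, and to the other level (False) otherwise."""
    return (portion >= low) & (portion <= high)

# Hypothetical grey levels for one image portion:
slice_data = np.array([[12, 80, 140],
                       [200, 95, 60]])
mask = binarise_with_pair(slice_data, 70, 150)
```

Only the voxels with intensities between 70 and 150 survive as foreground, which is how a threshold pair can exclude both dark background and bright non-brain tissue at once.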
35. A method according to claim 33 or 34, wherein the pair of binarisation thresholds that is used to binarise an image portion of the brain depends on the position of the image portion of the brain within the brain.
36. A method according to any one of claims 33 to 35, further comprising determining a second binarisation threshold.
37. A method according to any one of claims 33 to 36, further comprising determining the one or more pairs of binarisation thresholds.
38. A method according to claims 36 and 37, when dependent on at least claim 29, wherein determining the one or more pairs of binarisation thresholds uses both the first and second binarisation thresholds.
39. A method according to any one of claims 33 to 38, further comprising determining the one or more pairs of binarisation thresholds, comprising:
determining two representative thresholds, at the superior and inferior ends of the brain; and
using linear interpolation to derive thresholds in between.
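Claim 39's interpolation step can be sketched in a few lines of plain Python: given representative thresholds at the superior-end and inferior-end slices, the per-slice thresholds in between follow by linear interpolation. The function name and the example values are illustrative, not taken from the embodiment.

```python
def interpolated_thresholds(t_superior, t_inferior, n_slices):
    """Linearly interpolate one threshold per slice between the
    superior-end and inferior-end representative thresholds."""
    if n_slices < 2:
        return [t_superior] * n_slices
    step = (t_inferior - t_superior) / (n_slices - 1)
    return [t_superior + i * step for i in range(n_slices)]
```

For instance, with a superior threshold of 100 and an inferior threshold of 60 over five slices, the intermediate slices receive evenly spaced thresholds of 90, 80 and 70.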
40. A method according to any one of claims 7, 8, 14, 15, 17 to 20 and 35 when dependent on at least claim 2, or any claim dependent thereon, wherein a portion of the brain comprises a slice of the brain.
41. A method of processing brain masks for superior image slices of a brain to exclude sagittal sinuses close to the midsagittal plane, comprising:
determining a first image slice, being the most superior image slice to have a first predetermined brain area;
determining a second image slice, being the next image slice that is at least a predetermined distance inferior to the first image slice;
for image slices between the first and second image slices, determining the average and standard deviation of the grey levels along the midsagittal plane of foreground voxels of the brain mask;
using the sum of the average and standard deviation to determine a grey level threshold;
comparing the grey levels of the foreground voxels with the grey level threshold; and
adjusting the brain masks by setting those foreground voxels to background for which the grey level is below the threshold.
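The per-slice adjustment of claim 41 can be sketched as follows. The `band` half-width around the midsagittal column is a hypothetical parameter (the claim does not say how far from the plane the adjustment reaches), and the example grey levels are invented to make the effect visible.

```python
import numpy as np

def exclude_near_msp(mask, grey, msp_col, band=1):
    """Claim 41 sketch for one slice: the threshold is the mean plus the
    standard deviation of the grey levels of mask-foreground voxels in the
    midsagittal column; foreground voxels within `band` columns of the
    plane whose grey level is below the threshold become background."""
    column = grey[mask[:, msp_col], msp_col]
    threshold = column.mean() + column.std()
    out = mask.copy()
    lo, hi = max(0, msp_col - band), msp_col + band + 1
    out[:, lo:hi] &= grey[:, lo:hi] >= threshold
    return out

# Bright brain (200) with a dark sinus (10) running along the midsagittal
# column; the last row of the column is brain tissue.
grey = np.array([[200,  10, 200],
                 [200,  10, 200],
                 [200,  10, 200],
                 [200, 200, 200]])
mask = np.ones(grey.shape, dtype=bool)
adjusted = exclude_near_msp(mask, grey, msp_col=1)
```

With these values the threshold lands between the dark sinus voxels and the bright brain voxels, so only the dark midline voxels are reset to background.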
42. A method of processing brain masks for image slices inferior to eyeball level to remove connexions between brain tissue and non-brain tissue, the method comprising:
determining connected foreground components for a plurality of slices below eyeball level;
for individual determined connected foreground components, determining an overlap index between that connected foreground component and the corresponding connected foreground component in an adjacent slice;
for individual determined connected foreground components, determining minimum and maximum distances to background voxels of the head mask of the same slice; and
for individual determined connected foreground components, comparing the combination of the determined overlap index and the determined minimum and maximum distances with a set of predetermined values, and setting the determined connected foreground component as background or brain tissue based on the result.
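Claim 42 relies on two measurements per connected component: an overlap index with the corresponding component in the adjacent slice, and minimum/maximum distances to head-mask background voxels. Neither the exact index definition nor the distance metric is fixed by the claim, so the intersection-over-component-size index and the brute-force Euclidean distances below are assumptions for illustration.

```python
import numpy as np

def overlap_index(component, adjacent):
    """Fraction of the component's voxels that are also foreground in the
    adjacent slice (one plausible reading of the claim's overlap index)."""
    return (component & adjacent).sum() / component.sum()

def min_max_distance_to_background(component, head_mask):
    """Brute-force Euclidean min/max of each component voxel's distance to
    its nearest head-mask background voxel in the same slice."""
    comp = np.argwhere(component).astype(float)
    bg = np.argwhere(~head_mask).astype(float)
    dists = np.sqrt(((comp[:, None, :] - bg[None, :, :]) ** 2).sum(axis=-1))
    nearest = dists.min(axis=1)          # nearest background per voxel
    return float(nearest.min()), float(nearest.max())

# Tiny worked example with invented masks:
a = np.array([[True, False], [True, False]])
b = np.array([[True, True], [False, False]])
idx = overlap_index(a, b)

comp = np.zeros((3, 3), dtype=bool); comp[0, 0] = True
head = np.ones((3, 3), dtype=bool); head[0, 2] = False; head[2, 0] = False
mn, mx = min_max_distance_to_background(comp, head)
```

A component with a low overlap index and a small minimum distance to the head boundary is the kind of candidate the claim would reclassify as non-brain.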
43. A method of segmenting a brain in image data representing individual image slices of a brain, comprising:
calculating a first threshold for the image data;
first binarising the image data using the first threshold;
determining a first binary mask for the brain;
calculating a second binarising threshold for the image data, using a portion of the first binary mask corresponding to at least one slice;
re-binarising the image data using at least one threshold pair determined using the first and second thresholds; and
determining a second binary mask for the brain, segmenting the brain in the image data.
44. A method according to claim 43, further comprising:
determining a third binary mask for the brain, segmenting the brain in the image data;
merging a plurality of corresponding slices of the second and third brain masks; and
generating a fourth brain mask, segmenting the brain in the image data, the fourth brain mask comprising the merged corresponding slices of the second and third brain masks.
45. A method according to any one of the preceding claims, wherein the image data comprises T1-weighted, spoiled gradient-recalled, or Fluid Attenuated Inversion Recovery magnetic resonance imaging data.
46. Apparatus operable according to the method of any one of the preceding claims.
47. Apparatus for segmenting a brain in image data representing individual image slices of a brain, comprising:
second brain mask calculating means for determining a second brain mask for the brain, segmenting the brain in the image data;
third brain mask calculating means for determining a third brain mask for the brain, segmenting the brain in the image data; and
mask merging means for merging a plurality of corresponding slices of the second and third brain masks, to generate a fourth brain mask, segmenting the brain in the image data and comprising the merged corresponding slices of the second and third brain masks.
48. Apparatus for segmenting a brain in image data representing individual image slices of a brain, comprising:
first threshold calculating means for calculating a first threshold for the image data;
first binarising means for first binarising the image data;
first brain mask calculating means for determining a first binary mask for the brain;
second threshold calculating means for calculating a second threshold for the image data, using a portion of the first binary mask corresponding to at least one slice;
second binarising means for re-binarising the image data using at least one threshold pair determined using the first and second thresholds; and
second brain mask calculating means for determining a second binary mask for the brain, segmenting the brain in the image data.
49. Apparatus according to claim 47 or 48 operable according to the method of any one of claims 1 to 45.
50. Apparatus according to any one of claims 47 to 49 being a computer system.
51. A computer program product for segmenting a brain in image data representing individual image slices of a brain, comprising:
computer readable program code for determining a second brain mask for the brain, segmenting the brain in the image data;
computer readable program code for determining a third brain mask for the brain, segmenting the brain in the image data; and
computer readable program code for merging a plurality of corresponding slices of the second and third brain masks, to generate a fourth brain mask, segmenting the brain in the image data and comprising the merged corresponding slices of the second and third brain masks.
52. A computer program product for segmenting a brain in image data representing individual image slices of a brain, comprising:
computer readable program code for calculating a first threshold for the image data;
computer readable program code for first binarising the image data;
computer readable program code for determining a first binary mask for the brain;
computer readable program code for calculating a second threshold for the image data, using a portion of the first binary mask corresponding to at least one slice;
computer readable program code for re-binarising the image data using at least one threshold pair determined using the first and second thresholds; and
computer readable program code for determining a second binary mask for the brain, segmenting the brain in the image data.
53. A computer program product operable according to the method of any one of claims 1 to 45.
PCT/SG2005/000147 — Priority date: 2005-05-11 — Filing date: 2005-05-11 — Method, apparatus and computer software for segmenting the brain from mr data — WO2006121410A1 (en)

Priority Applications (1)

PCT/SG2005/000147 (WO2006121410A1) — Priority date: 2005-05-11 — Filing date: 2005-05-11 — Method, apparatus and computer software for segmenting the brain from mr data


Publications (1)

Publication Number: WO2006121410A1

Family ID: 37396825



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000079481A1 (en) * 1999-06-23 2000-12-28 Massachusetts Institute Of Technology Mra segmentation using active contour models
WO2004051568A1 (en) * 2002-11-29 2004-06-17 Isis Innovation Limited Brain connectivity mapping
JP2004340954A (en) * 2003-04-23 2004-12-02 Daiichi Radioisotope Labs Ltd Brain image data processing system, method, program, and recording medium


Non-Patent Citations (5)

Title
ATKINS M.S. ET AL.: "FULLY AUTOMATIC SEGMENTATION OF THE BRAIN IN MRI", IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 17, no. 1, February 1998 (1998-02-01), pages 98 - 107, XP000754623 *
DATABASE WPI Week 200501, Derwent World Patents Index; Class P31, AN 2005-004297, XP003003714 *
HULT R.: "GREY-LEVEL MORPHOLOGY BASED SEGMENTATION OF MRI OF THE HUMAN CORTEX", IMAGE ANALYSIS AND PROCESSING, 2003. PROCEEDINGS. 11TH INTERNATIONAL CONFERENCE, 26 September 2001 (2001-09-26) - 28 September 2001 (2001-09-28), pages 578 - 583, XP010561309 *
HULT R.: "SEGMENTATION AND VISUALISATION OF HUMAN BRAIN STRUCTURES", COMPREHENSIVE SUMMARIES OF UPPSALA DISSERTATIONS FROM THE FACULTY OF SCIENCE AND TECHNOLOGY, UPPSALA, 2003, XP003004440 *
MACKIEWICH B.: "INTRACRANIAL BOUNDARY DETECTION AND RADIO-FREQUENCY CORRECTION IN MAGNETIC RESONANCE IMAGES", 19 August 1995 (1995-08-19), XP003003713 *

Cited By (8)

Publication number Priority date Publication date Assignee Title
EP2446418A1 (en) * 2009-06-23 2012-05-02 Agency For Science, Technology And Research A method and system for segmenting a brain image
EP2446418A4 (en) * 2009-06-23 2013-11-13 Agency Science Tech & Res A method and system for segmenting a brain image
US8831328B2 (en) 2009-06-23 2014-09-09 Agency For Science, Technology And Research Method and system for segmenting a brain image
AT510329A1 (en) * 2010-08-27 2012-03-15 Tissuegnostics Gmbh METHOD FOR DETECTING A TISSUE STRUCTURE
AT510329B1 (en) * 2010-08-27 2012-05-15 Tissuegnostics Gmbh METHOD FOR DETECTING A TISSUE STRUCTURE
CN108701220A (en) * 2016-02-05 2018-10-23 索尼公司 System and method for handling multi-modality images
CN110046646A (en) * 2019-03-07 2019-07-23 深圳先进技术研究院 Image processing method, calculates equipment and storage medium at system
CN110046646B (en) * 2019-03-07 2023-06-30 深圳先进技术研究院 Image processing method, system, computing device and storage medium


Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase (Ref country code: DE)
NENP Non-entry into the national phase (Ref country code: RU)
122 Ep: pct application non-entry in european phase (Ref document number: 05737725; Country of ref document: EP; Kind code of ref document: A1)