WO2013155300A1 - Techniques for segmentation of organs, tumors, and objects - Google Patents


Info

Publication number
WO2013155300A1
Authority
WO
WIPO (PCT)
Prior art keywords
boundary
voxels
image data
indicates
distance
Prior art date
Application number
PCT/US2013/036166
Other languages
English (en)
Inventor
Xiaotao Guo
Binsheng Zhao
Lawrence Schwartz
Original Assignee
The Trustees Of Columbia University In The City Of New York
Priority date
Filing date
Publication date
Application filed by The Trustees Of Columbia University In The City Of New York
Priority to US14/394,097 (US10388020B2)
Publication of WO2013155300A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/143 Segmentation; Edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/149 Segmentation; Edge detection involving deformable models, e.g. active contour models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/60 Type of objects
    • G06V20/69 Microscopic objects, e.g. biological cells or cellular parts
    • G06V20/695 Preprocessing, e.g. image segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/24 Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10076 4D tomography; Time-sequential 3D tomography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10104 Positron emission tomography [PET]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10132 Ultrasound image
    • G06T2207/10136 3D ultrasound image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20092 Interactive image processing based on input by user
    • G06T2207/20104 Interactive definition of region of interest [ROI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20152 Watershed segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30024 Cell structures in vitro; Tissue sections in vitro
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30056 Liver; Hepatic
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30096 Tumor; Lesion

Definitions

  • Cross sectional imaging is an imaging technique which produces a large series of two-dimensional (2D) images of a subject, e.g., a human subject.
  • Examples of cross sectional imaging techniques include computerized tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), SPECT scanning, ultrasonography (US), among others.
  • a set of cross sectional images for a single patient, e.g., for different axially located cross-sections or for the same cross section at different times, can be considered three dimensional (3D) image data, and even four dimensional (4D) image data for combinations of axial and temporal cross sectional images.
  • the cross sectional images can be processed by segmentation, which generally involves separating objects not of interest from objects of interest, e.g., extracting anatomical surfaces, structures, or regions of interest from the images for the purposes of anatomical identification, diagnosis, evaluation, and volumetric measurements.
  • volumetric measurement can be more accurate and sensitive than conventional linear measurements.
  • 3D segmentation of cross sectional images provides a feasible way to quantify tumor volume and volume changes over time.
  • voxel refers to an element of cross sectional images and is meant to refer to a picture element (pixel) of 2D image data as well as a volume element (voxel) of 3D image data as well as an element of any higher dimensional image data, unless a more restrictive meaning is evident from the context.
  • image data means any amplitude values distributed on a 2D or higher dimensional array of voxels, whether measured directly using scanning equipment, or the result of one or more subsequent transformations, such as Fourier Transform, edge detection, amplitude normalization, or shift or other transform or preprocessing.
  • In a first set of embodiments, a method includes obtaining image data that indicates amplitude values at each of a plurality of voxels; and, obtaining initial boundary data that indicates an initial position of a first boundary associated with a first object represented by the image data and an initial position of a different second boundary associated with a different second object represented by the image data.
  • the method also includes determining a first marker that indicates a first amplitude value inside the first object and determining a second marker that indicates a second amplitude value inside the second object.
  • the method further includes determining a first topographical distance between a voxel on at least one boundary and the first marker, and determining a second topographical distance between the voxel and the second marker.
  • the method then includes determining a revised position of at least one of the first boundary and the second boundary by evaluating an evolution equation that includes differences between the first and second topographical distances for the voxel on the at least one boundary, and also includes at least one other term related to boundary detection.
  • In a second set of embodiments, a method includes obtaining image data that indicates amplitude values at each of a plurality of voxels, and obtaining initial boundary data that indicates an initial position of a boundary associated with an object represented by the image data. The method further includes determining a revised position of the boundary by evaluating an evolution equation that includes differences of amplitude values for voxels on the boundary from a statistical metric of amplitude of voxels inside the boundary and differences of amplitude values for voxels on the boundary from a statistical metric of amplitude of voxels outside the boundary, for a limited region that lies within a distance r of the boundary. The distance r is small compared to a perimeter of the boundary.
  • the method also includes automatically detecting both an inner boundary and an outer boundary based on a single initial boundary that encompasses the outer boundary.
  • a computer-readable medium causes an apparatus, or an apparatus or system is configured, to perform one or more steps of one or more of the above methods.
  • FIG. 1A is a block diagram that illustrates an imaging system for tissue detection, according to an embodiment
  • FIG. 1B is a block diagram that illustrates scan elements in a 2D scan, such as one scanned image from a CT scanner
  • FIG. 1C is a block diagram that illustrates scan elements in a 3D scan, such as stacked multiple scanned images from a CT imager or true 3D scan elements from volumetric CT imagers or MRI or US
  • FIG. 2 is a flow chart that illustrates at a high level a method for refining a boundary of one or more tissues, according to an embodiment
  • FIG. 3 is a flow chart that illustrates a method for performing a step of the method of FIG. 2 to perform active contour model based on a local regional statistic with topographical effects, according to an embodiment;
  • FIG. 4A and FIG. 4B are flow charts that illustrate alternative methods for performing a step of the method of FIG. 3 to compute topographical distance, according to various embodiments;
  • FIG. 5 is a block diagram that illustrates an example initial boundary, local region and example marker for each of two objects represented in image data for computing topographical effects, according to an embodiment
  • FIG. 6 is a graph that illustrates a separation weighting function for the topographical effects, according to an embodiment
  • FIG. 7A is a block diagram that illustrates the boundary voxels on a first boundary affected by the local regional statistics, according to an embodiment
  • FIG. 7B is a block diagram that illustrates the boundary voxels on a first boundary affected by the topographical effects of a second boundary, according to an embodiment
  • FIG. 8 is a block diagram that illustrates an example graphical user interface for controlling active contour model, according to an embodiment
  • FIG. 9A is a block diagram that illustrates five example frames of simulated image data in which two objects represented in noisy image data, which are separate in at least one frame, maintain a boundary after contacting each other, according to an embodiment
  • FIG. 9B is a block diagram that illustrates example boundaries for the five example frames of FIG. 9A based on a local regional statistic of the upper object boundary initialized in the first frame, according to an embodiment
  • FIG. 9C is a block diagram that illustrates example boundaries for the five example frames of FIG. 9A based on a local regional statistic of the lower object boundary initialized in the first frame, according to an embodiment
  • FIG. 9D is a block diagram that illustrates example boundaries for the five example frames of FIG. 9A based on a local regional statistic and topographic effects for the upper object boundary initialized in the first frame, according to an embodiment
  • FIG. 9E is a block diagram that illustrates example boundaries for the five example frames of FIG. 9A based on a local regional statistic and topographic effects for the lower object boundary initialized in the first frame, according to an embodiment
  • FIG. 10A is a block diagram that illustrates five example frames of simulated image data in which three objects represented in noisy image data, which are separate in at least one frame, maintain a boundary after contacting one or more of each other, according to an embodiment
  • FIG. 10B is a block diagram that illustrates example boundaries for the five example frames of FIG. 10A based on a local regional statistic and topographic effects for three boundaries initialized in the first frame, according to an embodiment
  • FIG. 11A is a graph that illustrates an example contour map with an inner contour suitable for a brain tumor segmentation, according to an embodiment
  • FIG. 11B is a graph that illustrates an example intensity profile through the map of FIG. 11A, according to an embodiment
  • FIG. 12A is an image that illustrates an example initial boundary for a brain tumor in one slice of a magnetic resonance (MR) scan, according to an embodiment
  • FIG. 12B is an image that illustrates an example refined double boundary for a brain tumor in one slice of a MR scan, according to an embodiment
  • FIG. 13 is a block diagram that illustrates a computer system upon which an embodiment of the invention may be implemented.
  • FIG. 14 illustrates a chip set upon which an embodiment of the invention may be implemented.
  • Some embodiments of the invention are described below in the context of segmenting tumors in CT scans. However, the invention is not limited to this context. In other embodiments, other imaging modalities, such as MRI, magnetic resonance spectral imaging (MRSI), PET, SPECT, US, microscopy, cytometry imaging, among others, are employed to accumulate 2D, 3D, 4D or higher dimensionality image data, for segmenting other tissues, organs, tumors or cells, collectively called tissue detection hereinafter.
  • FIG. 1A is a block diagram that illustrates an imaging system 100 for tissue detection, according to an embodiment.
  • the system 100 is designed for determining the spatial arrangement of soft target tissue in a living body.
  • a living body is depicted, but is not part of the system 100.
  • a living body is depicted in a first spatial arrangement 132a at one time and includes a target tissue in a corresponding spatial arrangement 134a.
  • the same living body is in a second spatial arrangement 132b that includes the same or changed target tissue in a different corresponding spatial arrangement 134b.
  • system 100 includes a scanning device 140, such as a full dose X-ray computed tomography (CT) scanner, or a magnetic resonance imaging (MRI) scanner, among others.
  • the scanning device 140 is used at one or more different times.
  • the device 140 is configured to produce scanned images that each represent a cross section of the living body at one of multiple cross sectional (transverse) slices arranged along the axial direction of the body, which is oriented in the long dimension of the body.
  • data from the imager 140 is received at a computer 160 and stored on storage device 162.
  • Computer systems and storage devices like 160, 162, respectively, are described in more detail below with reference to FIG. 13 and 14.
  • Scan data 180a, 180b, 190a, 190b based on data measured at imager 140 at one or more different times or axial locations or both are stored on storage device 162.
  • For example, scan data 180a and scan data 180b, which include scanned images at two slices separated in the axial direction, are stored based on measurements from scanning device 140 at one time.
  • Scan data 190a, 190b, which include scanned images at two slices separated in the axial direction, are stored based on measurements from scanning device 140 at a different time.
  • a tissue detection process 150 operates on computer 160 to determine a boundary between scan elements of scan data which are inside and outside a particular target tissue or cell.
  • the boundary data is stored in boundary data 158 in associations with the scan data, e.g., scan data 180a, 180b, 190a, 190b.
  • Although processes, equipment, and data structures are depicted in FIG. 1A as integral blocks in a particular arrangement for purposes of illustration, in other embodiments one or more processes or data structures, or portions thereof, are arranged in a different manner, on the same or different hosts, in one or more databases, or are omitted, or one or more different processes or data structures are included on the same or different hosts.
  • Although system 100 is depicted with a particular number of scanning devices 140, computers 160, and scan data 180, 190 on storage device 162 for purposes of illustration, in other embodiments more or fewer scanning devices, computers, storage devices and scan data constitute an imaging system for determining spatial arrangement of tissues, including cells.
  • FIG. 1B is a block diagram that illustrates scan elements in a 2D scan 110, such as one scanned image from a CT scanner.
  • the two dimensions of the scan 110 are represented by the x direction arrow 102 and the y direction arrow 104.
  • the scan 110 consists of a two dimensional array of 2D scan elements (pixels) 112 each with an associated position.
  • a 2D scan element position is given by a row number in the x direction and a column number in the y direction of a rectangular array of scan elements.
  • a value at each scan element position represents a measured or computed intensity or amplitude that represents a physical property (e.g., X-ray absorption, or resonance frequency of an MRI scanner) at a corresponding position in at least a portion of the spatial arrangement 132a, 132b of the living body.
  • the measured property is called amplitude hereinafter and is treated as a scalar quantity.
  • two or more properties are measured together at a pixel location and multiple amplitudes are obtained that can be collected into a vector quantity, such as spectral amplitudes in MRSI.
  • FIG. 1C is a block diagram that illustrates scan elements in a 3D scan 120, such as stacked multiple scanned images from a CT imager or true 3D scan elements from volumetric CT imagers or MRI or US.
  • the three dimensions of the scan are represented by the x direction arrow 102, the y direction arrow 104, and the z direction arrow 106.
  • the scan 120 consists of a three dimensional array of 3D scan elements (also called volume elements and abbreviated as voxels) 122 each with an associated position.
  • a 3D scan element position is given by a row number in the x direction, column number in the y direction and a scanned image number (also called a scan number) in the z (axial) direction of a cubic array of scan elements or a temporal sequence of scanned slices.
  • a value at each scan element position represents a measured or computed intensity that represents a physical property (e.g., X-ray absorption for a CT scanner, or resonance frequency of an MRI scanner) at a corresponding position in at least a portion of the spatial arrangement 132a, 132b of the living body.
  • The term voxel is used herein to represent either 2D scan elements (pixels) or 3D scan elements (voxels), or 4D scan elements, or some combination, depending on the context.
  • Amplitude is often expressed as one of a series of discrete gray-levels.
  • a grey- level image may be seen as a topographic relief, where the grey level of a voxel is interpreted as its altitude in the relief.
  • a drop of water falling on a topographic relief flows along a path to finally reach a local minimum.
  • the watershed of a relief corresponds to the limits of the adjacent catchment basins of the drops of water.
  • the horizontal gradient is expressed as a two element vector at each voxel, the magnitude and direction of the steepest increase in amplitude from the voxel to any of its neighbors.
  • a voxel may have one, two, four, six, eight or more neighbors.
  • a drop of water falling on a topographic relief flows towards the "nearest" minimum.
  • the "nearest" minimum is that minimum which lies at the end of the path of steepest descent. In terms of topography, this occurs if the point lies in the catchment basin of that minimum.
  • the length of that path weighted by the altitude drop is related to the topographical distance, as described in more detail below.
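The drop-of-water intuition above can be sketched in a few lines of numpy. This is illustrative only: `steepest_descent_path` and its altitude-weighted cost are simplified stand-ins for the formal topographical distance, not the patent's definition.

```python
import numpy as np

def steepest_descent_path(relief, start):
    """Follow the path of steepest descent from `start` (row, col) on a
    grey-level relief until a local minimum is reached, accumulating the
    altitude drop along the way as a rough topographical-distance proxy."""
    moves = [(-1, 0), (1, 0), (0, -1), (0, 1),
             (-1, -1), (-1, 1), (1, -1), (1, 1)]  # 8-connected neighbours
    path, cost = [start], 0.0
    r, c = start
    while True:
        here = relief[r, c]
        # pick the neighbour with the steepest drop in altitude
        best, best_drop = None, 0.0
        for dr, dc in moves:
            nr, nc = r + dr, c + dc
            if 0 <= nr < relief.shape[0] and 0 <= nc < relief.shape[1]:
                drop = here - relief[nr, nc]
                if drop > best_drop:
                    best, best_drop = (nr, nc), drop
        if best is None:       # local minimum: no lower neighbour
            return path, cost
        cost += best_drop      # weight the step by the altitude drop
        r, c = best
        path.append(best)

relief = np.array([[5, 4, 3],
                   [4, 2, 1],
                   [3, 1, 0]], dtype=float)
path, cost = steepest_descent_path(relief, (0, 0))
```

On this tiny relief the "drop of water" starting at the top-left corner slides diagonally into the catchment basin whose minimum is the bottom-right voxel.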
  • FIG. 2 is a flow chart that illustrates at a high level a method 200 for refining a boundary of one or more tissues, according to an embodiment.
  • Although steps are depicted in FIG. 2, and in subsequent flowcharts FIG. 3, FIG. 4A and FIG. 4B, as integral steps in a particular order for purposes of illustration, in other embodiments, one or more steps, or portions thereof, are performed in a different order, or overlapping in time, in series or in parallel, or are omitted, or one or more additional steps are added, or the method is changed in some combination of ways.
  • In step 201, a region of interest (ROI) is obtained.
  • Any method may be used to obtain the ROI, including receiving data from an operator, either unsolicited or in response to a prompt, or retrieving a mask that indicates a one in voxels that are within the ROI and a zero elsewhere, from a storage device on a local or remote computer, at a fixed location or a location indicated by a user or program operating on a remote computer, either in response to a request or unsolicited.
  • the ROI is defined by a user, such as a radiologist viewing the images on a display device, e.g., by drawing using a pointing device (such as a computer mouse), on a reference slice of the images such that the ROI encloses the target tissue (such as a lesion or cell) to be segmented.
  • the ROI includes both the target tissue and a portion of the background.
  • the ROI is also generated by parameters provided by a user.
  • the user can specify a center on the reference image, and provide a pixel value and a radius, then a circular ROI can be automatically generated.
  • an elliptical ROI can be generated by providing the center of the ellipse and the lengths of its major and minor axes. ROIs of other shapes can be generated in a similar manner.
  • the ROI is automatically generated based on a number of selected points near a target boundary specified by the user. If the ROI does not enclose the target, it can be dilated to enclose the target, in response to further user commands.
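The automatically generated circular and elliptical ROIs described above can be sketched as binary masks in numpy (the function names are illustrative, not from the patent):

```python
import numpy as np

def circular_roi(shape, center, radius):
    """Binary mask: 1 inside the circle of the given center/radius, 0 elsewhere."""
    rr, cc = np.ogrid[:shape[0], :shape[1]]
    return ((rr - center[0]) ** 2 + (cc - center[1]) ** 2 <= radius ** 2).astype(np.uint8)

def elliptical_roi(shape, center, a, b):
    """Binary mask for an axis-aligned ellipse with semi-axes a (rows) and b (cols)."""
    rr, cc = np.ogrid[:shape[0], :shape[1]]
    return (((rr - center[0]) / a) ** 2 + ((cc - center[1]) / b) ** 2 <= 1).astype(np.uint8)

mask = circular_roi((64, 64), center=(32, 32), radius=10)
emask = elliptical_roi((64, 64), center=(32, 32), a=5, b=10)
```

A dilated ROI, as mentioned for the case where the initial ROI does not enclose the target, would simply be the same mask regenerated with a larger radius (or grown by morphological dilation).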
  • an ROI is not involved; and, step 201 is omitted.
  • In step 203, tissue types in the ROI are classified, at least preliminarily.
  • the target tissue is a particular organ or tumor therein, such as liver or brain or lung or lymph node.
  • the specification is made by a user in response to a prompt provided at a graphical user interface.
  • step 203 includes classifying one or more voxels within the ROI as one or more tissue types. For example, voxels that constitute a background to tissues of interest are classified as background voxels. Any method may be used to classify voxels during step 203.
  • Steps 211 through 225 are directed to determining initial positions for one or more boundaries of a corresponding one or more target tissues (including target cells). In step 211, it is determined whether the initial boundary is to be determined in a two dimensional slice or a higher dimensional subset of the image data.
  • If the initial boundary is not to be determined in three dimensions, then in step 213 one or more initial boundary curves in two dimensions are determined. For example, in some embodiments, a user draws an initial boundary in or near the target tissue of interest. In other embodiments, the initial boundary is based on an approximate segmentation technique, such as a watershed method with internal and external markers.
  • The watershed transform with internal and external markers is well known in the art and described, for example, in L. Vincent and P. Soille, "Watersheds in digital spaces: an efficient algorithm based on immersion simulations," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 6, pp. 583-598, June 1991.
  • the initial one or more boundaries are determined for a single subset of voxels in the image data, called a reference subset, such as on a reference slice at one axial position and moment in time.
  • a single slice of multiple slices at different axial positions or times or some combination is called a subset of the image data.
  • a subset of the image data can also include other collections of voxels fewer than the entire set of voxels in the image data, whether on the same slice or same time or not.
  • For the three dimensional case, a volume of interest (VOI) is determined.
  • the VOI is determined automatically based on the size of the ROI and the distance of the subset from the subset on which the ROI is defined, such as one slice. In other embodiments, the VOI is based, at least in part, on input received from a user.
  • voxels within the VOI are sampled differently from the original voxels (are re-sampled). For example, in some embodiments the voxels in the VOI are super-sampled by interpolating amplitudes at positions other than those in the original image data. In some embodiments, amplitudes from several voxels are grouped by selecting an average, median, maximum, minimum or some other statistic of the several voxels being grouped.
  • In step 225, one or more initial boundary surfaces are determined in three or more dimensions. For example, in some embodiments a user draws an initial boundary in or near the target tissue of interest.
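The grouping style of re-sampling described above (replacing blocks of voxels with a single statistic) can be sketched in numpy; `group_voxels` and the block-cropping behavior are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def group_voxels(volume, factor, stat=np.mean):
    """Group factor x factor x factor blocks of voxels into one value using the
    chosen statistic (mean, median, max, min, ...), cropping trailing voxels
    that do not fill a whole block."""
    z, y, x = (s - s % factor for s in volume.shape)
    v = volume[:z, :y, :x]
    # reshape so each block occupies axes 1, 3, 5, then reduce over those axes
    v = v.reshape(z // factor, factor, y // factor, factor, x // factor, factor)
    return stat(v, axis=(1, 3, 5))

vol = np.arange(64, dtype=float).reshape(4, 4, 4)
small = group_voxels(vol, 2)            # mean of each 2x2x2 block
peaks = group_voxels(vol, 2, np.max)    # maximum of each 2x2x2 block
```

Super-sampling (the interpolation direction) would instead evaluate amplitudes between grid positions, e.g. with trilinear interpolation.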
  • the initial boundary is based on an approximate segmentation technique, such as a watershed method with internal and external markers extended to three dimensions or more.
  • the initial one or more boundaries are determined for a single subset of voxels in the image data, called a reference subset, such as one or more of several slices along an axial position or at one or more moments in time.
  • the shape of the initial contour depends on the object to be approximated, and can be arbitrary, tracing the boundary of the object on the reference image. Domain knowledge of the nature of the image and how it was taken can be relied upon for drawing the initial contour.
  • the initial contour can also be generated automatically based on prior knowledge, such as anatomical position, in certain circumstances.
  • the initial contour can be inaccurate for the true object boundary, and can be fully inside or fully outside the true object boundary, or can have intersections with the true object boundary.
  • the initial contour can be drawn by an operator with a mouse or other input device.
  • the initialization can be realized by drawing multiple contours, each roughly enclosing the individual component, so that the multiple components can be segmented simultaneously.
  • the reference image can be selected based on the position of the image slice (e.g., in the middle of the cross sectional images for the tumor or organ) or an image in which the tumor or organ has a relatively large area.
  • a local region-based active contour method is used, as described in more detail below.
  • a local region in the vicinity of the initial boundary is defined during step 219.
  • the local region is within a distance r of the current boundary, where r is small compared to a perimeter of the current boundary.
  • the distance r should also be large enough to include a sufficient number of voxels to provide good estimates of statistical properties of the image in the vicinity, such as about 10 voxels or more.
  • the local region assumes any shape with roughly balanced inside and outside regions at each side of the evolving boundary.
  • the localized region-based active contour method uses a local region in which the center point is located on the evolving curve and the radius is specified by r.
  • the local region takes the form of a band or strip extending on both sides of the current contour.
  • r is selected within a range defined by a maximum value rmax and a minimum value rmin.
  • rmax is selected (e.g., about 100 voxels) in order to not include too many voxels in the local region, whereas rmin is selected to ensure a sufficient number of voxels (e.g., greater than about 5) to provide good estimates of statistical properties of the image in the local region.
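The band-shaped local region within distance r of the current contour can be sketched as follows. This is a brute-force illustration; a practical implementation would use a distance transform (e.g., scipy.ndimage.distance_transform_edt), and `local_band` is an assumed name:

```python
import numpy as np

def local_band(contour_mask, r):
    """All voxels within Euclidean distance r of the current contour: the
    band-shaped local region straddling both sides of the evolving boundary.
    `contour_mask` is a boolean image that is True on contour voxels."""
    pts = np.argwhere(contour_mask)        # (N, 2) contour voxel coordinates
    rr, cc = np.indices(contour_mask.shape)
    # squared distance from every voxel to its nearest contour voxel
    d2 = ((rr[..., None] - pts[:, 0]) ** 2 +
          (cc[..., None] - pts[:, 1]) ** 2).min(axis=-1)
    return d2 <= r * r

contour = np.zeros((20, 20), dtype=bool)
contour[5, 5:15] = True                    # a short horizontal contour segment
band = local_band(contour, r=2)
```

Voxels inside the band on either side of the contour contribute to the local inside/outside statistics; voxels farther than r are ignored.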
  • In step 241, the position of the one or more boundaries is refined using an active contour model.
  • An active contour model involves a moveable boundary where movement of the boundary is based on a differential equation called an evolution equation.
  • the evolution equation is a solution to a cost or energy minimization problem.
  • the evolution equation is based on minimization of an energy functional that assigns a cost to properties of a boundary that deviate from desired properties for the boundary.
  • a typical energy functional includes a cost for distance from a maximum gradient, a cost for lack of smoothness, or a cost for deviations from an average intensity on one or both sides of the boundary, among other terms. Any active contour model may be used.
  • the energy functional is based on the use of local regional statistics in step 243, or includes a topographical cost for multiple boundaries in step 249, or some combination.
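A generic regional force term of the Chan-Vese family, competing an inside statistic against an outside statistic over a local band, can be sketched as below. This is not the patent's evolution equation: it omits the smoothness and topographical terms, and `regional_force` is an assumed name.

```python
import numpy as np

def regional_force(image, inside, band):
    """Sign of a local region-based term at each voxel: positive values pull
    the voxel into the object, negative values push it out. `inside` is the
    current binary segmentation and `band` the local region near the boundary."""
    mu_in = image[inside & band].mean()    # statistic of inside voxels in the band
    mu_out = image[~inside & band].mean()  # statistic of outside voxels in the band
    # squared-difference competition between the two regional statistics
    return (image - mu_out) ** 2 - (image - mu_in) ** 2

image = np.zeros((10, 10))
image[3:7, 3:7] = 1.0                      # bright square object on dark background
inside = np.zeros((10, 10), dtype=bool)
inside[4:6, 4:6] = True                    # under-segmented initial region
band = np.ones((10, 10), dtype=bool)       # degenerate band covering everything
F = regional_force(image, inside, band)
```

In a full model, boundary voxels with positive force would be absorbed into the object and those with negative force released, then the statistics recomputed, iterating until the boundary stops moving.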
  • In step 251, it is determined whether the current boundary describes a special class of tissue that receives particular, class-specific treatment. For example, in some embodiments, lung tissue, brain tissue and liver tissue receive class-specific treatments, either during definition of the initial boundary or during or after refinement. If so, then control passes to step 253.
  • step 255 class-specific operations are performed.
  • step 253 includes step 255 to determine the boundary of tissue, such as the liver or other organs, in which the boundaries are known to satisfy some intensity constraint or size and shape constraints (called topological constraints, herein) or position constraints, or some combination, as described in more detail below.
  • step 253 includes step 257 to determine an inner boundary for a tumor in brain tissue, as described in more detail below with reference to FIG. 11 and FIG. 12.
  • step 261 it is determined whether there are to be any manual edits.
  • the current boundary is presented to a user along with a graphical user interface, which the user can operate to indicate a manual edit to be performed. If so, control passes to step 263 to receive commands from a user that indicate one or more manual edits to be performed.
  • the edited boundary may be used directly or propagated to another subset or sent back for further refinement.
  • the user can use a pointing device to move a portion of the boundary, or mark a boundary as suitable for propagation in one or more directions to other or adjacent subsets of the image data.
  • the edited boundary is refined by passing the manually edited boundary back to step 241.
  • step 263 includes determining a mask in step 265.
  • the mask indicates where the current boundary may be refined (e.g., where the voxels of the mask hold a first value, such as "1") and where the current boundary may not be changed (e.g., where the voxels of the mask hold a different second value, such as "0").
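The mask-restricted refinement of step 265 can be sketched as follows. The function name, the array shapes, and the additive level-set update are illustrative assumptions, not the patent's implementation; only the mask convention (1 = may be refined, 0 = frozen) comes from the text above.

```python
import numpy as np

def apply_masked_update(phi, update, mask):
    # Apply an evolution update only where the mask holds 1 ("may be refined");
    # voxels where the mask holds 0 ("may not be changed") are left frozen.
    return phi + update * mask

phi = np.zeros((4, 4))            # level-set values near the current boundary
update = np.ones((4, 4))          # hypothetical active-contour update step
mask = np.zeros((4, 4))
mask[:, :2] = 1.0                 # user allows refinement on the left half only

new_phi = apply_masked_update(phi, update, mask)
```

With this convention the right half of `phi` is untouched no matter how large the computed update is.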
  • FIG. 8 is a block diagram that illustrates an example graphical user interface (GUI) 800 for controlling active contour model, according to an embodiment.
  • FIG. 8 is thus a diagram that illustrates an example screen on a display device, according to one embodiment.
  • the screen includes one or more active areas that allow a user to input data to operate on data.
  • an active area is a portion of a display to which a user can point using a pointing device (such as a cursor and cursor movement device, or a touch screen) to cause an action to be initiated by the device that includes the display.
  • Well known forms of active areas are stand alone buttons, radio buttons, check lists, pull down menus, scrolling lists, and text boxes, among others.
  • Although areas, active areas, windows and tool bars are depicted in FIG. 8 as integral blocks in a particular arrangement on a particular screen for purposes of illustration, in other embodiments, one or more screens, windows or active areas, or portions thereof, are arranged in a different order, are of different types, or one or more are omitted, or additional areas are included, or the user interfaces are changed in some combination of ways.
  • the GUI 800 includes active areas 811, 812, 813, 814, 821, 822, 823, 824, 831, 832, 833 and 834.
  • Global Refine active area 811 causes an apparatus or system to refine every boundary in the current subset of the image data, such as an individual slice.
  • control passes to step 241 to perform a global region-based active contour method, described above.
  • Quick Refine active area 812 causes an apparatus or system to perform a local region-based active contour method on every boundary in the current subset of the image data.
  • Partial Refine active area 813 causes an apparatus or system to perform a local region-based active contour method on a portion of the current subset, e.g., as indicated by a mask.
  • Quick Smooth active area 814 causes an apparatus or system to smooth a boundary in the current subset of the image data, for example by performing an active contour model with a large weight for a smoothness term and restricting r to be very small, e.g., about 5 voxels.
  • Backward active area 821 causes an apparatus or system to propagate a boundary from the current subset of image data to an adjacent subset of image data in a particular direction, e.g., toward the toes of a human subject.
  • When activated, Backward 2 active area 822 causes an apparatus or system to propagate a boundary from the current subset of image data to a different subset of image data adjacent to an adjacent subset in the particular direction.
  • Similarly, when activated, Backward 5 active area 832 or Backward 10 active area 831 causes an apparatus or system to propagate a boundary from the current subset of image data to each successive adjacent subset of image data up to five subsets away, or 10 subsets away, respectively, in the particular direction.
  • Forward active area 823 causes an apparatus or system to propagate a boundary from the current subset of image data to an adjacent subset of image data in a direction opposite to the particular direction, e.g., toward the head of a human subject.
  • Forward 2 active area 824 causes an apparatus or system to propagate a boundary from the current subset of image data to a different subset of image data adjacent to an adjacent subset opposite the particular direction.
  • Forward 5 active area 833, or Forward 10 active area 834 causes an apparatus or system to propagate a boundary from the current subset of image data to each successive adjacent subset of image data up to five subsets away, or 10 subsets away, respectively, in the opposite direction.
  • step 271 it is determined whether some stop condition is satisfied. For example, it is determined whether the entire volume of interest has been segmented. If so, the process ends.
  • the refinement-propagation is iterated for each slice within a volume of interest (VOI) until each slice of the VOI is processed.
  • the VOI can be defined by the first and last images that enclose the entire object, and is specified in step 221.
  • the propagation can also be terminated automatically using certain criteria, such as based on a pre-defined minimum size or intensity range of the segmented object on the current image, or the average intensity difference between the segmented object on a reference image (or on the previous image) and that on the current image.
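The automatic termination criteria just described can be sketched as a simple predicate. The function name, default thresholds, and the specific use of a mean-intensity difference are assumptions made for illustration only.

```python
def propagation_should_stop(obj_size, obj_mean, ref_mean,
                            min_size=10, max_mean_diff=50.0):
    # Stop when the segmented object on the current image falls below a
    # pre-defined minimum size, or when its average intensity differs too
    # much from the segmented object on the reference (or previous) image.
    if obj_size < min_size:
        return True
    if abs(obj_mean - ref_mean) > max_mean_diff:
        return True
    return False
```

A propagation loop would call this after each slice is refined and halt in the corresponding direction once it returns True.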
  • step 271 If instead, it is determined in step 271 that manual input during step 263 indicates that a boundary is ready for propagation or that subsets of the image data within the volume of interest remain un-segmented, then the stop condition is not satisfied. If it is determined that a stop condition is not satisfied, then in step 273 the boundary is propagated to another or adjacent subset, such as the next slice in an axially displaced position or at the next moment of time. In some embodiments, the propagation is performed in two directions separately from the reference image: one towards the head direction and the other towards the foot or toes direction.
  • the refined contour on one image can be projected onto the next image in the propagation direction, which can be simply duplicated from the single previous slice, or based on a prediction model built from all the previous segmentation contours.
  • the propagated contour is refined using a similar active contour method, e.g. by passing control back to step 241.
  • step 273 includes step 275.
  • step 275 the boundary is propagated to the other or adjacent subset in such a way as to satisfy an intensity constraint, a topology constraint, or a position constraint, or some combination.
  • an intensity range associated with the liver is used to exclude any pixels or voxels within the boundary that fall outside the accepted intensity range.
  • liver tissue falls largely within a range from about 0 to about 250 Hounsfield units (HU).
  • boundaries that form any shape or size outside the range expected for the target tissue in the current subset are excluded by the topology constraints.
  • boundaries that are positioned outside the range expected for a particular target tissue are also excluded. For example in CT or MR data, when segmenting a left kidney in axial view, which is approximately located in the right half of the image, if some of the segmentation results fall largely on the other half of the image, then those parts can be removed.
  • step 275 After propagation to the next subset, the boundary or boundaries on the new subset, and not excluded by the constraints of step 275, are refined using an active contour model, such as in step 241.
  • the segmentation result is thus further analyzed to improve the accuracy of the result.
  • Prior knowledge or available information about the objects such as intensity range, position, or shape, are exploited and integrated into the method to improve the performance of the method. For example, regions having a statistical quantity of intensities exceeding a predetermined value or range can be excluded from the segmentation result.
  • the intensity statistics inside the boundary, such as the mean μ0 and standard deviation σ0, are calculated.
  • Those statistics are then used to set thresholds Tlow and Thigh, which are utilized, for example in step 255 in some embodiments, to exclude some portion of the images from the segmentation result. For instance, the pixels whose intensity values are either below the threshold Tlow or above the threshold Thigh are excluded from the segmentation result.
  • the empirical threshold has the form given by Equation 1.
  • Tlow = μ0 − a1 σ0 (Equation 1)
  • Thigh = μ0 + a2 σ0
  • a1 and a2 are positive parameters that are selected empirically.
  • a1 is selected advantageously in the range from about 1.0 to about 2.0.
  • a2 is selected advantageously in the range from about 1.0 to about 3.5.
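The empirical thresholds of Equation 1 can be sketched as follows. The sample values and the particular choices a1 = 1.5 and a2 = 2.5 (within the ranges stated above) are illustrative assumptions.

```python
import numpy as np

def intensity_exclusion(region_values, a1=1.5, a2=2.5):
    # Equation 1: T_low = mu0 - a1*sigma0, T_high = mu0 + a2*sigma0
    mu0 = region_values.mean()
    sigma0 = region_values.std()
    t_low = mu0 - a1 * sigma0
    t_high = mu0 + a2 * sigma0
    # Keep only voxels whose intensity falls inside [T_low, T_high].
    keep = (region_values >= t_low) & (region_values <= t_high)
    return keep, t_low, t_high

vals = np.array([100.0] * 20 + [1000.0])   # 20 parenchyma-like voxels, 1 bright outlier
keep, t_low, t_high = intensity_exclusion(vals)
```

Here the single bright voxel falls above Thigh and is excluded, while the parenchyma-like voxels are retained.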
  • More sophisticated models, such as the Gaussian mixture model, are also employed, in some embodiments, to evaluate the statistics of the largest component.
  • the Gaussian mixture model is used in these embodiments as a more robust estimate of statistics of the organ.
  • the two-component Gaussian mixture model gives a more accurate estimate of the mean μ0 and standard deviation σ0 of liver parenchyma, which is a large component.
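A two-component Gaussian mixture estimate of the largest component's statistics can be sketched with a tiny EM loop; a real system would likely use a library implementation. The initialization at the data extremes, the iteration count, and the synthetic intensities are assumptions.

```python
import numpy as np

def two_component_gmm(x, iters=60):
    # Tiny EM fit of a two-component 1D Gaussian mixture; returns the
    # (mean, std) of the component with the largest weight, as a more
    # robust estimate of the dominant tissue (e.g., parenchyma) statistics.
    x = np.asarray(x, dtype=float)
    mu = np.array([x.min(), x.max()])            # crude initialization
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        r = np.stack([pi[k] / sigma[k]
                      * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2)
                      for k in range(2)])
        r /= r.sum(axis=0, keepdims=True)
        # M-step: re-estimate weights, means, standard deviations
        nk = r.sum(axis=1)
        pi = nk / nk.sum()
        mu = (r * x).sum(axis=1) / nk
        sigma = np.sqrt((r * (x - mu[:, None]) ** 2).sum(axis=1) / nk) + 1e-6
    k = int(np.argmax(pi))                       # largest component
    return mu[k], sigma[k]

# 80 samples near 100 (dominant component) and 20 near 200
x = np.concatenate([100.0 + np.linspace(-2, 2, 80),
                    200.0 + np.linspace(-1, 1, 20)])
mu_hat, sigma_hat = two_component_gmm(x)
```

The returned mean and standard deviation of the dominant component would then feed the thresholds of Equation 1.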
  • the thresholds determined using Equation 1 are used to exclude adjacent organs, such as the heart, stomach, or kidney.
  • the described procedure also includes preprocessing the original set of image data to remove the portions of voxels or pixels not of interest in step 201.
  • the Hounsfield value of the CT intensity is used to preprocess the original images.
  • Some organs (e.g., lung, fat or bone) have distinct Hounsfield ranges from the liver and are separated by simple thresholding techniques.
  • the techniques herein can integrate the above thresholding techniques into both segmentation and refinement process.
  • Prior knowledge or available information of the shape and position of the objects is also used, in some embodiments, to eliminate possible inaccuracies during propagation of the contour from one slice to another.
  • the connected components of the segmented results are first labeled and analyzed based on properties such as area or centroid.
  • the labeled objects which have an area larger than a specified threshold and whose centroid falls into the specified range are selected as candidates of true liver tissue. This avoids, for example, the heart being mistaken as part of the liver.
  • the centroid of the connected components is used to exclude some of the non-target tissues because, in the inferior part of the liver, the centroid of the contour often falls over the left side of the image.
  • the average amplitude value in a labeled object is compared to the average amplitude μ and standard deviation σ of a reference image.
  • the labeled object can be excluded if it is beyond a specified range. For example, in some organ segmentation, an amplitude range is specified as from about μ − aσ to about μ + aσ, where a is a positive parameter in the range from about 1.5 to about 3.
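The connected-component analysis by area and amplitude can be sketched as below. The BFS labeling, the toy grid, and the thresholds are illustrative assumptions; a centroid-position test would be added in the same style.

```python
import numpy as np
from collections import deque

def label_components(mask):
    # 4-connected component labeling via breadth-first search.
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                labels[i, j] = current
                q = deque([(i, j)])
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            q.append((ny, nx))
    return labels, current

def select_candidates(labels, n, image, min_area, mean_ref, sigma_ref, a=2.0):
    # Keep components whose area and mean amplitude satisfy the constraints.
    keep = []
    for lab in range(1, n + 1):
        ys, xs = np.nonzero(labels == lab)
        if ys.size < min_area:
            continue                      # too small: reject
        if abs(image[ys, xs].mean() - mean_ref) > a * sigma_ref:
            continue                      # amplitude out of range: reject
        keep.append(lab)
    return keep

mask = np.zeros((6, 8), dtype=bool)
mask[0:3, 0:3] = True        # large component, liver-like amplitude
mask[4, 6] = True            # tiny component (area too small)
mask[4:6, 0:2] = True        # wrong-amplitude component (e.g., heart)
image = np.zeros((6, 8))
image[0:3, 0:3] = 100.0
image[4, 6] = 100.0
image[4:6, 0:2] = 300.0

labels, n = label_components(mask)
keep = select_candidates(labels, n, image, min_area=3,
                         mean_ref=100.0, sigma_ref=10.0, a=2.0)
```

Only the large, correctly valued component survives both tests.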
  • FIG. 3 is a flow chart that illustrates a method 300 for performing a step 241 of the method of FIG. 2 to perform active contour model based on a local regional statistic with topographical effects, according to an embodiment.
  • method 300 is a particular embodiment of the step 241.
  • step 301 it is determined whether there are multiple boundaries within the region of interest ROI or the volume of interest VOI.
  • the multiple boundaries may be the result of the initial boundary determination in step 213 or step 225, or the result of a previous active contour model refinement during a previous execution of step 241 or previous user inputs, such as in manual edit step 263, or previous class specific operations in step 253, or propagation from a previous subset during step 273. If not, control passes to step 303 to move the boundary by a local regional evolution equation.
  • FIG. 5 is a block diagram that illustrates an example initial boundary, local region and example marker for each of two objects represented in image data for computing topographical effects, according to an embodiment.
  • Two initial boundaries are depicted as curve C1 501 and curve C2 502.
  • a local region 513 of a point 511 on boundary C1 501 is depicted as a dotted ellipse with major axis 515 given by the distance r described above with respect to step 219, such that every point in local region 513 is within the distance r of the boundary C1.
  • the local region assumes a different shape, such as a circle, square, rectangle or band parallel to the curve C1, among others.
  • FIG. 5 also shows a marker M1 region 521 that defines points inside the curve C1 used with the watershed transform to provide the initial boundary C1.
  • FIG. 5 depicts a second initial boundary C2 502 of a second object with a second marker M2 522. Note that the initial boundaries C1 501 and C2 502 intersect and their enclosed regions thus overlap. It is desirable that multiple contours evolve to eliminate such intersections.
  • the local regional evolution equation is determined during step 303 as described here.
  • The parameterized edge-based active contour model originally proposed by M. Kass et al., 1988, is a powerful segmentation model based on energy minimization. The energy term consists of two components: one is used for adjusting smoothness of the curve, and the other is used to attract the contour toward the object boundary.
  • Another typical edge-based model is the geodesic active contour model proposed by Caselles et al., 1997. It also has a level set formulation. Topology change can be naturally handled.
  • Evolving the curve C in the normal direction with speed F amounts to solving the differential equation given by Equation 2a.
  • Equation 2b the energy functional F(c1, c2, C) in the Chan-Vese regional model is defined by Equation 2b.
  • Equation 4 The evolution equation for φ(t, x, y) can be derived as Equation 4.
  • the first term in the bracket is a smoothness term. Ignoring the expanding term ν, the data attachment term, u0(x, y) − ci, can be interpreted as a "force", pushing the evolving curve toward the object boundary where the intensity on either side of the boundary most closely matches the average intensity on either side.
  • the term was also used in the region competition segmentation model, in which the statistics force is expressed in terms of function of the probability distribution.
  • Equation 2b The constants c1 and c2 in Equation 2b are the global mean intensities of the entire interior and exterior regions defined by C, respectively.
  • Incorporating localized statistics into the variational framework can improve segmentation. This can be achieved, in some embodiments, by introducing a characteristic function B(x, y) in terms of a radius parameter r (Equation 5).
  • This function will be 1 when the point y is within a ball of radius r centered at x, and 0 otherwise.
  • B is defined differently to emulate an ellipse, rectangle, square, or band parallel to the boundary, or some other local area.
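The characteristic function and the localized intensity averages it induces can be sketched on a 2D grid. The function names, the use of a binary `inside` mask in place of a signed level-set function, and the toy image are assumptions for illustration.

```python
import numpy as np

def ball_mask(shape, center, r):
    # Characteristic function B(x, y): 1 when a grid point lies within a
    # ball of radius r centered at `center`, 0 otherwise.
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    return ((yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= r * r).astype(float)

def local_means(u0, inside, center, r):
    # Localized averages of u0 inside/outside the boundary, restricted to
    # the ball around a point on the evolving curve.
    B = ball_mask(u0.shape, center, r)
    w_in = B * inside
    w_out = B * (1 - inside)
    c1 = (u0 * w_in).sum() / max(w_in.sum(), 1e-9)
    c2 = (u0 * w_out).sum() / max(w_out.sum(), 1e-9)
    return c1, c2

u0 = np.zeros((7, 7))
u0[:, :4] = 10.0                 # interior intensity
u0[:, 4:] = 50.0                 # exterior intensity
yy, xx = np.mgrid[:7, :7]
inside = (xx <= 3).astype(float) # current "inside" of the boundary

c1, c2 = local_means(u0, inside, center=(3, 3), r=2)
```

With the center on the boundary, the local means recover the interior and exterior intensities rather than global averages.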
  • When n (n > 1) objects of interest are spatially disconnected on the image, segmentation can be achieved by using a single level set function to represent the evolving contour, which can be initialized by manually drawing a closed curve that roughly encloses all the objects on the reference image.
  • A global region-based active contour method, such as the active contour model proposed by Chan-Vese, can be used to guide the evolution of the curve.
  • the local region-based active contour method is used instead of the global region-based active contour method.
  • an active contour model method is utilized, in some embodiments, that is based on, but modifies, area or edge-based active contour model methods, such as the traditional Chan-Vese active contour method or the local region- based active contour method described above.
  • Such a model is modified to incorporate information regarding the proximity among the multiple object contours.
  • the initial contours for the multiple objects as provided above are evolved cooperatively to avoid the intersection of different objects.
  • the energy functional for the active contour is modified to include terms that relate to the proximity among the multiple object contours.
  • topographical distance drawn from watershed method is used to represent such proximity information.
  • the driving forces of the evolving contours come from the combination of a statistical force, which is drawn from the region-based or edge-based active contour model (such as the shape of the individual contours, interior/exterior density of the individual contours, or the local intensity near the individual contours), and a topographical force that relates the boundary to the topographic distance to a regional minimum (marker) associated with each object.
  • the evolution of the contours is mainly driven by the local statistical force. Under this condition, the topographical force is turned off. However, when the two evolving contours become close enough, the topographic force is automatically activated.
  • the statistical and topographic forces both contribute strongly under this condition. For example, when two objects with similar intensity values intersect, the statistical difference between them at the intersecting segment becomes weak, making the topographic force play a dominant role. At the remaining segments of the evolving contours, by contrast, the statistical force plays a dominant role because of the large distance between the non-intersecting contour segments and the usually stronger statistical difference between the inside of the object and the background intensities. This allows the method to take full advantage of the strong statistical difference between the foreground (object) and background at the contour segments that do not interact with each other.
  • Figure 5 illustrates the calculation of topographic force using two objects as an example.
  • the gradient of the original image is computed as a relief function representing edge evidence.
  • Internal and external markers are determined for each contour of the objects, e.g., by eroding the object contour C obtained from the previous image.
  • regional minima M1 and M2 are formed by eroding the initial contours C1 and C2, respectively.
  • the modified relief is achieved by performing a reconstruction by erosion or recursive conditional erosion from the obtained regional minima M1 and M2.
  • the topographical distance L between two voxels x and y is a spatial weighted distance defined by Equation 8, where inf refers to the infimum (i.e., greatest lower bound) of a set.
  • Equation 9a
  • Equation 9b The cost of walking from voxel p to voxel q is related to the lower slope, which is defined as the maximum slope linking p to any of its neighbors of lower altitude, given by Equation 9b.
  • dist() denotes the Chamfer distance
  • LS(x) is the lower slope at point x.
  • Equation 10 For each regional minimum (marker Mi), the corresponding topographic distance transform is defined by Equation 10
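The lower slope of Equation 9b can be sketched directly on a small grid. The use of an 8-neighborhood with Euclidean step lengths as the Chamfer distance, and the toy relief values, are assumptions for illustration.

```python
import numpy as np

def lower_slope(f):
    # LS(p): the maximum slope linking p to any of its neighbors of lower
    # altitude (Equation 9b); slope = altitude drop / neighbor distance.
    h, w = f.shape
    ls = np.zeros_like(f, dtype=float)
    for i in range(h):
        for j in range(w):
            best = 0.0
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if di == 0 and dj == 0:
                        continue
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w and f[ni, nj] < f[i, j]:
                        d = (di * di + dj * dj) ** 0.5  # Chamfer-style distance
                        best = max(best, (f[i, j] - f[ni, nj]) / d)
            ls[i, j] = best
    return ls

f = np.array([[0.0, 1.0],
              [2.0, 3.0]])      # toy relief (e.g., a morphological gradient)
ls = lower_slope(f)
```

A regional-minimum voxel (no lower neighbor) gets LS = 0, so walking within a minimum costs nothing.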
  • the topographic force can further incorporate a weight function h , which can be defined as a function of distance d between the two contours (i.e., the shortest distance between a point on one contour to the other contour).
  • the weighted topographic force for the evolving curve C1 can be defined as h(d12) [(a1 − a2) + (L1 − L2)].
  • the weighting function h can have the following characteristics. As the distance comes within the topographical influence zone (d < dmax), h is larger and can be nearly constant.
  • As the distance grows beyond the influence zone, the function h can be very small and vanish.
  • the topographic forces can function as a complement of statistical forces, when the statistical force becomes too weak to drive the contour to the correct boundary.
  • FIG. 6 is a graph 600 that illustrates a separation weighting function h for the topographical effects, according to an embodiment.
  • the horizontal axis 602 is distance d
  • the vertical axis 604 is multiplicative factor.
  • the trace 610 indicates the weighting function h, which falls off substantially in the vicinity of dmax 612 and is zero by a separation threshold distance 608.
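The exact form of the weighting function h is not specified in the text; a piecewise-linear ramp that is constant inside the influence zone and falls to zero by the separation threshold is assumed here, with the particular values d_max = 4 and d_sep = 8 chosen only for illustration.

```python
def h_weight(d, d_max=4.0, d_sep=8.0):
    # Near-constant inside the topographical influence zone (d <= d_max),
    # falling off beyond d_max, and zero by the separation threshold d_sep.
    if d <= d_max:
        return 1.0
    if d >= d_sep:
        return 0.0
    return (d_sep - d) / (d_sep - d_max)
```

Multiplying the topographic force by this factor activates it automatically as two contours approach each other.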
  • FIG. 7A is a block diagram that illustrates the boundary voxels 701 on a first boundary C1 501 affected by the local regional statistics, according to an embodiment.
  • FIG. 7B is a block diagram that illustrates the boundary voxels 721 on the first boundary C1 501 affected by the topographical effects of a second boundary C2 502, according to an embodiment. Voxels on C1 farther than the separation threshold distance 608 are not affected by the boundary C2 502, and voxels at a distance dmax 612 are just barely affected.
  • u 0 is the image data
  • μ ≥ 0, ν ≥ 0, λ1 > 0 and λ2 > 0 are preselected parameters (constants)
  • H is the Heaviside function
  • δ0 is the one-dimensional Dirac measure
  • c1x and c2x are the localized intensity averages of u0 inside and outside of C, respectively.
  • the topographical force term in the bracket is used to remove overlaps between distinct contours: when the contours are overlapping its value is negative, forcing the evolving contour to recede from the current location.
  • further overall refinement can be achieved by using the slice-by-slice segmentation result as the initial object surface for a 3D localized regional active contour method.
  • a 3D active contour method can be based on a 3D Chan-Vese model, where the curve C in the 2D Chan-Vese model can be replaced with the surface S, and the length(C) of the curve can be replaced with the surface area(S).
  • the inside and outside region can be obtained from a sphere (or any other shape) whose center is located on the surface and the radius is determined by the volume of the object previously determined.
  • This driving force (ai + Li) − (aj + Lj) is called herein the topographic force, since it is based on the topographic distance, as opposed to the statistics force in the regional active contour models.
  • a point on the boundary moves as long as it still resides inside one of the catchment basins of one of the markers, where the topographic force is not equal to zero; and, stops when the value equals zero.
  • the weighted topographic force can be also decomposed into two competing components.
  • the weighted topographic force may be written in terms of an advance force aT and a retreat force rT.
  • the separation distance threshold is determined, as well as the form of the weighting function h(d) so that the multiplicative factor is a small fraction of one at a distance d max .
  • the size of dmax is based on the size of the target object. For large target objects, such as the liver, a separation distance threshold is selected in a range from about 2 voxels to about 16 voxels, while for small target objects, such as a lymph node, a separation distance threshold is selected in a range from about 2 voxels to about 5 voxels.
  • the inside marker (e.g., the regional minimum) is determined for each boundary. This can be achieved by eroding the initial contour, which can be formed manually in the first start frame or propagated from the previous frame.
  • regional minima M1 521 and M2 522 are formed by eroding the initial contours C1 501 and C2 502 with a properly defined radius.
  • the regional minimum can be formed by first finding a centroid of the initial contour, then expanding it for a few voxels. The only requirement is that the obtained regional minima are located totally inside the corresponding objects.
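The erosion of an initial object mask into a marker strictly inside the object can be sketched with a plain 4-neighborhood binary erosion (a stand-in for a morphology library call); the grid and iteration count are assumptions.

```python
import numpy as np

def binary_erode(mask, iterations=1):
    # A voxel survives an erosion pass only if it and all four axial
    # neighbors belong to the mask, so the region shrinks inward.
    m = mask.astype(bool)
    for _ in range(iterations):
        p = np.pad(m, 1, constant_values=False)
        m = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
             & p[1:-1, :-2] & p[1:-1, 2:])
    return m

initial = np.zeros((7, 7), dtype=bool)
initial[1:6, 1:6] = True                       # region enclosed by an initial contour
marker = binary_erode(initial, iterations=1)   # regional minimum, strictly inside
```

The only requirement stated in the text is that the marker lies totally inside the corresponding object, which the erosion guarantees here.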
  • for the boundary C1, M1 521 and M2 522 can be considered as its internal and external markers, respectively.
  • for the boundary C2, M2 522 and M1 521 can be regarded as its internal and external markers, respectively.
  • each of the multiple boundaries is defined by a different level set value.
  • the topographic distance field is determined for each marker. That is, the topographic distance is determined from each point near one of the boundaries to each marker internal to one of the boundaries.
  • the voxels within a regional minimum are excluded from the computation of topographical distance, since by definition such voxels must belong within the boundary of the corresponding object.
  • step 319 the separation distance is determined for the next point on one boundary to the nearest point on each of the other boundaries.
  • step 321 it is determined whether the separation distance for that point is greater than the separation threshold. If so, control passes to step 341.
  • step 341 the boundaries are moved based on an evolution equation that does not include a topographical term. For example, the boundaries are moved based on the evolution equation for the local regional energy functional.
  • Control then passes to step 351.
  • step 351 it is determined whether the current iteration is the last iteration. For example, it is determined whether every point in each boundary has been subjected to the evolution equation. If it is determined that this is not the end of the iteration, then control returns to step 319 to revisit another point in each boundary. If it is determined that the iteration is completed, then control passes to step 361 to determine if an end condition is satisfied, as described above.
  • step 321 If it is determined in step 321 that the separation distance of the current point is not greater than the separation threshold, then control passes to step 331 to include a topographical term in the evolution equation.
  • step 331 includes steps 333 and 335.
  • step 333 a topographical term is included in the evolution equation.
  • the topographical term includes the topographical distance cost of each boundary point from markers of two or more boundaries. For the example contour C1, the topographic force, the topographic distance difference from regional minima M1 and M2, can be written as (a1 + L1) − (a2 + L2).
  • step 335 the separation distance weighted topographical term is included in the evolution equation.
  • the weighted topographic force for the example evolving curve C1 can be written as h(d12) [(a1 − a2) + (L1 − L2)].
  • the weighted topographic force for the example evolving curve C2 can be written as h(d12) [(a2 − a1) + (L2 − L1)].
  • the topographical force term is used in an evolution equation with any other term related to boundary detection known in the art.
  • Known terms related to boundary detection, used in various active contour models known in the art and from which one or more terms may be taken, include, among others: edge-based active contour model terms, such as the snake model and the geodesic active contour model; and region-based active contour model terms, such as active contours without edges, mean separation energy, histogram separation energy, graph partition active contours, and local histogram-based segmentation using the Wasserstein distance.
  • the topographical distance is computed using one or more of the following techniques.
  • the topographical distance function computation is straightforward since there is only one possible path between any two points.
  • the computation is much more complicated.
  • one of two novel approaches is used for computing the topographical distance.
  • FIG. 4A and FIG. 4B are flow charts that illustrate alternative methods for performing a step of the method of FIG. 3 to compute topographical distance, according to various embodiments.
  • FIG. 4A is a flow chart for a first approach 400.
  • the topographic distance is the integration of the current cost and total cost of the optimal subpath. Therefore, a modification of Moore's algorithm, originally designed for computing the shortest path from a node to all others, is used to compute the topographic distance for a relief function f for which we know the set of regional minima mi.
  • Let G represent all the nodes of the grid, U the neighborhood relations, and π(x) the topographical distance for voxel x; the relief function f represents the morphological gradient of the image data.
  • the topographic distance of each voxel from a regional minimum is computed in the following way.
  • step 401 initialization is performed.
  • set π(x) ← ∞ (where ∞ indicates a value at or near the largest value of the computing device).
  • step 403 the interior voxels of the regional minima, i.e., the voxels without a higher neighbor, are put into the set S; all other voxels, including the voxels of the inner boundary ∂mi of the regional minima, are put in the complementary set of non-interior voxels.
  • step 413 for each voxel neighbor z of the minimum voxel x inside the set S, set the value of π(z) as follows:
  • step 411 If it is determined in step 411 that the determination of topographical distance is completed for one regional minimum (e.g., marker M1), then in step 421 the values for the regional minimum at all voxels in the ROI are saved, e.g., stored on a computer-readable medium. If there is another regional minimum not yet processed, then the process above should be repeated for the next regional minimum, until topographical distances to all regional minima have been mapped and saved. Thus, in step 423, it is determined whether there is another regional minimum to process. If not, the process ends. If so, then control passes back to step 401 and following steps to repeat the process for the next regional minimum (e.g., marker M2). One map of topographical distance is determined for every regional minimum.
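The modified Moore computation can be sketched as a Dijkstra-style propagation from one regional minimum to all voxels. The 4-neighborhood, the edge cost dist(p, q) · (LS(p) + LS(q)) / 2, and the toy relief are assumptions made for illustration, not the patent's exact formulation.

```python
import heapq
import numpy as np

def topographic_distance(f, minimum):
    # Propagate the topographic distance pi(x) outward from the voxels of
    # one regional minimum, Dijkstra-style, over a relief f.
    f = np.asarray(f, dtype=float)
    h, w = f.shape
    nbrs = ((1, 0), (-1, 0), (0, 1), (0, -1))

    def ls(i, j):
        # lower slope: max altitude drop to any lower 4-neighbor (unit step)
        best = 0.0
        for di, dj in nbrs:
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and f[ni, nj] < f[i, j]:
                best = max(best, f[i, j] - f[ni, nj])
        return best

    pi = np.full((h, w), float("inf"))
    heap = []
    for (i, j) in minimum:            # minimum voxels start at distance 0
        pi[i, j] = 0.0
        heapq.heappush(heap, (0.0, i, j))
    while heap:
        d, i, j = heapq.heappop(heap)
        if d > pi[i, j]:
            continue                  # stale heap entry
        for di, dj in nbrs:
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                nd = d + 0.5 * (ls(i, j) + ls(ni, nj))
                if nd < pi[ni, nj]:
                    pi[ni, nj] = nd
                    heapq.heappush(heap, (nd, ni, nj))
    return pi

relief = np.array([[0.0, 1.0, 2.0, 3.0]])       # monotone toy relief
pi = topographic_distance(relief, [(0, 0)])     # marker at the lowest voxel
```

Running the function once per marker yields one distance map per regional minimum, as the step above requires.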
  • the process 400 is similar to one described in Meyer 1994, except that the label does not need to be propagated.
  • the topographic distance based on the Moore algorithm can be computed efficiently.
  • FIG. 4B is a flow chart for a second approach 450.
  • the topographic distance ai + Li(x) equals f(x) within the catchment basin
  • the voxel wx satisfies two conditions: (a) it is on the watershed line separating the catchment basins of at least two minima Mi; (b) it is an upstream voxel of x.
  • a voxel q is upstream of a voxel p, if there exists a path of steepest slope between p and q.
  • the catchment basins CBi and the watershed line can be easily obtained from the standard watershed transform. Then the problem is how to find the position of the voxel wx for each voxel x.
  • the standard watershed transform was modified by recording the predecessor position in the optimum path for each voxel in the propagation procedure.
  • the predecessor voxel for each voxel x on the optimum path is indicated at voxel x to form a predecessor map. In this way, the upstream path from the point x upward to the watershed line can be obtained and the position of w x can be determined for each voxel x.
  • the image foresting transform of Falco, 2004, is utilized in addition to or instead of the watershed transform, in some embodiments, to produce the predecessor map.
  • step 451 a predecessor map is formed.
  • step 451 includes step 461 to perform the watershed transform and step 463 to record at each voxel the predecessor voxel on the optimal path (e.g., the path of steepest descent) determined during the watershed transform.
  • step 451 includes step 471 to perform image foresting transform to produce the predecessor map.
  • the predecessor map is employed to compute topographic distance in step 481.
  • the associated wx for each voxel x and Equation 13 are used to compute the topographic distance functions to the corresponding regional minima (e.g., markers).
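A minimal sketch of the predecessor-map idea follows. Here each voxel simply records its steepest-descent neighbour and chains can be followed to a regional minimum; the approaches above instead record predecessors during the watershed or image-foresting propagation, so these names and the greedy descent are illustrative assumptions.

```python
import numpy as np

def predecessor_map(f):
    """For each voxel, record the neighbouring voxel on the path of
    steepest descent (itself if it is a local minimum), 4-connectivity."""
    rows, cols = f.shape
    pred = np.empty((rows, cols, 2), dtype=int)
    for r in range(rows):
        for c in range(cols):
            best, best_q = f[r, c], (r, c)
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                q = (r + dr, c + dc)
                if 0 <= q[0] < rows and 0 <= q[1] < cols and f[q] < best:
                    best, best_q = f[q], q
            pred[r, c] = best_q
    return pred

def descend(pred, x):
    """Follow the predecessor chain from voxel x down to a regional minimum."""
    while tuple(pred[x]) != x:
        x = tuple(pred[x])
    return x
```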
  • Both approaches are computationally efficient.
  • the first one computes only one topographic distance function from one regional minimum.
  • To compute multiple topographic distance functions the first approach is run iteratively.
  • the second approach can compute multiple topographic distance functions from corresponding multiple regional minima in a single iteration. It should be noted that the topographical distance to each regional minimum needs to be computed only once after all the regional minima are specified.
  • Equation 14 the evolution equation is defined as given by Equation 14.
  • c1 and c2 represent either the global means or local means, the latter defined in Equation 6 and Equation 7.
  • the steps to implement this version of the evolution equation include: 1) calculate the average amplitude m inside the contour C; 2) define a local region for the expanding force, for which the center is on the boundary and the radius is specified by a radius r2, which need not be the same as the radius r of the local region-based term; 3) calculate the means m1 and m2 for the parts of the local region for the expanding force that are inside and outside the boundary, respectively; 4) include the expanding term as given by Equation 15.
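Steps 2 and 3 above, the local means on either side of the boundary within a disc of radius r2 centered on a boundary point, might be sketched as follows; the function name and arguments are assumptions, and the expanding term of Equation 15 itself is not reproduced here.

```python
import numpy as np

def local_means(img, inside_mask, center, r2):
    """Compute the mean amplitudes m1 (inside the contour) and m2 (outside)
    over the disc of radius r2 centered on the boundary point `center`.
    `inside_mask` is True for voxels inside the current contour."""
    rr, cc = np.ogrid[:img.shape[0], :img.shape[1]]
    disc = (rr - center[0]) ** 2 + (cc - center[1]) ** 2 <= r2 ** 2
    m1 = img[disc & inside_mask].mean()    # local mean inside the contour
    m2 = img[disc & ~inside_mask].mean()   # local mean outside the contour
    return m1, m2
```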
  • the tissue detection segmentation process includes the following steps.
  • n distinct level set functions to represent n objects of interest in a reference image (a pre-determined image) as an initialization.
  • n (n>1) objects of interest are spatially disconnected on the image
  • segmentation of all of the objects is achieved, in this embodiment, by using a single level set function to represent the evolving contour that can be initialized by manually drawing a closed curve roughly enclosing all of the objects.
  • Region-based active contour models, such as the active contour model without edges, are used to guide the evolution of the curve.
  • manual initialization for each of the connected objects is advantageous and achieved by drawing an initial (closed) curve near the object boundary.
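The manual initialization described above, a hand-drawn closed curve converted into a level set function, can be sketched with an even-odd point-in-polygon test. This is a minimal illustration: a production implementation would build a true signed distance function rather than the ±1 indicator used here.

```python
import numpy as np

def point_in_polygon(y, x, poly):
    """Even-odd ray-casting test for a closed polygon [(y0, x0), ...]."""
    inside = False
    n = len(poly)
    for i in range(n):
        y0, x0 = poly[i]
        y1, x1 = poly[(i + 1) % n]
        if (y0 > y) != (y1 > y):
            # x-coordinate where the edge crosses the horizontal line at y
            xi = x0 + (y - y0) * (x1 - x0) / (y1 - y0)
            if x < xi:
                inside = not inside
    return inside

def init_levelset(shape, poly):
    """Level-set initialization from a hand-drawn closed curve:
    -1 inside the polygon, +1 outside."""
    phi = np.ones(shape)
    for y in range(shape[0]):
        for x in range(shape[1]):
            if point_in_polygon(y, x, poly):
                phi[y, x] = -1.0
    return phi
```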
  • FIG. 9A is a block diagram that illustrates five example subsets of simulated image data in which two objects represented in noisy image data, which are separate in at least one frame, maintain a boundary after contacting each other, according to an embodiment.
  • the five subsets may be five successive or unordered, adjacent or non-adjacent, slices of axially displaced cross sectional images, or five successive or unordered, temporally displaced frames of the same scene, e.g., from a cytometry fluorescence or microscope measurement.
  • the five subsets of image data are labeled 901, 902, 903, 904 and 905, respectively.
  • a background 993 of one relatively dark intensity is also evident.
  • a first object represented by voxels 991a, 991b, 991c, 991d and 991e of relatively light intensity in the five subsets, respectively.
  • a second object represented by voxels 992a, 992b, 992c, 992d and 992e of slightly less light intensity compared to the first object in the five subsets, respectively.
  • the intensity for the upper (first) object, lower (second) object and the background are 100, 90 and 50, respectively.
  • This simulated image data presents a challenge for many segmenting approaches.
  • the simulated image data includes multiple objects well distinguished from the background, but not so well distinguished from each other.
  • the challenge is where to draw the boundary between the two objects when the objects come into contact with each other.
  • Note the area of the first object in the first subset is smaller than in the other subsets, which makes an area conservation assumption, used in some approaches, invalid.
  • a boundary is drawn that is superior to the boundaries produced by previous methods.
  • a coupled global regional active contour with intersection constraint from the prior art is then used. Although the model maintains contours of touching objects disjoint, the contours are unable to accurately trace the interface between the two objects.
  • FIG. 9B is a block diagram that illustrates example boundaries for the five example frames of FIG. 9A based on a local regional statistic of the upper object boundary initialized in the first frame, according to an embodiment. To avoid obscuring the aspects to be illustrated, the bounds of the five subsets are not drawn in FIG. 9B through FIG. 9E. The position of the boundary of the first object is illustrated in the five subsets by boundary lines 910a, 910b, 910c, 910d and 910e, respectively.
  • the local region-based active contour method provides good results where the first object abuts the background but does a poor job where the first and second objects abut (see especially the boundary lines 910b, 910c and 910d where the first and second objects abut).
  • the boundary does not follow the actual change from light to less light voxels.
  • FIG. 9C is a block diagram that illustrates example boundaries for the five example frames of FIG. 9A based on a local regional statistic of the lower object boundary initialized in the first frame, according to an embodiment.
  • the parameter settings are the same as in the coupled local region-based active contour method.
  • the relief image was computed from the morphological gradient of the original image. Then it was transformed into a lower complete image. The results indicated the introduced topographic force significantly improves the segmentation, especially for noisy images, as shown in FIG. 9D and FIG. 9E.
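The relief-image construction mentioned above can be sketched as a 3×3 morphological gradient (local maximum minus local minimum over a neighbourhood); the subsequent lower-complete transform is omitted here for brevity, and the pure-numpy implementation is illustrative only.

```python
import numpy as np

def morphological_gradient(img):
    """3x3 morphological gradient: dilation (local max) minus erosion
    (local min), computed with edge padding so the output matches the
    input shape. Serves as the relief image for the watershed / the
    topographic force."""
    padded = np.pad(img, 1, mode="edge")
    # All nine 3x3-window shifts of the image
    stack = [padded[r:r + img.shape[0], c:c + img.shape[1]]
             for r in range(3) for c in range(3)]
    return np.max(stack, axis=0) - np.min(stack, axis=0)
```

The gradient is large on object interfaces (intensity edges) and near zero in homogeneous regions, which is what makes it a suitable relief for the topographic distance computation.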
  • FIG. 9D is a block diagram that illustrates example boundaries for the five example frames of FIG. 9A based on a local regional statistic and topographic effects for the upper object boundary initialized in the first frame, according to an embodiment.
  • the local region-based active contour method provides good results not only where the first object abuts the background but also where the first and second objects abut (see especially the boundary lines 930b, 930c and 930d where the first and second objects abut).
  • the boundary does follow the actual change from less light to more light voxels.
  • FIG. 9E is a block diagram that illustrates example boundaries for the five example frames of FIG. 9A based on a local regional statistic and topographic effects for the lower object boundary initialized in the first frame, according to an embodiment. Similar favorable results are obtained for boundaries 940b, 940c and 940d.
  • FIG. 10A is a block diagram that illustrates five example frames of simulated image data in which three objects represented in noisy image data, which are separate in at least one frame, maintain a boundary after contacting one or more of the others, according to an embodiment.
  • the intensities for the upper left (first) object, upper right (third) object, lower (second) object, and the background are 100, 100, 90, and 50 respectively.
  • FIG. 10B is a block diagram that illustrates example boundaries for the five example frames of FIG. 10A based on a local regional statistic and topographic effects, for three boundaries initialized in the first frame, according to an embodiment.
  • the dynamic topographic force helps to drive the evolving contour to the object interface, even if the shape of the object is irregular.
  • the illustrated embodiments incorporate one or both of two methods for automatic determination of the inner boundary.
  • one or both methods are included in class specific operations depicted in step 253 in FIG. 2, such as during step 257 to determine the inner boundary for a brain tumor.
  • a pre-classification is done, e.g., as non-viable brain tissue, during step 203.
  • the first method is based on the statistical analysis of the image intensity enclosed by the outer boundary.
  • a two Gaussian mixture model is fit to the image intensity inside the outer boundary.
  • the percentage of each component, its average amplitude value mi, and its standard deviation σi are obtained, where i ∈ {1, 2}.
  • if m1 < m2 and the percentage of each of the two Gaussian components exceeds a threshold T nv, where T nv is in a range from about 0.1 to about 0.5, then the boundary is considered to include non-viable tissue, such as in a brain tumor.
  • T nv is in a range from about 0.1 to about 0.5
  • the boundary is considered to include non-viable tissue, such as in a brain tumor.
  • the average intensity value of one component is less than a fraction a of the other, with a in a range from about 1/3 to about 2/3 for brain tumors, then the boundary is considered to include non-viable tissue.
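The first method's statistical analysis above, a two-Gaussian mixture fit followed by the weight and mean-ratio tests, might be sketched with a minimal 1-D expectation-maximisation loop. The helper names, the min/max initialisation, and the mid-range defaults T_nv = 0.2 and a = 0.5 are assumptions for illustration.

```python
import numpy as np

def fit_two_gaussians(x, iters=200):
    """Minimal 1-D EM fit of a two-component Gaussian mixture.
    Returns (weights, means, stds); components are initialised at the
    data minimum and maximum."""
    x = np.asarray(x, dtype=float)
    mu = np.array([x.min(), x.max()])
    sd = np.full(2, x.std() + 1e-6)
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        like = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        resp = like / like.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and standard deviations
        n = resp.sum(axis=0)
        w = n / n.sum()
        mu = (resp * x[:, None]).sum(axis=0) / n
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-6
    return w, mu, sd

def includes_nonviable(x, T_nv=0.2, a=0.5):
    """Decision rule from the text: flag non-viable tissue when both
    component weights exceed T_nv and one component mean is less than
    the fraction `a` of the other."""
    w, mu, _ = fit_two_gaussians(x)
    return bool(w.min() > T_nv and mu.min() < a * mu.max())
```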
  • the initial inner boundary is then refined by a morphological operation, such as morphological open to get rid of some noise or morphological erode, to situate the inner boundary inside the non-viable tissue.
  • the local region-based active contour method described above is used to refine the contour.
  • FIG. 11 A is a graph that illustrates an example contour map with an inner contour suitable for a brain tumor segmentation, according to an embodiment.
  • the horizontal axis 1102 indicates voxel location in one dimension
  • the vertical axis 1104 indicates voxel location in a perpendicular dimension.
  • the initial outer boundary is given by curve C 1110.
  • the center of the outside boundary C 1110 is given by voxel O 1106.
  • Multiple lines through voxel O 1106 include line 1108a, 1108b and 1108L
  • FIG. 11B is a graph that illustrates an example intensity profile through the map of FIG. 11A.
  • the horizontal axis 1152 indicates distance along the line, from outside the outer boundary, through the object indicated by the outer boundary, to exiting the outer boundary at the opposite side.
  • the vertical axis 1154 indicates intensity of a voxel at that location.
  • the trace 1160 indicates the intensity at the voxels along the line 1108L
  • a section of width Wi 1130 of trace 1160 indicates a high-intensity portion of the trace, the leading edge of which constitutes the outer boundary and the inner edge of which determines an initial position Pi 1131 of the inner boundary.
  • the positions of Wi 1130 and Pi 1131 are depicted in FIG. 11A. Performing a similar analysis on other lines, such as 1108a and 1108b, provides positions of the voxels along the inner boundary, which are connected to form a polygon that represents an initial position of the inner boundary.
  • the second method consists of the following steps. Determine the center O of the outside boundary in the reference image. Create multiple (N) straight lines which pass through the center O. Find the intensity profile along each line i ∈ [1,...,N]. If there is a ring pattern, then determine the width Wi of the ring, and obtain the inner boundary point Pi. Connect all the inner boundary points to obtain a polygon. Use the polygon as an initial inner boundary. Once the initial inner boundary is found, the present invention provides two different strategies for refining and propagating the boundary. A first strategy uses one mask obtained from the above outer and inner boundary of the object in the reference image.
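The ray-casting steps of the second method can be sketched as follows. The fixed mean-intensity threshold used to detect entry into the bright ring, and the function name, are simplifying assumptions; the patent's ring-pattern analysis on each profile is more general.

```python
import numpy as np

def inner_boundary_polygon(img, center, n_rays=36, thresh=None):
    """From the centre O of the outer boundary, walk outward along
    n_rays straight lines; the first voxel brighter than `thresh`
    (the inner edge of the bright ring) is taken as the inner-boundary
    point Pi on that line. The points, in angular order, form the
    initial inner-boundary polygon."""
    if thresh is None:
        thresh = img.mean()
    points = []
    max_r = int(np.hypot(*img.shape))
    for k in range(n_rays):
        ang = 2.0 * np.pi * k / n_rays
        for r in range(1, max_r):
            y = int(round(center[0] + r * np.sin(ang)))
            x = int(round(center[1] + r * np.cos(ang)))
            if not (0 <= y < img.shape[0] and 0 <= x < img.shape[1]):
                break
            if img[y, x] > thresh:  # entered the bright ring: record Pi
                points.append((y, x))
                break
    return points
```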
  • the local region-based active contour method handles holes with the implementation of level set method.
  • the strategy then operates on the amplitudes in the mask area using one of the segmentation procedures described above.
  • a second strategy uses two masks obtained from the above outer and inner boundary separately to define the area of amplitudes to be segmented using one of the procedures described above, e.g., the local region-based active contour method.
  • S out segmentation result for the outside boundary as an initialization
  • S ln segmentation result for the inner boundary as an initialization
  • FIG. 12A is an image that illustrates an example initial boundary 1210 for a brain tumor in one slice 1201 of an MR scan, according to an embodiment.
  • FIG. 12B is an image that illustrates an example refined double boundary 1220 and 1230 for a brain tumor in one slice 1201 of an MR scan, according to an embodiment.
  • FIG. 13 is a block diagram that illustrates a computer system 1300 upon which an embodiment of the invention may be implemented.
  • Computer system 1300 includes a communication mechanism such as a bus 1310 for passing information between other internal and external components of the computer system 1300.
  • Information is represented as physical signals of a measurable phenomenon, typically electric voltages, but including, in other embodiments, such phenomena as magnetic, electromagnetic, pressure, chemical, molecular, atomic and quantum interactions. For example, north and south magnetic fields, or a zero and non-zero electric voltage, represent two states (0, 1) of a binary digit (bit). Other phenomena can represent digits of a higher base.
  • a superposition of multiple simultaneous quantum states before measurement represents a quantum bit (qubit).
  • a sequence of one or more digits constitutes digital data that is used to represent a number or code for a character.
  • information called analog data is represented by a near continuum of measurable values within a particular range.
  • Computer system 1300, or a portion thereof, constitutes a means for performing one or more steps of one or more methods described herein.
  • a sequence of binary digits constitutes digital data that is used to represent a number or code for a character.
  • a bus 1310 includes many parallel conductors of information so that information is transferred quickly among devices coupled to the bus 1310.
  • One or more processors 1302 for processing information are coupled with the bus 1310.
  • a processor 1302 performs a set of operations on information.
  • the set of operations include bringing information in from the bus 1310 and placing information on the bus 1310.
  • the set of operations also typically include comparing two or more units of information, shifting positions of units of information, and combining two or more units of information, such as by addition or multiplication.
  • a sequence of operations to be executed by the processor 1302 constitutes computer instructions.
  • Computer system 1300 also includes a memory 1304 coupled to bus 1310.
  • the memory 1304 such as a random access memory (RAM) or other dynamic storage device, stores information including computer instructions. Dynamic memory allows information stored therein to be changed by the computer system 1300. RAM allows a unit of information stored at a location called a memory address to be stored and retrieved independently of information at neighboring addresses.
  • the memory 1304 is also used by the processor 1302 to store temporary values during execution of computer instructions.
  • the computer system 1300 also includes a read only memory (ROM) 1306 or other static storage device coupled to the bus 1310 for storing static information, including instructions, that is not changed by the computer system 1300.
  • ROM read only memory
  • Also coupled to bus 1310 is a non-volatile (persistent) storage device 1308, such as a magnetic disk or optical disk, for storing information, including instructions, that persists even when the computer system 1300 is turned off or otherwise loses power.
  • Information is provided to the bus 1310 for use by the processor from an external input device 1312, such as a keyboard containing alphanumeric keys operated by a human user, or a sensor.
  • an external input device 1312 such as a keyboard containing alphanumeric keys operated by a human user, or a sensor.
  • a sensor detects conditions in its vicinity and transforms those detections into signals compatible with the signals used to represent information in computer system 1300.
  • bus 1310 Other external devices coupled to bus 1310, used primarily for interacting with humans, include a display device 1314, such as a cathode ray tube (CRT) or a liquid crystal display (LCD), for presenting images, and a pointing device 1316, such as a mouse or a trackball or cursor direction keys, for controlling a position of a small cursor image presented on the display 1314 and issuing commands associated with graphical elements presented on the display 1314.
  • a display device 1314 such as a cathode ray tube (CRT) or a liquid crystal display (LCD)
  • LCD liquid crystal display
  • pointing device 1316 such as a mouse or a trackball or cursor direction keys
  • special purpose hardware such as an application specific integrated circuit (IC) 1320
  • IC application specific integrated circuit
  • the special purpose hardware is configured to perform operations not performed by processor 1302 quickly enough for special purposes.
  • application specific ICs include graphics accelerator cards for generating images for display 1314, cryptographic boards for encrypting and decrypting messages sent over a network, speech recognition, and interfaces to special external devices, such as robotic arms and medical scanning equipment that repeatedly perform some complex sequence of operations that are more efficiently implemented in hardware.
  • Computer system 1300 also includes one or more instances of a communications interface 1370 coupled to bus 1310.
  • Communication interface 1370 provides a two-way communication coupling to a variety of external devices that operate with their own processors, such as printers, scanners and external disks. In general the coupling is with a network link 1378 that is connected to a local network 1380 to which a variety of external devices with their own processors are connected.
  • communication interface 1370 may be a parallel port or a serial port or a universal serial bus (USB) port on a personal computer.
  • USB universal serial bus
  • communications interface 1370 is an integrated services digital network (ISDN) card or a digital subscriber line (DSL) card or a telephone modem that provides an information communication connection to a corresponding type of telephone line.
  • ISDN integrated services digital network
  • DSL digital subscriber line
  • a communication interface 1370 is a cable modem that converts signals on bus 1310 into signals for a communication connection over a coaxial cable or into optical signals for a communication connection over a fiber optic cable.
  • communications interface 1370 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN, such as Ethernet.
  • LAN local area network
  • Wireless links may also be implemented.
  • Carrier waves such as acoustic waves and electromagnetic waves, including radio, optical and infrared waves travel through space without wires or cables.
  • Signals include man-made variations in amplitude, frequency, phase, polarization or other physical properties of carrier waves.
  • the communications interface 1370 sends and receives electrical, acoustic or electromagnetic signals, including infrared and optical signals, that carry information streams, such as digital data.
  • the term computer-readable medium is used herein to refer to any medium that participates in providing information to processor 1302, including instructions for execution. Such a medium may take many forms, including, but not limited to, non-volatile media, volatile media and transmission media.
  • Non-volatile media include, for example, optical or magnetic disks, such as storage device 1308.
  • Volatile media include, for example, dynamic memory 1304.
  • Transmission media include, for example, coaxial cables, copper wire, fiber optic cables, and waves that travel through space without wires or cables, such as acoustic waves and electromagnetic waves, including radio, optical and infrared waves.
  • the term computer-readable storage medium is used herein to refer to any medium that participates in providing information to processor 1302, except for transmission media.
  • Computer-readable media include, for example, a floppy disk, a flexible disk, a hard disk, a magnetic tape, or any other magnetic medium, a compact disk ROM (CD-ROM), a digital video disk (DVD) or any other optical medium, punch cards, paper tape, or any other physical medium with patterns of holes, a RAM, a programmable ROM (PROM), an erasable PROM (EPROM), a FLASH-EPROM, or any other memory chip or cartridge, a carrier wave, or any other medium from which a computer can read.
  • the term non-transitory computer-readable storage medium is used herein to refer to any medium that participates in providing information to processor 1302, except for carrier waves and other signals.
  • Logic encoded in one or more tangible media includes one or both of processor instructions on a computer-readable storage media and special purpose hardware, such as ASIC 1320.
  • Network link 1378 typically provides information communication through one or more networks to other devices that use or process the information.
  • network link 1378 may provide a connection through local network 1380 to a host computer 1382 or to equipment 1384 operated by an Internet Service Provider (ISP).
  • ISP equipment 1384 in turn provides data communication services through the public, world-wide packet-switching communication network of networks now commonly referred to as the Internet 1390.
  • a computer called a server 1392 connected to the Internet provides a service in response to information received over the Internet.
  • server 1392 provides information representing video data for presentation at display 1314.
  • the invention is related to the use of computer system 1300 for implementing the techniques described herein. According to one embodiment of the invention, those techniques are performed by computer system 1300 in response to processor 1302 executing one or more sequences of one or more instructions contained in memory 1304. Such instructions, also called software and program code, may be read into memory 1304 from another computer-readable medium such as storage device 1308. Execution of the sequences of instructions contained in memory 1304 causes processor 1302 to perform the method steps described herein.
  • hardware such as application specific integrated circuit 1320, may be used in place of or in combination with software to implement the invention. Thus, embodiments of the invention are not limited to any specific combination of hardware and software.
  • Computer system 1300 can send and receive information, including program code, through the networks 1380, 1390 among others, through network link 1378 and communications interface 1370.
  • a server 1392 transmits program code for a particular application, requested by a message sent from computer 1300, through Internet 1390, ISP equipment 1384, local network 1380 and communications interface 1370.
  • the received code may be executed by processor 1302 as it is received, or may be stored in storage device 1308 or other non- volatile storage for later execution, or both.
  • computer system 1300 may obtain application program code in the form of a signal on a carrier wave.
  • Various forms of computer readable media may be involved in carrying one or more sequence of instructions or data or both to processor 1302 for execution.
  • instructions and data may initially be carried on a magnetic disk of a remote computer such as host 1382.
  • the remote computer loads the instructions and data into its dynamic memory and sends the instructions and data over a telephone line using a modem.
  • a modem local to the computer system 1300 receives the instructions and data on a telephone line and uses an infra-red transmitter to convert the instructions and data to a signal on an infra-red carrier wave serving as the network link 1378.
  • An infrared detector serving as communications interface 1370 receives the instructions and data carried in the infrared signal and places information representing the instructions and data onto bus 1310.
  • Bus 1310 carries the information to memory 1304 from which processor 1302 retrieves and executes the instructions using some of the data sent with the instructions.
  • the instructions and data received in memory 1304 may optionally be stored on storage device 1308, either before or after execution by the processor 1302.
  • FIG. 14 illustrates a chip set 1400 upon which an embodiment of the invention may be implemented.
  • Chip set 1400 is programmed to perform one or more steps of a method described herein and includes, for instance, the processor and memory components described with respect to FIG. 13 incorporated in one or more physical packages (e.g., chips).
  • a physical package includes an arrangement of one or more materials, components, and/or wires on a structural assembly (e.g., a baseboard) to provide one or more characteristics such as physical strength, conservation of size, and/or limitation of electrical interaction.
  • the chip set can be implemented in a single chip.
  • Chip set 1400 or a portion thereof, constitutes a means for performing one or more steps of a method described herein.
  • the chip set 1400 includes a communication mechanism such as a bus 1401 for passing information among the components of the chip set 1400.
  • a processor 1403 has connectivity to the bus 1401 to execute instructions and process information stored in, for example, a memory 1405.
  • the processor 1403 may include one or more processing cores with each core configured to perform independently.
  • a multi-core processor enables multiprocessing within a single physical package. Examples of a multi-core processor include two, four, eight, or greater numbers of processing cores.
  • the processor 1403 may include one or more microprocessors configured in tandem via the bus 1401 to enable independent execution of instructions, pipelining, and multithreading.
  • the processor 1403 may also be accompanied with one or more specialized components to perform certain processing functions and tasks such as one or more digital signal processors (DSP) 1407, or one or more application- specific integrated circuits (ASIC) 1409.
  • DSP digital signal processors
  • ASIC application- specific integrated circuits
  • a DSP 1407 typically is configured to process real-world signals (e.g., sound) in real time independently of the processor 1403.
  • an ASIC 1409 can be configured to perform specialized functions not easily performed by a general-purpose processor.
  • FPGA field programmable gate arrays
  • controllers not shown
  • other special-purpose computer chips include one or more field programmable gate arrays (FPGA) (not shown), one or more controllers (not shown), or one or more other special-purpose computer chips.
  • the processor 1403 and accompanying components have connectivity to the memory 1405 via the bus 1401.
  • the memory 1405 includes both dynamic memory (e.g., RAM, magnetic disk, writable optical disk, etc.) and static memory (e.g., ROM, CD-ROM, etc.) for storing executable instructions that when executed perform one or more steps of a method described herein.
  • the memory 1405 also stores the data associated with or generated by the execution of one or more steps of the methods described herein.

6. Extensions, modifications and alternatives.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Probability & Statistics with Applications (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present invention concerns techniques for segmenting organs and tumors and cells in image data, including revising a position of a boundary by evaluating an evolution equation comprising differences of amplitude values for voxels on the boundary from a statistical metric of amplitude of voxels inside, and from a statistical metric of amplitude of voxels outside, for a limited region lying within a distance r of the boundary. The distance r is small compared to a perimeter of the first boundary. Some techniques include determining a revised position of a plurality of boundaries by evaluating an evolution equation comprising differences in a first topographical distance from a first marker and a second topographical distance from a second marker for each voxel on the boundary, and also comprising at least one other term associated with boundary detection.
PCT/US2013/036166 2012-04-11 2013-04-11 Techniques for segmentation of organs and tumors and objects WO2013155300A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/394,097 US10388020B2 (en) 2012-04-11 2013-04-11 Methods and systems for segmentation of organs and tumors and objects

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201261622917P 2012-04-11 2012-04-11
US61/622,917 2012-04-11

Publications (1)

Publication Number Publication Date
WO2013155300A1 true WO2013155300A1 (fr) 2013-10-17

Family

ID=49328162

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2013/036166 WO2013155300A1 (fr) 2012-04-11 2013-04-11 Techniques for segmentation of organs and tumors and objects

Country Status (2)

Country Link
US (1) US10388020B2 (fr)
WO (1) WO2013155300A1 (fr)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3109824A1 * 2015-06-24 2016-12-28 RaySearch Laboratories AB System and method for handling image data
CN106466183A * 2015-08-11 2017-03-01 三星电子株式会社 Workstation, medical imaging apparatus including the same, and method of controlling the same
CN107004300A * 2014-12-08 2017-08-01 皇家飞利浦有限公司 Virtual interactive definition of volumetric shapes
JP2017536856A * 2014-10-21 2017-12-14 无錫海斯凱尓医学技術有限公司Wuxi Hisky Medical Technologies Co.,Ltd. Method and apparatus for selecting a detection region, and elasticity detection system
CN113592890A * 2021-05-28 2021-11-02 北京医准智能科技有限公司 CT image liver segmentation method and apparatus
EP3982321A4 * 2019-06-10 2023-02-22 Obshchestvo S Ogranichennoj Otvetstvennost'Yu "Medicinskie Skrining Sistemy" System for processing X-ray images and providing a result to a user

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013080071A1 * 2011-11-28 2013-06-06 Koninklijke Philips Electronics N.V. Image processing apparatus
US9218524B2 (en) * 2012-12-06 2015-12-22 Siemens Product Lifecycle Management Software Inc. Automatic spatial context based multi-object segmentation in 3D images
CA2902161C 2013-03-15 2021-05-04 Seno Medical Instruments, Inc. System and method for diagnostic vector classification support
WO2015009869A1 (fr) * 2013-07-17 2015-01-22 Hepatiq, Llc Systèmes et procédés permettant de déterminer la fonction hépatique à partir de scintigrammes du foie
KR101571151B1 (ko) * 2014-08-14 2015-12-07 주식회사 케이유엠텍 컴퓨터 단층촬영 이미지의 아티팩트 제거 방법
JP6411183B2 (ja) * 2014-11-13 2018-10-24 キヤノンメディカルシステムズ株式会社 医用画像診断装置、画像処理装置及び画像処理プログラム
CN107407646A (zh) * 2015-03-03 2017-11-28 株式会社尼康 测量处理装置、x射线检查装置、测量处理方法、测量处理程序及结构物的制造方法
WO2016157457A1 (fr) * 2015-03-31 2016-10-06 国立大学法人東北大学 Dispositif de traitement d'image, procédé de traitement d'image et programme de traitement d'image
US10248839B2 (en) * 2015-11-30 2019-04-02 Intel Corporation Locating objects within depth images
TWI586954B (zh) * 2015-12-09 2017-06-11 Metal Industries Research & Development Centre Device for detecting human papillomavirus (HPV) infection in cells and detection method thereof
US10420523B2 (en) * 2016-03-21 2019-09-24 The Board Of Trustees Of The Leland Stanford Junior University Adaptive local window-based methods for characterizing features of interest in digital images and systems for practicing same
US9799120B1 (en) * 2016-05-09 2017-10-24 Siemens Healthcare Gmbh Method and apparatus for atlas/model-based segmentation of magnetic resonance images with weakly supervised examination-dependent learning
JP6829437B2 (ja) * 2017-03-03 2021-02-10 The University of Tokyo In-vivo motion tracking device
US10762636B2 (en) * 2017-06-29 2020-09-01 HealthMyne, Inc. Systems and methods for volumetric segmentation of structures in planar medical images
US11043296B2 (en) 2018-11-05 2021-06-22 HealthMyne, Inc. Systems and methods for semi-automatic tumor segmentation
WO2020112078A1 (fr) * 2018-11-26 2020-06-04 Hewlett-Packard Development Company, L.P. Geometry-aware interactive design
US10991133B2 (en) * 2019-01-25 2021-04-27 Siemens Healthcare Gmbh Volume rendering from three-dimensional medical data using quantum computing
US11615881B2 (en) * 2019-11-12 2023-03-28 Hepatiq, Inc. Liver cancer detection
TWI774120B (zh) * 2020-11-10 2022-08-11 China Medical University Biological tissue image analysis method and biological tissue image analysis system
US11580337B2 (en) * 2020-11-16 2023-02-14 International Business Machines Corporation Medical object detection and identification
US11854192B2 (en) 2021-03-03 2023-12-26 International Business Machines Corporation Multi-phase object contour refinement
US11923071B2 (en) 2021-03-03 2024-03-05 International Business Machines Corporation Multi-phase object contour refinement
US11776132B2 (en) 2021-03-03 2023-10-03 International Business Machines Corporation Multi-phase object contour refinement
CN115482246B (zh) * 2021-05-31 2023-06-16 ShuKun (Shanghai) Medical Technology Co., Ltd. Image information extraction method and device, electronic device, and readable storage medium
CN116580194B (zh) * 2023-05-04 2024-02-06 Shandong Institute of Artificial Intelligence Blood vessel segmentation method using a soft-attention network fused with geometric information

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040153128A1 (en) * 2003-01-30 2004-08-05 Mitta Suresh Method and system for image processing and contour assessment
US20070160277A1 (en) * 2006-01-12 2007-07-12 Siemens Corporate Research, Inc. System and Method For Segmentation of Anatomical Structures In MRI Volumes Using Graph Cuts
US20070165916A1 (en) * 2003-11-13 2007-07-19 Guy Cloutier Automatic multi-dimensional intravascular ultrasound image segmentation method
US20080181479A1 (en) * 2002-06-07 2008-07-31 Fuxing Yang System and method for cardiac imaging
US20080317314A1 (en) * 2007-06-20 2008-12-25 Schwartz Lawrence H Automated Determination of Lymph Nodes in Scanned Images

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110286654A1 (en) * 2010-05-21 2011-11-24 Siemens Medical Solutions Usa, Inc. Segmentation of Biological Image Data

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ANGELINI ET AL.: "Brain MRI Segmentation with Multiphase Minimal Partitioning: A Comparative Study", INTERNATIONAL JOURNAL OF BIOMEDICAL IMAGING, vol. 2007, 19 April 2007 (2007-04-19), Retrieved from the Internet <URL:http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2211521/pdf/IJBI2007-10526.pdf> [retrieved on 20130730] *
JAYADEVAPPA ET AL.: "A Hybrid Segmentation Model based on Watershed and Gradient Vector Flow for the Detection of Brain Tumor", INTERNATIONAL JOURNAL OF SIGNAL PROCESSING, IMAGE PROCESSING AND PATTERN RECOGNITION, vol. 2, no. 3, September 2009 (2009-09-01), Retrieved from the Internet <URL:http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.177.5721&rep=rep1&type=pdf> [retrieved on 20130730] *
XU ET AL.: "Automated temporal tracking and segmentation of lymphoma on serial CT examinations", MEDICAL PHYSICS, vol. 38, November 2011 (2011-11-01), Retrieved from the Internet <URL:http://www.stanford.edu/~rubin/pubs/MedPhys-Xu-Rubin-LesionTracking.pdf> [retrieved on 20130730] *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017536856A (ja) * 2014-10-21 2017-12-14 Wuxi Hisky Medical Technologies Co., Ltd. Method and device for selecting a detection region, and elasticity detection system
EP3210541A4 (fr) * 2014-10-21 2018-07-04 Wuxi Hisky Medical Technologies Co. Ltd. Method and device for selecting a detection area, and elasticity detection system
US10925582B2 2014-10-21 2021-02-23 Wuxi Hisky Medical Technologies Co., Ltd. Method and device for selecting detection area, and elasticity detection system
CN107004300A (zh) * 2014-12-08 2017-08-01 Koninklijke Philips N.V. Virtual interactive definition of volumetric shapes
CN107004300B (zh) * 2014-12-08 2021-01-22 Koninklijke Philips N.V. Virtual interactive definition of volumetric shapes
EP3109824A1 (fr) * 2015-06-24 2016-12-28 RaySearch Laboratories AB System and method for handling image data
CN106466183A (zh) * 2015-08-11 2017-03-01 Samsung Electronics Co., Ltd. Workstation, medical imaging apparatus including the same, and control method therefor
EP3982321A4 (fr) * 2019-06-10 2023-02-22 Obshchestvo S Ogranichennoj Otvetstvennost'Yu "Medicinskie Skrining Sistemy" System for processing radiographic images and providing the result to a user
CN113592890A (zh) * 2021-05-28 2021-11-02 Beijing Yizhun Intelligent Technology Co., Ltd. Liver segmentation method and device for CT images
CN113592890B (zh) * 2021-05-28 2022-02-11 Beijing Yizhun Intelligent Technology Co., Ltd. Liver segmentation method and device for CT images

Also Published As

Publication number Publication date
US20150078640A1 (en) 2015-03-19
US10388020B2 (en) 2019-08-20

Similar Documents

Publication Publication Date Title
US10388020B2 (en) Methods and systems for segmentation of organs and tumors and objects
Chen et al. A review of thyroid gland segmentation and thyroid nodule segmentation methods for medical ultrasound images
Zhao et al. An overview of interactive medical image segmentation
US20190355117A1 (en) Techniques for Segmentation of Lymph Nodes, Lung Lesions and Other Solid or Part-Solid Objects
Masood et al. A survey on medical image segmentation
Mansoor et al. A generic approach to pathological lung segmentation
US10147185B2 (en) Interactive segmentation
Roy et al. A review on automated brain tumor detection and segmentation from MRI of brain
US7876947B2 (en) System and method for detecting tagged material using alpha matting
Roy et al. International journal of advanced research in computer science and software engineering
Jung et al. Deep learning for medical image analysis: Applications to computed tomography and magnetic resonance imaging
Florin et al. Globally optimal active contours, sequential Monte Carlo and on-line learning for vessel segmentation
Chung et al. Accurate liver vessel segmentation via active contour model with dense vessel candidates
Kaftan et al. Fuzzy pulmonary vessel segmentation in contrast enhanced CT data
Alirr et al. Survey on liver tumour resection planning system: steps, techniques, and parameters
Ali et al. Image-selective segmentation model for multi-regions within the object of interest with application to medical disease
Imielinska et al. Hybrid segmentation of anatomical data
Hu et al. Interactive semiautomatic contour delineation using statistical conditional random fields framework
He et al. Medical image segmentation
Bhadauria et al. Hemorrhage detection using edge-based contour with fuzzy clustering from brain computed tomography images
Bozkurt et al. A texture-based 3D region growing approach for segmentation of ICA through the skull base in CTA
Imielinska et al. Hybrid segmentation of the visible human data
He et al. Semi-automatic 3D segmentation of brain structures from MRI
Assley et al. A comparative study on medical image segmentation methods
Ogul et al. Unsupervised rib delineation in chest radiographs by an integrative approach

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 13775870

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry into European phase

Ref document number: 13775870

Country of ref document: EP

Kind code of ref document: A1