US20120082354A1 - Establishing a contour of a structure based on image information


Info

Publication number
US20120082354A1
Authority
US
United States
Prior art keywords
image
establishing
subsystem
structure
feature information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/377,401
Inventor
Jochen Peters
Olivier Ecabert
Carsten Meyer
Reinhard Kneser
Juergen Weese
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Koninklijke Philips NV
Original Assignee
Koninklijke Philips NV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to EP09163572.2
Application filed by Koninklijke Philips NV
Priority to PCT/IB2010/052756 (published as WO2010150156A1)
Assigned to KONINKLIJKE PHILIPS ELECTRONICS N.V. (assignors: ECABERT, OLIVIER; KNESER, REINHARD; WEESE, JUERGEN; MEYER, CARSTEN; PETERS, JOCHEN)
Publication of US20120082354A1
Application status: Abandoned


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/149Segmentation; Edge detection involving deformable models, e.g. active contour models
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/12Edge-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/97Determining parameters from multiple pictures
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • G06T2207/20116Active contour; Active surface; Snakes
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing

Abstract

A system for establishing a contour of a structure is disclosed. An initialization subsystem (1) is used for initializing an adaptive mesh representing an approximate contour of the structure, the structure being represented at least partly by a first image, and the structure being represented at least partly by a second image. A deforming subsystem (2) is used for deforming the adaptive mesh, based on feature information of the first image and feature information of the second image. The deforming subsystem comprises a force-establishing subsystem (3) for establishing a force acting on at least part of the adaptive mesh, in dependence on the feature information of the first image and the feature information of the second image. A transform-establishing subsystem (4) is used for establishing a coordinate transform defining a registration between the first image, the second image, and the adaptive mesh.

Description

    FIELD OF THE INVENTION
  • The invention relates to image segmentation and image registration. The invention further relates to establishing a contour of a structure.
  • BACKGROUND OF THE INVENTION
  • Image segmentation generally concerns selection and/or separation of a selected part of a dataset. Such a dataset notably represents image information of an imaged object and the selected part relates to a specific part of the image. The dataset is in general a multi-dimensional dataset that assigns data values to positions in a multi-dimensional geometrical space. For example, such datasets can be two-dimensional or three-dimensional images where the data values are pixel values, such as brightness values, grey values or color values, assigned to positions in a two-dimensional plane or a three-dimensional volume.
  • U.S. Pat. No. 7,010,164 discloses a method of segmenting a selected region from a multi-dimensional dataset. The method comprises the steps of setting up a shape model representing the general outline of a selected region and setting up an adaptive mesh. The adaptive mesh represents an approximate contour of the selected region. The adaptive mesh is initialized on the basis of the shape model. Furthermore, the adaptive mesh is deformed in dependence on the shape model and on feature information of the selected region. In this way, a more precise contour of the selected region is obtained.
  • SUMMARY OF THE INVENTION
  • It would be advantageous to have an improved system for establishing a contour of a structure. To better address this concern, in a first aspect of the invention a system is presented that comprises
  • an initialization subsystem for initializing an adaptive mesh representing an approximate contour of the structure, the structure being represented at least partly by a first image, and the structure being represented at least partly by a second image;
  • a deforming subsystem for deforming the adaptive mesh, based on feature information of the first image and feature information of the second image.
  • Since the adaptive mesh is deformed based on both feature information of the first image and feature information of the second image, any information about the structure which is missing or unreliable in the first image may be obtained from the feature information of the second image. In this way, the feature information of the two images is complementary. The result may be a deformed adaptive mesh reflecting the shape information represented by both images. The contour may describe an outline of the structure. The structure may be, for example, a body, an organ, or a mass. The contour may describe a surface in a three-dimensional space. Alternatively, the contour may describe a line or curve in a two-dimensional space.
  • The deforming subsystem may be arranged for deforming the adaptive mesh, based on feature information of a plurality of images. In particular, an adaptive mesh may be deformed based on feature information of more than two images.
  • The deforming subsystem may comprise a force-establishing subsystem for establishing a force acting on at least part of the adaptive mesh in dependence on the feature information of the first image and the feature information of the second image. By considering the feature information of the first image and the feature information of the second image in establishing the force acting on a part of the adaptive mesh, the force may be more reliable. Moreover, the resulting deformed mesh is based on information from the two images. This may make the resulting deformed mesh more accurate. The feature information of the images may be weighted based on reliability criteria, for example, such that feature information which is regarded to be reliable determines the force to a greater extent than feature information which is regarded to be less reliable.
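As a simple illustration of reliability-weighted forces, the per-image force contributions at a mesh point can be combined as a normalized weighted average. This is a minimal sketch only; the force vectors and reliability weights below are made-up values, and the patent leaves the actual reliability criteria open.

```python
import numpy as np

def combine_forces(forces, reliabilities):
    """Weighted average of per-image forces acting on one mesh point.

    Feature information judged more reliable determines the resulting
    force to a greater extent; the weights are normalized to sum to one.
    """
    w = np.asarray(reliabilities, dtype=float)
    w = w / w.sum()
    return np.sum(w[:, None] * np.asarray(forces, dtype=float), axis=0)

# Hypothetical responses: a strong boundary cue in image 1, a weaker,
# differently oriented cue in image 2.
f1 = np.array([1.0, 0.0])
f2 = np.array([0.0, 1.0])
combined = combine_forces([f1, f2], reliabilities=[0.8, 0.2])
```

Here the contribution from the first image dominates because its feature information is weighted as more reliable.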
  • The force-establishing subsystem may be arranged for establishing the force acting on at least part of the adaptive mesh also in dependence on a type of the first image and/or a type of the second image. The type of an image may comprise an imaging modality or a clinical application of the image or, for example, a body part which is imaged. Different ways of establishing the force may be used for these different types of images, which allows the adaptive mesh to be adapted based on a plurality of images of different types.
  • The deforming subsystem may comprise a feature information-extracting subsystem for extracting feature information from the respective images, using respective models trained for the particular imaging modalities or protocols used to acquire the respective images. This helps to deform the mesh in an appropriate manner. Moreover, the feature information may be used by the force-establishing subsystem to establish the forces.
  • The system may comprise a transform-establishing subsystem for establishing a coordinate transform defining a registration between the first image, the second image, and/or the adaptive mesh, based on feature information in the respective image and the adaptive mesh. By involving the adaptive mesh in the generation of the coordinate transform, a more accurate registration may be obtained. The more accurate registration may in turn improve the adaptive mesh resulting from the deforming subsystem.
  • The coordinate transform may define a relation between a coordinate system of the first image and a coordinate system of the second image. Such a coordinate transform may be used to define a registration between the two images.
  • The coordinate transform may be an affine transform. Such an affine transform may represent global differences, such as translation, rotation, and/or scaling. Such global differences then need not hinder the model adaptation.
  • The system may comprise a general outline-providing subsystem for providing a shape model representing a general outline of the structure. Moreover, the deforming subsystem may be arranged for deforming the adaptive mesh, based also on the shape model. For example, the adaptive mesh is deformed on the basis of an internal energy and the internal energy is defined in dependence on the shape model. This may help to avoid, for example, that image features in the multi-dimensional dataset would drive the adaptive mesh to false boundaries, so-called false attractors.
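A shape-model term of this kind is often realized as an internal energy that penalizes deviation of the mesh vertices from the shape model, discouraging the mesh from snapping to false attractors. The quadratic penalty below is a common choice but only an illustrative assumption; `internal_energy` and its inputs are hypothetical, not the patent's exact formulation.

```python
import numpy as np

def internal_energy(mesh_vertices, shape_model_vertices, alpha=1.0):
    """Quadratic penalty on deviation from the shape model (a sketch)."""
    d = np.asarray(mesh_vertices) - np.asarray(shape_model_vertices)
    return alpha * float(np.sum(d ** 2))

# A small triangle mesh, slightly deformed away from its shape model.
mesh = np.array([[0.0, 0.0], [1.1, 0.0], [1.0, 1.2]])
model = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
e = internal_energy(mesh, model)
```

A larger energy indicates a larger departure from the general outline, so minimizing the total of internal and external energy balances image evidence against the shape prior.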
  • The first image and the second image may have been acquired using, for example, the same or two different imaging modalities from the group of X-ray, 3D rotational X-ray, CT, MR, ultrasound, PET, SPECT, and magnetic particle imaging. For example, the system may comprise an imaging modality of that group to perform the image acquisitions. Alternatively or additionally, an input may be provided for retrieving the image data from elsewhere.
  • The first and second image may be acquired using two different imaging modalities. The different features which appear in the images of different imaging modalities may be combined or may complement each other in the model adaptation.
  • The first image may have been acquired while an imaged subject contained a structure it did not contain during the acquisition of the second image. For example, the subject contained injected contrast agent during the acquisition of the first image, but not during the acquisition of the second image. The order in which images are acquired is not of importance here.
  • The first image and the second image may relate to a same patient. The different features visible in the first image and second image then may be assumed to relate to the same body structure.
  • A medical imaging workstation may comprise the system set forth.
  • A medical imaging acquisition apparatus may comprise a scanner for acquiring the first image and the system set forth.
  • A method of establishing a contour of a structure may comprise
  • initializing an adaptive mesh representing an approximate contour of the structure, the structure being represented at least partly by a first image, and the structure being represented at least partly by a second image; and
  • deforming the adaptive mesh, based on feature information of the first image and feature information of the second image.
  • A computer program product may comprise instructions for causing a processor system to perform the steps of the method set forth.
  • It will be appreciated by those skilled in the art that two or more of the above-mentioned embodiments, implementations, and/or aspects of the invention may be combined in any way deemed useful.
  • Modifications and variations of the image acquisition apparatus, of the workstation, of the system, and/or of the computer program product, which correspond to the described modifications and variations of the system, can be carried out by a person skilled in the art on the basis of the present description.
  • A person skilled in the art will appreciate that the method may be applied to multidimensional image data, e.g., to 2-dimensional (2-D), 3-dimensional (3-D) or 4-dimensional (4-D, for example 3-D+time) images, acquired by various acquisition modalities such as, but not limited to, standard X-ray Imaging, Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Ultrasound (US), Positron Emission Tomography (PET), Single Photon Emission Computed Tomography (SPECT), and Nuclear Medicine (NM).
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • These and other aspects of the invention will be further elucidated and described with reference to the drawing, in which
  • FIG. 1 shows a block diagram of a system for establishing a contour of a structure; and
  • FIG. 2 shows a block diagram of a method of establishing a contour of a structure.
  • DETAILED DESCRIPTION OF EMBODIMENTS
  • Segmentation of medical images (2D, 3D, 4D) from one imaging modality can be performed automatically using shape-constrained deformable models. Depending on the task and the imaging protocol, some structures of interest may be invisible or hard to define. If no protocol is available that provides all the needed information, then a multi-modal approach may be helpful in solving the segmentation task. Here, complementary information about all structures of interest may provide a comprehensive segmentation.
  • Inter-image registration parameters may be known to the system a priori. In such a case, the relation between the coordinate systems of the different images is known beforehand. For each available image, image-based forces may be established that jointly “pull” at least part of a common mesh model towards the structure of interest. Simultaneous image segmentation and registration is also possible. For example, the relations between the image coordinate systems and the mesh coordinate system are refined iteratively in alternation with the segmentation.
  • For improved viewing, regions with visible anatomical structures can be identified in images obtained from the different imaging modalities and combined in a single composite image. For example, the “locally most informative” image patches (i.e., image parts from the modality that provides the most convincing boundary information at a given image location) may be fused to obtain one overall display showing image information from various sources. A user interface may be arranged for enabling the user to switch (locally, e.g., per displayed patch) between the complementary images to check or compare the information provided by each imaging modality. For example, information from both images may be encoded into a model comprising the contour or adaptive mesh, and the combined model may be displayed. For example, calcifications may be extracted from an image without contrast agent, whereas wall thickness may be extracted from another image with contrast agent.
  • If the inter-image registration parameters are not available a priori, or if it is desired to further improve these parameters, a registration process may be performed. Such registration process may comprise matching the images onto each other by means of rigid, affine, or even non-linear coordinate transforms. This may be done by optimizing a global match function between the images being registered. Match functions may establish some sort of correlation (or an objective function) between the grey values (image intensities) of the images. An optimal match may be achieved by maximizing this correlation or objective function. Alternatively (or thereafter), “forces” may be defined which may direct the first and/or the second image to a common mesh based on image features. Such a method arranged for aligning the images with the common mesh will be described in more detail further below.
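A grey-value match function of the kind described above can be sketched as a normalized cross-correlation maximized over candidate transforms. The example below restricts the search to integer horizontal shifts for brevity and uses synthetic images; a real registration would optimize over rigid, affine, or non-linear parameters.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized grey-value images."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

rng = np.random.default_rng(0)
fixed = rng.random((32, 32))                 # synthetic "first image"
moving = np.roll(fixed, shift=3, axis=1)     # "second image", shifted 3 pixels

# Exhaustive search over a small shift range; the optimal match maximizes NCC.
shifts = range(-5, 6)
best_shift = max(shifts, key=lambda s: ncc(fixed, np.roll(moving, s, axis=1)))
```

Shifting the moving image back by three pixels restores a perfect correlation of 1.0, so the search recovers the known displacement.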
  • FIG. 1 illustrates a system for establishing a contour of a structure. Such a system may be implemented in hardware or software or a combination thereof. The system may comprise an input 6 for retrieving image data from an imaging modality 7, for example, or from a database system 8. Such a database system 8 may comprise a PACS server, for example. An image may comprise a multi-dimensional array of grey-values, for example. The grey-values represent objects scanned using an imaging modality, such as computed tomography (CT), or magnetic resonance (MR) imaging.
  • An initialization subsystem 1 is arranged for initializing an adaptive mesh. The adaptive mesh represents an approximate contour of a structure. Such a structure could be an organ of a human body, for example. Other possible examples include an organ of an animal, or a non-organic object in non-destructive testing. The structure may be represented at least partly by a first image obtained via the input 6. In addition, the structure may be represented at least partly by a second image obtained via the input 6. The initialization subsystem may be arranged for using a default shape with which the adaptive mesh may be initialized. Alternatively, information in the first and second images may be used, for example by identifying a region having a different grey scale than the remainder of the image. The adaptive mesh may then be initialized as a contour of said region. The adaptive mesh may be initialized based on a shape model provided by the general outline-providing subsystem 5. For example, the adaptive mesh is initialized to be equal to a shape model provided by the general outline-providing subsystem 5. The initialization subsystem may further be arranged for initializing the adaptive mesh with a previous deformed model. For example, deformation may occur in iterations.
  • The deformation of the adaptive mesh may be performed by a deforming subsystem 2. The deformation may be based on feature information of the first image and feature information of the second image. This way, two information sources are used to obtain feature information. Feature information may include, for example, gradients. Other kinds of feature information are known in the art per se.
  • The deforming subsystem may comprise a force-establishing subsystem 3 for establishing a force acting on at least part of the adaptive mesh in dependence on the feature information of the first image and the feature information of the second image. For example, the first image may comprise a first feature close to a part of the adaptive mesh. However, the second image may also comprise a second feature close to that part of the adaptive mesh. The force-establishing subsystem 3 may take both features into account when establishing the force value. When only one of the images comprises a relevant feature close to a part of the adaptive mesh, the force acting on that part of the adaptive mesh may be established based on only this feature. The deforming subsystem 2 may comprise a feature information-extracting subsystem 11 for extracting feature information from the respective images, using respective models trained for the particular imaging modalities or protocols used to acquire the respective images. This way, the particular feature information provided by each modality can be exploited.
  • The system may further comprise a transform-establishing subsystem 4 for establishing a coordinate transform defining a registration between the first image, the second image, and the adaptive mesh. This registration may be based on the adaptive mesh and the feature information in one or more of the images. Points of the adaptive mesh may be mapped onto points in the first or second image. A coordinate transform may be established which defines a relation between a coordinate system of the first image and a coordinate system of the second image. For example, the adaptive mesh may be defined with respect to the coordinate system of one of the images, and a coordinate transform may be established mapping the points of the mesh and of the image onto points of another image. To this end, feature information of that other image may be used and compared with the shape of the adaptive mesh. It is also possible to establish coordinate transforms from the adaptive mesh to different images; the transform from one image to the other can be derived from the transforms mapping the adaptive mesh to each of the images.
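Deriving the image-to-image transform from the two mesh-to-image transforms amounts to the composition T2 ∘ T1⁻¹. A sketch with homogeneous 2D affine matrices; the numbers are illustrative only.

```python
import numpy as np

def affine(M, t):
    """Build a 3x3 homogeneous matrix from a 2x2 linear part and a translation."""
    H = np.eye(3)
    H[:2, :2] = M
    H[:2, 2] = t
    return H

# Hypothetical mesh-to-image transforms for two images.
T1 = affine(np.array([[1.2, 0.0], [0.0, 1.2]]), np.array([5.0, -2.0]))  # mesh -> image 1
T2 = affine(np.array([[0.0, -1.0], [1.0, 0.0]]), np.array([0.0, 3.0]))  # mesh -> image 2

# Transform mapping image-1 coordinates to image-2 coordinates.
T_12 = T2 @ np.linalg.inv(T1)

# Any mesh point, viewed in both image coordinate systems, must agree.
p_mesh = np.array([1.0, 2.0, 1.0])   # homogeneous mesh coordinate
p_img1 = T1 @ p_mesh
p_img2 = T2 @ p_mesh
```

Applying T_12 to the point's image-1 coordinates reproduces its image-2 coordinates, which is exactly the consistency the derived registration must satisfy.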
  • The registration may be based on the adaptive mesh and the feature information in one or more of the images. For example, a registration of a mesh to an image is computed. After that, the transform representing the registration may be inverted and used to register the two images. To start, for a given image, a mesh of fixed shape may be registered to the image via a parametric transformation (for example, but not limited to, an affine transformation). This registration may be controlled by “forces” established between mesh points (or triangles) and corresponding image points (also called target points or target surfaces) which may lead to a so-called external energy. For example, such an external energy may be computed as a sum of energies of imaginary springs attached between the mesh points and the target structures. By minimizing the external energy with respect to the parametric transform, the mesh may be aligned, or registered, with the image. To register two images, the transform may be inverted, if necessary. Instead of transforming the mesh to either image, an inverse transform may be used to align each image with the mesh. This is a way of establishing registration parameters between the two images.
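Because the spring-based external energy Σ‖A·p_i + t − q_i‖² is quadratic in the affine parameters, its minimizer can be found by linear least squares. The sketch below works under that assumption and synthesizes the target points q_i rather than detecting them from image features; A_true and t_true are illustrative ground truth, not part of the patent.

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.random((20, 2))                        # mesh points (fixed shape)
A_true = np.array([[1.1, 0.2], [-0.1, 0.9]])   # illustrative ground truth
t_true = np.array([0.5, -0.3])
Q = P @ A_true.T + t_true                      # "target points" in the image

# Design matrix [p_x, p_y, 1] per spring; solving the least-squares
# system minimizes the summed squared spring lengths over (A, t).
X = np.hstack([P, np.ones((len(P), 1))])
coef, *_ = np.linalg.lstsq(X, Q, rcond=None)   # shape (3, 2): rows = [A.T; t]
A_est, t_est = coef[:2].T, coef[2]
```

With noise-free targets the affine parameters are recovered exactly; with noisy target points the same solve yields the energy-minimizing compromise.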
  • The transform-establishing subsystem 4 may be arranged for establishing an affine transform. This way, only global changes are represented in the coordinate transform. More detailed changes may be accounted for by the deforming subsystem 2 and/or by the force-establishing subsystem 3.
  • The system may comprise a general outline-providing subsystem 5 for providing a shape model representing a general outline of the structure. For example, the subsystem 5 may comprise or have access to a database of shape models, e.g. via the input 6. These shape models may, for example, provide an average shape for a particular type of object or structure. For example, different models may be provided for the lungs and for the kidney. A user input or image metadata may be used to select one of the shape models from the database. The deforming subsystem 2 may be arranged for deforming the adaptive mesh, based also on the shape model. This way, the general shape of the structure which is segmented may be used as a reference during the deformation process. The shape model may also be used by the transform-establishing subsystem 4. The general outline-providing subsystem 5 may further provide access to multi-modal image feature information. For example, a database may be provided specifying which image features should be considered in relation to images from a particular imaging modality and/or images of a particular body part or application.
  • The first image and the second image may be acquired using one or more imaging modalities, for example from the group of CT, MR, Ultrasound, PET and SPECT. However, these modalities are not to be construed in a limiting sense. Different imaging modalities may result in different feature information. The feature information obtainable by different imaging modalities may be complementary. For example, CT and MR show different kinds of tissues. Multiple images using the same modality may also provide additional, complementary feature information compared to only one image. For example, the images may relate to different (preferably overlapping) portions of the subject. Moreover, different images may be acquired after applying or not applying different sorts of contrast agents or after waiting for different times to observe the dynamics of contrast distribution in the tissue over time.
  • The first image and the second image may relate to a same subject, for example a patient. The first image may be acquired while an imaged subject contains a structure it does not contain during the acquisition of the second image. For example, contrast agent inflow can be visualized in the images. Also, a sequence of images showing a moving object may be captured; these images may be used in the system described herein.
  • The adaptive mesh, after successful deformation, and the coordinate transformation(s), may be forwarded to a post-processing subsystem 9 for further processing. For example, the data may be stored in a database such as a patient database. It is also possible to use the results to identify the grey values of the images relating to the structure. This way, computations and comparisons become possible. The post-processing subsystem 9 may also be arranged for generating a visualization of the contour, for example as an overlay over one or more of the images. This visualization may be shown to a user on a display 10.
  • A medical imaging workstation may comprise the system set forth. A medical imaging workstation may further comprise user interface means, including for example the display 10, a keyboard and/or pointing device such as a mouse, and a network connection or other communications means. The communication means may be used to retrieve image information via input 6 and/or to store the results such as the deformed mesh or the coordinate transform, or data derived therefrom.
  • A medical imaging acquisition apparatus may comprise a scanner for acquiring the first image and/or the second image, and the system set forth. For example, the medical imaging apparatus may comprise the features of the medical imaging workstation.
  • FIG. 2 illustrates aspects of a method of establishing a contour of a structure. In step 201, the adaptive mesh is initialized. The adaptive mesh represents an approximate contour of the structure, the structure being represented at least partly by a first image, and the structure being represented at least partly by a second image. In step 202, the adaptive mesh is deformed based on feature information of the first image and feature information of the second image. The deforming step 202 may be iterated until the adaptive mesh has converged. In step 203, a coordinate transform is established. The coordinate transform may define a registration between the first image, the second image, and the adaptive mesh. The coordinate transform may be based on feature information in the respective image or images and the adaptive mesh. The coordinate transform may be used to register the images or to register the mesh to one or more of the images. The order of the steps may be interchanged, for example, step 203 may precede step 202. Also, either of steps 202 or 203 may be omitted. Moreover, steps 202 and 203 may be performed iteratively, by performing steps 202 and 203 in alternating fashion. This way, the registration may benefit from the updated mesh and the mesh deformation may benefit from the updated coordinate transform. The method may be implemented as a computer program product.
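The alternation of steps 202 and 203 can be sketched with toy stand-ins: a translation-only registration and a deformation that moves each vertex part-way toward a fixed target. Both functions are drastic simplifications introduced only to show the alternating loop; a real system would derive the targets from image features and use richer transforms.

```python
import numpy as np

def register(mesh, targets):
    """Step 203 (simplified): best translation aligning mesh to targets."""
    return targets.mean(axis=0) - mesh.mean(axis=0)

def deform(mesh, targets, step=0.5):
    """Step 202 (simplified): move each vertex part-way toward its target."""
    return mesh + step * (targets - mesh)

mesh = np.zeros((4, 2))                              # initialized mesh (step 201)
targets = np.array([[2.0, 1.0], [3.0, 1.0],
                    [3.0, 2.0], [2.0, 2.0]])         # hypothetical structure

for _ in range(20):                                  # alternate until (near) convergence
    t = register(mesh, targets)                      # update coordinate transform
    mesh = deform(mesh + t, targets)                 # deform in the registered frame

residual = float(np.abs(mesh - targets).max())
```

Each pass lets the updated transform benefit the deformation and vice versa, so the residual shrinks geometrically toward zero.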
  • Multi-modal segmentation may be used for segmenting images with missing information by inclusion of information provided by other images. A model-based approach for image segmentation may be used for identifying most or all visible structures of an image. However, most model-based approaches are optimized for a particular imaging modality. By combining model-based approaches that have been optimized for different imaging modalities, the system may be enabled to identify most or all visible structures in each single image. By combining the available information about the visible structures from the images (preferably obtained from different imaging modalities), the different images complement each other. This will improve the segmentation compared to a segmentation of an individual image. Furthermore, any found correspondences between identified anatomical landmarks in different images can be pointed out in a suitable user interface.
  • It is possible to combine registration and segmentation. This may be done by refining an initial rough guess of the registration parameters based on a comparison of the detected visible structures (anatomical landmarks) in the images. Registration may thus be preferably driven by geometric information (describing the location of visible structures). However, it is also possible to drive the registration by a global matching of grey values. Using a model of the desired anatomy in combination with geometric information may improve the registration accuracy, because it takes into account the anatomical structures of interest instead of the whole image. Also, an initial registration may be obtained from prior knowledge about the relations between the scanner geometries with respect to the patient.
  • A multi-modal segmentation framework may comprise establishing image-based forces that “pull” on an adaptive mesh describing the structures of interest. The mesh may be defined with respect to a coordinate system. This coordinate system may be related to image coordinate systems via an affine transformation. Such affine transformations may establish a registration between the images and the common segmentation result.
  • The multi-modal segmentation may combine the image-based forces derived from all available images of a particular structure of interest. These image-based forces may be based on visible structures in the images. The image-based forces may “pull” the common mesh to a compromise position. If a particular region in an image does not contain structures of interest, these regions preferably do not contribute to the image-based forces. However, if a structure of interest is available in a corresponding region of another image, this structure preferably does contribute to the image-based forces. This provides a way of “bridging the gap” caused by image regions which do not reveal the structure of interest.
  • For any given portion of the adaptive mesh model, it is possible to assess which image (or images) represent(s) the structure on which the portion of the surface model is based, for example by considering the forces which attracted the portion of the adaptive surface model. Based on this assessment, a combined image may be composed of portions of the different images as a “patchwork” of most relevant image regions. A user interface may show an indication of which image contributed to a given portion of the adaptive mesh model. Moreover, the user interface may enable a user to visualize a desired image in order to assess the complementary image information. Such visualization may be local; for example, if the composed image contains a first portion of a first image, the user may choose to replace said first portion by a corresponding portion of a second image, to obtain a new composite image which may be visualized.
  • Combining information from multiple images, which may have been acquired from different modalities, allows segmenting structures that are only partly visible in each of the images but well-defined if all images are taken into account. Furthermore, by comparing the visible structures per image with the common segmentation result, an (affine) registration between the image coordinate systems may be estimated or refined.
  • In a multi-modal segmentation framework, the different images In may be acquired within separate coordinate systems, e.g., because the patient is placed in different scanners or because the scan orientations change. Offsets as well as re-orientations, and possibly even scaling effects due to geometric distortions, may need to be accounted for. However, this is not always the case: it is also possible to acquire multiple images in the same coordinate system.
  • To arrive at a common segmentation, first, a reference coordinate system may be established, wherein the segmentation result may be represented. This will be called the common coordinate system (CCS) hereinafter. An image In may have its own image coordinate system which may be denoted by “ICSn”. It is also possible that all images have the same coordinate system. In such a case, ICSn is the same for each image n.
  • In some cases, the transform between the coordinate systems may be known. Furthermore, a global linear transform may sometimes be sufficient to map the CCS to any ICSn. Let x be some coordinate vector in the CCS, and let x′ be the corresponding vector in ICSn. Then, some transform Tn={Mn, tn} with a 3×3 matrix Mn and a translation vector tn maps x to x′:

  • x′ = Mn · x + tn  (1)
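As an illustrative sketch of the mapping in equation (1) and its inverse, the following uses hypothetical values (a rotation about the z axis plus an offset) that are not taken from the patent:

```python
import numpy as np

def to_ics(x, M, t):
    """Map a CCS point into an image coordinate system:
    x' = Mn * x + tn  (equation (1))."""
    return M @ x + t

def to_ccs(x_prime, M, t):
    """Inverse mapping, from the ICSn back into the CCS."""
    return np.linalg.solve(M, x_prime - t)

# Hypothetical transform: 90-degree rotation about the z axis plus an offset.
M = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([10.0, 20.0, 30.0])

x = np.array([1.0, 2.0, 3.0])
x_prime = to_ics(x, M, t)                     # -> [8., 21., 33.]
assert np.allclose(to_ccs(x_prime, M, t), x)  # round trip recovers x
```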
  • Gradients ∇I, as examples of feature information, may also change under the transform. Gradients may be transformed as follows (∇I′ = gradient in ICSn, ∇I = gradient if In were transformed into the CCS):

  • ∇I′(x′) = (Mn^−1)^T · ∇I(x)  ⇔  ∇I(x) = Mn^T · ∇I′(x′)  (2)
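The transformation rule (2) can be checked numerically. The matrix Mn, the translation, and the gradient below are hypothetical examples; the second check uses a linear "image" whose gradient is constant, so finite differences must reproduce the transformed gradient:

```python
import numpy as np

# Hypothetical non-orthogonal matrix Mn (includes scaling and shear).
M = np.array([[2.0, 0.5, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])
t = np.array([1.0, 2.0, 3.0])

grad_ccs = np.array([1.0, -2.0, 0.5])      # ∇I in the CCS

# Gradients transform with the inverse-transpose of Mn ...
grad_ics = np.linalg.inv(M).T @ grad_ccs   # ∇I′ in ICSn
# ... and back-transform with Mn^T, recovering the CCS gradient.
assert np.allclose(M.T @ grad_ics, grad_ccs)

# Sanity check with a linear "image" I(x) = g·x, whose gradient is g
# everywhere: finite differences of I′(x′) = I(Tn^-1(x′)) match grad_ics.
I_ics = lambda xp: grad_ccs @ np.linalg.solve(M, xp - t)
eps = 1e-6
x_prime = np.array([4.0, 5.0, 6.0])
fd = np.array([(I_ics(x_prime + eps * e) - I_ics(x_prime - eps * e)) / (2 * eps)
               for e in np.eye(3)])
assert np.allclose(fd, grad_ics, atol=1e-5)
```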
  • where the superscript T denotes matrix transposition. (Here, the point x′ at which the transformed gradient ∇I′ is evaluated is itself obtained via the transform in (1).) The multi-modal segmentation may be realized as follows: the internal energy of the deforming shape model may be established in the CCS. External energies may be set up for individual ones of the images In and accumulated with weighting factors βn satisfying Σβn = 1:
  • Etot = Σn βn · Eext,n + α · Eint  (3)
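A minimal sketch of the accumulation in equation (3), with hypothetical per-image external energies and weights (the normalisation enforces Σβn = 1):

```python
import numpy as np

def total_energy(E_ext, weights, E_int, alpha):
    """Etot = sum_n beta_n * Eext,n + alpha * Eint (equation (3)),
    with the weights beta_n normalised so that their sum is 1."""
    beta = np.asarray(weights, dtype=float)
    beta = beta / beta.sum()
    return float(beta @ np.asarray(E_ext, dtype=float)) + alpha * E_int

# Hypothetical external energies from two images (e.g. CT and PET):
E = total_energy(E_ext=[4.0, 2.0], weights=[3.0, 1.0], E_int=0.5, alpha=2.0)
assert abs(E - 4.5) < 1e-12   # 0.75*4 + 0.25*2 + 2*0.5
```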
  • In combination, the common segmentation may produce a “best compromise” between the external forces from the In and the internal forces from the shape model.
  • Example formulations of the external energies if coordinate systems differ by Tn are described hereinafter. By applying the external energies and internal energies to the adaptive mesh, the adaptive mesh may be deformed to obtain a contour of a structure in the image or images. An example external energy Eext in some standard coordinate system may be written as (here ci represents the mesh point that is subject to optimization):
  • Eext = Σi fi · ( (∇I(xi^target) / ‖∇I(xi^target)‖) · (ci − xi^target) )²  (4)
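Equation (4) may be sketched as follows; the triangle centers, target points, gradients, and weights fi below are hypothetical values chosen for illustration:

```python
import numpy as np

def external_energy(centers, targets, grads, weights):
    """Eext = sum_i f_i * ((grad/||grad||) . (c_i - x_i_target))^2 (eq. (4)):
    a squared distance to the target plane, measured along the image
    gradient, so the mesh may slide freely within that plane."""
    E = 0.0
    for c, x, g, f in zip(centers, targets, grads, weights):
        n = g / np.linalg.norm(g)        # unit gradient direction
        E += f * float(n @ (c - x)) ** 2
    return E

# One triangle with hypothetical values: the center sits 2 units from the
# target plane along its normal, so E = 1 * 2^2 = 4.
E = external_energy(centers=[np.array([0.0, 0.0, 2.0])],
                    targets=[np.zeros(3)],
                    grads=[np.array([0.0, 0.0, 5.0])],
                    weights=[1.0])
assert abs(E - 4.0) < 1e-12
```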
  • To set up Eext,n in the case of a transformed coordinate system, target points may be detected in the image In. This may be done by transforming the triangle centers ci and normals from the CCS to ICSn and by searching for a target point around each transformed triangle center. Within ICSn, a gradient direction ∇I′/∥∇I′∥ may be determined at a detected target point.
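The target-point search just described may be sketched as below; `image_grad` is a hypothetical callable standing in for the gradient of In, and the maximum-gradient-magnitude criterion is one common detection heuristic, not the only possibility:

```python
import numpy as np

def find_target(center_ccs, normal_ccs, M, t, image_grad, search_range, step):
    """Detect a target point in ICSn: transform a triangle center and its
    normal into the image, then return the sample along the normal with
    the strongest gradient magnitude."""
    c = M @ center_ccs + t                 # transformed triangle center, eq. (1)
    n = np.linalg.inv(M).T @ normal_ccs    # normals transform like gradients, eq. (2)
    n = n / np.linalg.norm(n)
    best, best_mag = None, -1.0
    for k in np.arange(-search_range, search_range + step, step):
        p = c + k * n
        mag = np.linalg.norm(image_grad(p))
        if mag > best_mag:
            best, best_mag = p, mag
    return best

# Hypothetical image gradient with a sharp edge near the plane z = 3.
image_grad = lambda p: np.array([0.0, 0.0, np.exp(-(p[2] - 3.0) ** 2)])
best = find_target(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                   np.eye(3), np.zeros(3), image_grad,
                   search_range=5.0, step=0.5)
assert np.allclose(best, [0.0, 0.0, 3.0])
```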
  • A first option for Eext,n is to back-transform xi target into the CCS and to establish the external energy there. Thinking in “spring forces”, the springs may be attached to the triangles in the CCS pulling the mesh towards back-transformed target planes. This involves the back-transformation of the gradient direction using (2) and re-normalization, for example:
  • Eext,n = Σi fi · ( (Mn^T · ∇I′(x′i^target) / ‖Mn^T · ∇I′(x′i^target)‖) · (ci − xi^target) )²  (5)
  • where xi target is a back-transformed target point and ci is a triangle center in the CCS. Another option is to formulate Eext,n within the image In, i.e., within the ICSn.
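The back-transformed variant in equation (5) may be sketched by mapping the detected targets and gradients from ICSn into the CCS before evaluating the same plane-distance energy; all inputs below are hypothetical:

```python
import numpy as np

def back_transformed_energy(centers_ccs, targets_ics, grads_ics, weights, M, t):
    """Equation (5): target points and gradients detected in ICSn are mapped
    back into the CCS before the plane-distance energy is evaluated.
    Points back-transform with M^-1, gradients with M^T (equation (2))."""
    E = 0.0
    for c, xp, gp, f in zip(centers_ccs, targets_ics, grads_ics, weights):
        x = np.linalg.solve(M, xp - t)   # back-transformed target point
        g = M.T @ gp                     # back-transformed gradient, then
        n = g / np.linalg.norm(g)        # re-normalised direction
        E += f * float(n @ (c - x)) ** 2
    return E

# Hypothetical ICSn that is the CCS scaled by 2: the target [0,0,4] maps
# back to [0,0,2], so a triangle center at [0,0,5] is 3 units away: E = 9.
E = back_transformed_energy([np.array([0.0, 0.0, 5.0])],
                            [np.array([0.0, 0.0, 4.0])],
                            [np.array([0.0, 0.0, 1.0])],
                            [1.0], 2.0 * np.eye(3), np.zeros(3))
assert abs(E - 9.0) < 1e-12
```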
  • The transformations between the image coordinate systems may be unknown. Finding these transformations may involve a registration task. This registration task may be solved in combination with the segmentation task.
  • Different aspects of the registration may be identified; these aspects may help to arrive at an initial coarse estimation of Mn and tn:
  • Gross orientation of the images, e.g., axial versus sagittal or coronal slices in the x-y plane (for MR images, oblique planes may also be acquired): This information is normally available from the scanner.
  • Scaling of the images: Normally, images should “live” in metric space with no or little distortions. We may use the gross orientation and a scaling factor of 1 as an initial guess for the transformation matrices Mn.
  • Translation: Images may have an unknown coordinate offset. However, object localization techniques may allow coarsely localizing the organ of interest.
  • The transforms may be refined per image In. For example, it is possible to take the current instance of the commonly adapted mesh and optimize the transformation Tn such as to minimize the external energy for In alone. Tn then may describe an improved mapping of the common mesh to the respective image. After performing this step for all images, it is possible to continue with the multi-modal segmentation using the updated transforms Tn. This procedure of iterative multi-modal mesh adaptation and re-estimation of all Tn may result in a converged segmentation including the registration parameters as encoded by Tn.
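The alternating refinement described above may be illustrated with a toy point-cloud example. Here the "mesh" is simply a point set, the per-image "targets" are noise-free affine copies of it, and a least-squares affine fit (`fit_affine`, a hypothetical helper) plays the role of optimizing each Tn; real mesh adaptation would of course use the energies described earlier:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the mesh: a point cloud in the CCS. Each "image" sees
# the same points under its own unknown affine transform (noise-free here).
mesh = rng.normal(size=(20, 3))
M_true = [np.diag([1.2, 0.9, 1.0]), np.eye(3)]
t_true = [np.array([5.0, 0.0, -2.0]), np.array([0.0, 1.0, 0.0])]
targets = [mesh @ M.T + t for M, t in zip(M_true, t_true)]

def fit_affine(src, dst):
    """Per-image refinement step: least-squares Tn = {Mn, tn} minimising
    ||Mn*src + tn - dst||^2, solved in homogeneous coordinates."""
    A = np.hstack([src, np.ones((len(src), 1))])
    sol, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return sol[:3].T, sol[3]

for _ in range(3):
    # Re-estimate all Tn against the current common mesh ...
    transforms = [fit_affine(mesh, q) for q in targets]
    # ... then re-adapt the mesh: here, the average of the back-transformed
    # per-image targets stands in for the multi-modal mesh adaptation.
    mesh = np.mean([np.linalg.solve(Mn, (q - tn).T).T
                    for (Mn, tn), q in zip(transforms, targets)], axis=0)

M0, t0 = transforms[0]
assert np.allclose(M0, M_true[0]) and np.allclose(t0, t_true[0])
```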
  • The segmentation and/or registration techniques described herein may be applied for example for segmentation of structures or complex organs that cannot be fully described by a single imaging modality alone. The techniques may further be applied for segmentation of structures represented by images that do not contain the full anatomical information. Complementary scans from different imaging modalities may provide any such missing information. For example, functional images, such as PET or SPECT images, may not provide full anatomical information. By performing the techniques presented herein on a functional image and an anatomical image, such as a CT image or MR image, an improved segmentation may be obtained. Moreover, the techniques may be used to register the functional image with the anatomical image. Moreover, the techniques may be used to generate a composite image showing both the functional and the anatomical information.
  • A method of establishing a contour of a structure may start with initializing a shape model. Next, the structure or parts of it may be identified in a first image at a first image location and in a second image at a second image location, which may be different from the first image location. A first force value may be associated with a first portion of the initialized shape model, the first force value being based on the structure in the first image, the first portion corresponding to the first image location. A second force value may be associated with a second portion of the initialized shape model, the second force value being based on the structure in the second image, the second portion corresponding to the second image location. The shape of the first portion may be adapted based on the first force value, and the shape of the second portion may be adapted based on the second force value. If the two portions of the initialized shape model overlap, the overlapping part may be adapted based on both force values.
  • It will be appreciated that the invention also extends to computer programs, particularly computer programs on or in a carrier, adapted for putting the invention into practice. The program may be in the form of source code, object code, a code intermediate source and object code such as a partially compiled form, or in any other form suitable for use in the implementation of the method according to the invention. It will also be appreciated that such a program may have many different architectural designs. For example, a program code implementing the functionality of the method or system according to the invention may be subdivided into one or more subroutines. Many different ways to distribute the functionality among these subroutines will be apparent to the skilled person. The subroutines may be stored together in one executable file to form a self-contained program. Such an executable file may comprise computer executable instructions, for example processor instructions and/or interpreter instructions (e.g. Java interpreter instructions). Alternatively, one or more or all of the subroutines may be stored in at least one external library file and linked with a main program either statically or dynamically, e.g. at run-time. The main program contains at least one call to at least one of the subroutines. Also, the subroutines may comprise function calls to each other. An embodiment relating to a computer program product comprises computer executable instructions corresponding to each of the processing steps of at least one of the methods set forth. These instructions may be subdivided into subroutines and/or stored in one or more files that may be linked statically or dynamically. Another embodiment relating to a computer program product comprises computer executable instructions corresponding to each of the means of at least one of the systems and/or products set forth. 
These instructions may be subdivided into subroutines and/or stored in one or more files that may be linked statically or dynamically.
  • The carrier of a computer program may be any entity or device capable of carrying the program. For example, the carrier may include a storage medium, such as a ROM, for example a CD ROM or a semiconductor ROM, or a magnetic recording medium, for example a floppy disc or hard disk. Further, the carrier may be a transmissible carrier such as an electrical or optical signal, which may be conveyed via electrical or optical cable or by radio or other means. When the program is embodied in such a signal, the carrier may be constituted by such a cable or other device or means. Alternatively, the carrier may be an integrated circuit in which the program is embedded, the integrated circuit being adapted for performing, or for use in the performance of, the relevant method.
  • It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. Use of the verb “comprise” and its conjugations does not exclude the presence of elements or steps other than those stated in a claim. The article “a” or “an” preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

Claims (15)

1. A system for establishing a contour of a structure, comprising
an initialization subsystem (1) for initializing an adaptive mesh representing an approximate contour of the structure, the structure being represented at least partly by a first image, and the structure being represented at least partly by a second image; and
a deforming subsystem (2) for deforming the adaptive mesh, based on feature information of the first image and feature information of the second image.
2. The system according to claim 1, the deforming subsystem comprising a force-establishing subsystem (3) for establishing a force acting on at least part of the adaptive mesh in dependence on the feature information of the first image and the feature information of the second image.
3. The system according to claim 2, the force-establishing subsystem (3) being arranged for establishing the force acting on the at least part of the adaptive mesh also in dependence on a type of the first image and/or a type of the second image.
4. The system according to claim 1, the deforming subsystem (2) comprising a feature information-extracting subsystem (11) for extracting feature information from the respective images, using respective models trained for the particular imaging modalities or protocols used to acquire the respective images.
5. The system according to claim 1, further comprising a transform-establishing subsystem (4) for establishing a first coordinate transform defining a registration between the adaptive mesh and at least one of the first image and the second image, based on feature information in the respective image and the adaptive mesh.
6. The system according to claim 5, the transform-establishing subsystem (4) being arranged for establishing a second coordinate transform defining a relation between a coordinate system of the first image and a coordinate system of the second image, based on the first coordinate transform.
7. The system according to claim 5, the first coordinate transform being an affine transform.
8. The system according to claim 1, further comprising
a general outline-providing subsystem (5) for providing a shape model representing a general outline of the structure;
the deforming subsystem (2) being arranged for deforming the adaptive mesh based also on the shape model.
9. The system according to claim 1, the first image and the second image having been acquired using at least one imaging modality from the group of X-ray, CT, MR, Ultrasound, PET, SPECT, and magnetic particle imaging.
10. The system according to claim 1, the first and second image having been acquired using two different imaging modalities.
11. The system according to claim 1, the first image having been acquired while a particular object was visible in the imaged subject, the second image having been acquired while that particular object was not visible in the imaged subject.
12. A medical imaging workstation comprising the system according to claim 1.
13. A medical imaging acquisition apparatus comprising a scanner for acquiring the first image and the system according to claim 1.
14. A method of establishing a contour of a structure, comprising
initializing (201) an adaptive mesh representing an approximate contour of the structure, the structure being represented at least partly by a first image, and the structure being represented at least partly by a second image; and
deforming (202) the adaptive mesh, based on feature information of the first image and feature information of the second image.
15. A computer program product comprising instructions for causing a processor system to perform the steps of the method according to claim 14.
US13/377,401 2009-06-24 2010-06-18 Establishing a contour of a structure based on image information Abandoned US20120082354A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP09163572 2009-06-24
EP09163572.2 2009-06-24
PCT/IB2010/052756 WO2010150156A1 (en) 2009-06-24 2010-06-18 Establishing a contour of a structure based on image information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US15/665,735 US20170330328A1 (en) 2009-06-24 2017-08-09 Establishing a contour of a structure based on image information

Publications (1)

Publication Number Publication Date
US20120082354A1 true US20120082354A1 (en) 2012-04-05

Family

ID=42556644

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/377,401 Abandoned US20120082354A1 (en) 2009-06-24 2010-06-18 Establishing a contour of a structure based on image information
US15/665,735 Pending US20170330328A1 (en) 2009-06-24 2017-08-09 Establishing a contour of a structure based on image information

Family Applications After (1)

Application Number Title Priority Date Filing Date
US15/665,735 Pending US20170330328A1 (en) 2009-06-24 2017-08-09 Establishing a contour of a structure based on image information

Country Status (5)

Country Link
US (2) US20120082354A1 (en)
EP (1) EP2446416B1 (en)
JP (1) JP6007102B2 (en)
CN (1) CN102460509B (en)
WO (1) WO2010150156A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ES2480341B1 (en) 2013-01-24 2015-01-22 Palobiofarma S.L Novel pyrimidine derivatives as inhibitors of phosphodiesterase 10 (PDE-10)
CN105631920B (en) * 2014-10-31 2018-07-06 北京临近空间飞行器系统工程研究所 Sample Species radial basis function method of streamlining support points
CN106485186A (en) * 2015-08-26 2017-03-08 阿里巴巴集团控股有限公司 Image feature extraction method, device, terminal equipment and system
WO2019034546A1 (en) 2017-08-17 2019-02-21 Koninklijke Philips N.V. Ultrasound system with extraction of image planes from volume data using touch interaction with an image

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030036083A1 (en) * 2001-07-19 2003-02-20 Jose Tamez-Pena System and method for quantifying tissue structures and their change over time
US20040066955A1 (en) * 2002-10-02 2004-04-08 Virtualscopics, Llc Method and system for assessment of biomarkers by measurement of response to stimulus
US20040147830A1 (en) * 2003-01-29 2004-07-29 Virtualscopics Method and system for use of biomarkers in diagnostic imaging
US7010164B2 (en) * 2001-03-09 2006-03-07 Koninklijke Philips Electronics, N.V. Image segmentation
US20060126922A1 (en) * 2003-01-13 2006-06-15 Jens Von Berg Method of segmenting a three-dimensional structure
US20060159341A1 (en) * 2003-06-13 2006-07-20 Vladimir Pekar 3D image segmentation
US7167172B2 (en) * 2001-03-09 2007-01-23 Koninklijke Philips Electronics N.V. Method of segmenting a three-dimensional structure contained in an object, notably for medical image analysis
US20070127798A1 (en) * 2005-09-16 2007-06-07 Siemens Corporate Research Inc System and method for semantic indexing and navigation of volumetric images
US20080085042A1 (en) * 2006-10-09 2008-04-10 Valery Trofimov Registration of images of an organ using anatomical features outside the organ
US20080205719A1 (en) * 2005-06-15 2008-08-28 Koninklijke Philips Electronics, N.V. Method of Model-Based Elastic Image Registration For Comparing a First and a Second Image
US20080279429A1 (en) * 2005-11-18 2008-11-13 Koninklijke Philips Electronics, N.V. Method For Delineation of Predetermined Structures in 3D Images
US20090136099A1 (en) * 2007-11-26 2009-05-28 Boyden Edward S Image guided surgery with dynamic image reconstruction
US20090226060A1 (en) * 2008-03-04 2009-09-10 Gering David T Method and system for improved image segmentation
US20090310835A1 (en) * 2006-07-17 2009-12-17 Koninklijke Philips Electronics N. V. Efficient user interaction with polygonal meshes for medical image segmentation
US7796790B2 (en) * 2003-10-17 2010-09-14 Koninklijke Philips Electronics N.V. Manual tools for model based image segmentation
US20100232645A1 (en) * 2007-05-10 2010-09-16 Koninklijke Philips Electronics N.V. Model-based spect heart orientation estimation
US8050469B2 (en) * 2002-07-19 2011-11-01 Koninklijke Philips Electronics Automated measurement of objects using deformable models
US20130195323A1 (en) * 2012-01-26 2013-08-01 Danyu Liu System for Generating Object Contours in 3D Medical Image Data

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE69332042T2 (en) * 1992-12-18 2003-01-02 Koninkl Philips Electronics Nv Locating deferral of relatively elastically deformed spatial images by matching surfaces
EP1780672A1 (en) * 2005-10-25 2007-05-02 Bracco Imaging, S.P.A. Method of registering images, algorithm for carrying out the method of registering images, a program for registering images using the said algorithm and a method of treating biomedical images to reduce imaging artefacts caused by object movement

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
C. Nikou et al., "Probabilistic multiobject deformable model for MR/SPECT brain image registration and segmentation", Proc. SPIE 3661, Medical Imaging 1999: Image Processing, 170 (May 21, 1999), pg. 170-181 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130308876A1 (en) * 2010-11-24 2013-11-21 Blackford Analysis Limited Process and apparatus for data registration
US9224229B2 (en) * 2010-11-24 2015-12-29 Blackford Analysis Limited Process and apparatus for data registration
US10242450B2 (en) 2012-08-30 2019-03-26 Koninklijke Philips N.V. Coupled segmentation in 3D conventional ultrasound and contrast-enhanced ultrasound images
US9934579B2 (en) 2012-08-30 2018-04-03 Koninklijke Philips N.V. Coupled segmentation in 3D conventional ultrasound and contrast-enhanced ultrasound images
US20160379372A1 (en) * 2013-12-10 2016-12-29 Koninklijke Philips N.V. Model-based segmentation of an anatomical structure
US9953429B2 (en) 2013-12-17 2018-04-24 Koninklijke Philips N.V. Model-based segmentation of an anatomical structure
US10043270B2 (en) * 2014-03-21 2018-08-07 Koninklijke Philips N.V. Image processing apparatus and method for segmenting a region of interest
US20170337680A1 (en) * 2014-11-28 2017-11-23 Koninklijke Philips N.V. Model-based segmentation of an anatomical structure
US9519949B2 (en) * 2015-03-13 2016-12-13 Koninklijke Philips N.V. Determining transformation between different coordinate systems
WO2017162664A1 (en) * 2016-03-21 2017-09-28 Koninklijke Philips N.V. Apparatus for detecting deformation of a body part

Also Published As

Publication number Publication date
WO2010150156A1 (en) 2010-12-29
EP2446416B1 (en) 2014-04-30
JP6007102B2 (en) 2016-10-12
US20170330328A1 (en) 2017-11-16
JP2012531649A (en) 2012-12-10
EP2446416A1 (en) 2012-05-02
CN102460509B (en) 2015-01-07
CN102460509A (en) 2012-05-16

Similar Documents

Publication Publication Date Title
Shen et al. HAMMER: hierarchical attribute matching mechanism for elastic registration
Rueckert et al. Non-rigid registration of breast MR images using mutual information
Maes et al. Multimodality image registration by maximization of mutual information
Dougherty et al. Validation of an optical flow method for tag displacement estimation
US6754374B1 (en) Method and apparatus for processing images with regions representing target objects
EP0602730B1 (en) Registration of Volumetric images which are relatively elastically deformed by matching surfaces
Isgum et al. Multi-atlas-based segmentation with local decision fusion—application to cardiac and aortic segmentation in CT scans
US5568384A (en) Biomedical imaging and analysis
Frangi et al. Automatic construction of multiple-object three-dimensional statistical shape models: Application to cardiac modeling
Lötjönen et al. Fast and robust multi-atlas segmentation of brain magnetic resonance images
US7653263B2 (en) Method and system for volumetric comparative image analysis and diagnosis
Candemir et al. Lung segmentation in chest radiographs using anatomical atlases with nonrigid registration
US7583857B2 (en) System and method for salient region feature based 3D multi modality registration of medical images
US6909794B2 (en) Automated registration of 3-D medical scans of similar anatomical structures
US8331637B2 (en) System and method of automatic prioritization and analysis of medical images
US7298881B2 (en) Method, system, and computer software product for feature-based correlation of lesions from multiple images
Dawant et al. Automatic 3-D segmentation of internal structures of the head in MR images using a combination of similarity and free-form transformations. I. Methodology and validation on normal subjects
US20080101676A1 (en) System and Method For Segmenting Chambers Of A Heart In A Three Dimensional Image
EP1695287B1 (en) Elastic image registration
Collins et al. Model-based segmentation of individual brain structures from MRI data
Tobon-Gomez et al. Benchmarking framework for myocardial tracking and deformation algorithms: An open access database
EP2252204B1 (en) Ct surrogate by auto-segmentation of magnetic resonance images
US20080095465A1 (en) Image registration system and method
US20070167784A1 (en) Real-time Elastic Registration to Determine Temporal Evolution of Internal Tissues for Image-Guided Interventions
CA2769918A1 (en) Apparatus and method for registering two medical images

Legal Events

Date Code Title Description
AS Assignment

Owner name: KONINKLIJKE PHILIPS ELECTRONICS N.V., NETHERLANDS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PETERS, JOCHEN;ECABERT, OLIVIER;MEYER, CARSTEN;AND OTHERS;SIGNING DATES FROM 20100618 TO 20100625;REEL/FRAME:027357/0520