WO2010052615A2 - Extraction of motion information - Google Patents

Extraction of motion information

Info

Publication number
WO2010052615A2
Authority
WO
WIPO (PCT)
Prior art keywords
image data
data
projection data
volumetric image
slices
Prior art date
Application number
PCT/IB2009/054792
Other languages
English (en)
Other versions
WO2010052615A3 (fr)
Inventor
Peter Forthmann
Holger Schmitt
Udo Van Stevendaal
Thomas Koehler
Original Assignee
Koninklijke Philips Electronics N.V.
Philips Intellectual Property & Standards Gmbh
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V., Philips Intellectual Property & Standards Gmbh filed Critical Koninklijke Philips Electronics N.V.
Publication of WO2010052615A2 publication Critical patent/WO2010052615A2/fr
Publication of WO2010052615A3 publication Critical patent/WO2010052615A3/fr

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/003: Reconstruction from projections, e.g. tomography
    • G06T 11/005: Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G06T 7/174: Segmentation; Edge detection involving the use of two or more images
    • G06T 7/20: Analysis of motion
    • G06T 7/215: Motion-based segmentation
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10072: Tomographic images
    • G06T 2207/10081: Computed x-ray tomography [CT]
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20212: Image combination
    • G06T 2207/20224: Image subtraction
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30004: Biomedical image processing
    • G06T 2211/00: Image generation
    • G06T 2211/40: Computed tomography
    • G06T 2211/412: Dynamic

Definitions

  • CT: computed tomography
  • a computed tomography (CT) scanner generally includes an x-ray tube mounted on a rotatable gantry opposite a detector array including one or more rows of detector pixels.
  • the x-ray tube rotates around an examination region located between the x-ray tube and the detector array and emits radiation that traverses the examination region and an object or subject disposed therein.
  • the detector array detects radiation that traverses the examination region and generates projection data indicative of the examination region and the object or subject disposed therein.
  • a reconstructor processes the projection data and generates volumetric image data indicative of the examination region and the object or subject disposed therein.
  • the volumetric image data can be processed to generate one or more images that include the scanned portion of the object or subject.
  • the scanned portion of the object or subject includes a moving structure such as the heart or lung, or anatomy affected by the movement of such an organ.
  • motion artifacts may be introduced into the projection data and hence the images thereof.
  • Motion compensation algorithms have been used to reduce motion artifacts.
  • some motion compensation algorithms require the temporal positions of the scanned anatomy during the various breathing phases.
  • Techniques for obtaining such information have included respiratory belts, light emitting landmarks, and the like.
  • the image data and the motion data need to be correlated or registered, and belts and other instrumentation and devices may increase procedure complexity and time, and cause patient discomfort. Aspects of the present application address the above-referenced matters and others.
  • a method includes creating a second set of projection data that includes substantially only selected structure of interest based on a first set of projection data that includes the selected structure of interest and other structure.
  • In another aspect, a method includes generating a second plurality of sliding window slices for projection data corresponding to a last slice of a first plurality of slices. The method further includes selecting a second sliding window slice from the second plurality of sliding window slices based on the last slice of the first plurality of slices. The method further includes generating a second plurality of slices, including a first slice and a last slice, from a range of projection data around projection data corresponding to the last slice of the first plurality of slices.
  • In another aspect, a system includes a motion information extractor that extracts motion information from data from an imaging system.
  • the motion information extractor includes at least one of means for creating a set of projection data that includes substantially only selected structure of interest based on a different set of projection data that includes the selected structure of interest and other structure, and means for generating a volume of data for a moving structure based on a motion state obtained from the extracted motion information.
  • a method for correcting for motion in projection data includes obtaining first projection data indicative of both moving and static structures, generating second projection data, which is indicative substantially only of the moving structure, based on the first projection data, and using the second projection data to motion correct the first projection data.
  • FIGURE 1 illustrates an example imaging system in connection with a motion information extractor.
  • FIGURE 2 illustrates an example motion information extractor.
  • FIGURE 3 illustrates example projection data.
  • FIGURE 4 illustrates example image data.
  • FIGURE 5 illustrates example segmented image data.
  • FIGURE 6 illustrates example static only image data.
  • FIGURE 7 illustrates example motion only projection data.
  • FIGURE 8 illustrates an example method.
  • FIGURE 9 illustrates another example motion information extractor.
  • FIGURE 10 illustrates another example method.
  • FIGURE 1 illustrates an imaging system 100 such as a computed tomography (CT) scanner.
  • the imaging system 100 can be a positron emission tomography (PET) scanner, single photon emission computed tomography (SPECT) scanner, a combination PET/CT scanner, and/or other emission or transmission tomography scanner.
  • the imaging system 100 includes a stationary gantry 102 and a rotating gantry 104, which is rotatably supported by the stationary gantry 102.
  • the rotating gantry 104 rotates around an examination region 106 about a longitudinal or z-axis.
  • a radiation source 108 such as an x-ray tube, is supported by and rotates with the rotating gantry 104, and emits radiation that traverses the examination region 106.
  • a source collimator 110 collimates the emitted radiation to form a generally fan, wedge, or cone shaped radiation that traverses the examination region 106.
  • a radiation sensitive detector array 112 detects radiation emitted by the radiation source 108 that traverses the examination region 106 and generates projection data indicative of the detected radiation.
  • the illustrated radiation sensitive detector array 112 includes one or more rows of radiation sensitive photosensor pixels along the z-axis.
  • a reconstructor 114 reconstructs the projection data and generates volumetric image data indicative of the examination region 106, including structure of an object or subject disposed therein. One or more images can be generated from the volumetric image data.
  • a motion information extractor 116 extracts motion information from the projection and/or volumetric image data.
  • extracted motion information is used to generate projection data that predominantly includes moving structure of interest and substantially no non-moving structure. This allows for characterization of the moving structure of interest, which may otherwise be obscured by other structure.
  • extracted motion information is used to identify image data corresponding to a motion state of interest and facilitate reconstructing a volume of data based on motion state.
  • the moving structure can be a heart, a lung, the diaphragm, and/or other periodically moving organ of a human or animal patient, a moving object, flowing fluid, etc.
  • With the motion information extractor 116, the need for devices such as respiratory belts, light emitting landmarks, and the like can be mitigated, which may decrease procedure complexity, procedure time, and patient discomfort relative to a configuration in which the motion information extractor 116 is not employed. It is to be understood that although the motion information extractor 116 is shown in connection with the imaging system 100, the motion information extractor 116 can be located remote from and/or employed without the imaging system 100.
  • a support 118 such as a couch, supports the object or subject in the examination region 106.
  • the support 118 is movable along the z-axis in coordination with the rotation of the rotating gantry 104 to facilitate helical, axial, or other desired scanning trajectories.
  • a general purpose computing system serves as an operator console 120, which includes human readable output devices such as a display and/or printer and input devices such as a keyboard and/or mouse.
  • Software resident on the console 120 allows the operator to control the operation of the system 100, for example, by allowing the operator to initiate scanning to obtain data that can be processed by the motion information extractor 116.
  • FIGURE 2 illustrates an example motion information extractor 116.
  • the motion information extractor 116 can receive volumetric image data from the reconstructor 114.
  • the motion information extractor 116 may include a reconstructor and/or employ another reconstructor and generate volumetric image data from projection data generated by the imaging system 100 and/or another system.
  • FIGURES 3 and 4 respectively show example projection data and volumetric image data for an area of the abdomen of a human patient.
  • a segmentor 204 segments the volumetric image data. In one instance, this may include segmenting the volumetric image data to extract substantially only structure of interest, such as moving structure, from the volumetric image data. The resulting volumetric image data is indicative substantially of the extracted moving structure of interest, and is generally referred to herein as motion only volumetric image data.
  • FIGURE 5 shows example segmented volumetric image data for the area of the abdomen shown in FIGURE 4.
  • an image data subtracter 206 subtracts the segmented or motion only volumetric image data from the original volumetric image data.
  • the resulting difference volumetric image data is indicative substantially of static only volumetric image data, which represents volumetric image data with substantially no extracted moving structure of interest, and is generally referred to herein as static only volumetric image data.
  • FIGURE 6 shows example difference volumetric image data for the area of the abdomen shown in FIGURE 4.
  • a forward-projector 208 forward projects the static only volumetric image data to generate projection data therefor. This projection data includes substantially only the non-extracted (static) structure, and not the extracted moving structure of interest, and is generally referred to herein as static only projection data.
  • a projection data subtracter 210 subtracts the static only projection data from the original projection data.
  • the resulting difference projection data provides projection data generally referred to herein as motion only projection data.
  • This data represents projection data substantially corresponding to the moving structure of interest. With this data, the static parts of the scanned subject are effectively removed from the original projection data.
  • FIGURE 7 shows example motion only projection data.
  • the reconstructor 114 can be used to reconstruct the motion only projection data to generate motion only volumetric image data with substantially only the extracted structure and substantially no non-extracted structure.
  • This data can be further processed to determine information such as quantitative and/or qualitative information about the moving structure of interest, including, but not limited to, one or more images of the moving structure of interest, information about the motion state of the structure, etc.
  • the motion only projection data is also processed to determine such information.
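The segment, subtract, forward-project, subtract chain above can be sketched numerically. The following is a minimal, illustrative sketch rather than the patented implementation: it uses a toy two-view forward projector (row and column sums) in place of a real CT system model, and for brevity segments the phantom itself rather than a reconstruction of it. All names and phantom values are assumptions. By linearity of forward projection, subtracting the static only projections from the original projections leaves projections of substantially only the moving structure.

```python
import numpy as np

def forward_project(img):
    """Toy parallel projections at 0 and 90 degrees (stand-in for a CT system model)."""
    return np.stack([img.sum(axis=0), img.sum(axis=1)])

# Phantom: static background plus a moving structure of interest (illustrative values).
image = np.zeros((64, 64))
image[10:30, 10:30] = 1.0          # static structure
image[40:55, 40:55] = 2.0          # moving structure of interest
moving_mask = np.zeros_like(image, dtype=bool)
moving_mask[40:55, 40:55] = True

# First (original) projection data: moving and static structure together.
original_projections = forward_project(image)

# Segment out the moving structure, then subtract to obtain static only image data.
motion_only_image = np.where(moving_mask, image, 0.0)
static_only_image = image - moving_mask * image

# Forward-project the static only data and subtract from the original
# projections; the difference is motion only projection data.
static_only_projections = forward_project(static_only_image)
motion_only_projections = original_projections - static_only_projections

# By linearity, the difference equals the projection of the moving structure alone.
assert np.allclose(motion_only_projections, forward_project(motion_only_image))
```

In the actual pipeline the segmentation is applied to reconstructed volumetric image data, and the motion only projection data would then be reconstructed by the reconstructor 114.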
  • FIGURE 8 illustrates a method in connection with the motion information extractor of FIGURE 2.
  • an object or subject such as a patient is scanned.
  • the projection data is reconstructed to generate a first set of volumetric image data.
  • a second set of volumetric image data is generated by segmenting the first set of volumetric data to generate volumetric image data indicative of the segmented portion, which may only include moving structure of interest.
  • a third set of volumetric image data is generated by subtracting the second set of volumetric image data from the first set of volumetric image data. This data only includes the static structure.
  • the third set of volumetric image data is forward projected to generate a second set of projection data, which provides an estimate of the projection data that would render the third set of volumetric image data.
  • a third set of projection data is generated by subtracting the second set of projection data from the first set of projection data. The resulting data provides projection data indicative substantially only of the moving structure of interest.
  • the third set of projection data is reconstructed to generate a fourth set of volumetric image data, which substantially only includes the moving structure of interest. This data can be further processed as discussed herein.
  • FIGURE 9 illustrates another motion information extractor 116.
  • the motion information is used to track motion states (phases) for reconstruction purposes.
  • the motion information extractor 116 includes a motion state or phase selector 902, which selects data for reconstruction based on a motion state or phase of interest.
  • the phase selector 902 selects data based on image data generated by the reconstructor 114 and/or other reconstructor.
  • the image data includes a plurality of sliding window reconstructions or slices for a series of short scan segments. It is to be appreciated that the reconstructions can be generated with projection data corresponding to half rotations, or one hundred eighty degrees (180°) plus a fan angle, or other amounts of projection data.
  • the reconstructed slices correspond to the same z-axis position within the scan volume, but different motion states.
  • the phase selector 902 presents the slices, for example, via a monitor or other display device, and receives user input indicative of the motion state of interest. In one instance, this can be achieved by having the user manually select one of the slices, for example, by entering a slice number, clicking on a slice with a mouse or other pointing device, and/or otherwise. In another instance, the user enters indicia that identifies the state, and the phase selector 902 automatically selects a slice corresponding to the state. For example, where the data includes the abdomen and the user enters selection indicia corresponding to the highest inhalation state of the lung, the phase selector 902 identifies the slice in which the lung is the largest. This can be done based on Hounsfield units or otherwise. Of course, the user can override the phase selector 902.
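The automatic selection described above can be sketched as follows, under the assumption that "the slice in which the lung is the largest" is approximated by counting voxels in a typical lung Hounsfield range. The function name and the HU thresholds are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def select_inhalation_slice(slices, hu_low=-950.0, hu_high=-500.0):
    """Return the index of the slice with the most lung-like voxels.

    Lung parenchyma typically falls in roughly the -950 to -500 HU range;
    the slice with the largest count of such voxels is taken to be the
    highest-inhalation state.  Thresholds are assumptions.
    """
    lung_counts = [np.count_nonzero((s > hu_low) & (s < hu_high))
                   for s in slices]
    return int(np.argmax(lung_counts))

# Toy sliding-window slices at one z-position: the second has the larger "lung".
base = np.full((32, 32), 40.0)                  # soft-tissue background (HU)
small, large = base.copy(), base.copy()
small[10:15, 10:15] = -800.0                    # small lung region
large[8:20, 8:20] = -800.0                      # larger lung region
print(select_inhalation_slice([small, large]))  # → 1
```

As the text notes, the user can always override the automatic choice.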
  • the phase selector 902 generates a signal based on the selected sliding window slice.
  • the signal or another signal may also provide information such as the selected motion state.
  • the signal(s) is provided to the reconstructor 114, which reconstructs a plurality of contiguous slices for a range of projection data around the projection data corresponding to the selected sliding window slice (the start slice), for example, the slices for projections in the range of the start slice projection ± π/2.
  • the reconstructor 114 reconstructs a second set of sliding window slices. These slices are based on the last slice of the above noted plurality of contiguous slices. Likewise, these sliding window slices can be generated with projection data corresponding to half rotations or otherwise, and correspond to the same z-axis position within the scan volume, but different motion states.
  • a motion state identifier 904 identifies which of the sliding window slices corresponds to the motion state of the last slice in the above noted plurality of contiguous slices.
  • a similarity metric determiner 906 is used to identify such a slice.
  • the similarity metric determiner 906 can determine a similarity between each of the sliding window slices and the last slice. In one instance, this includes determining a correlation value for each of the sliding window slices with respect to the last slice. Other metrics such as cross-correlation, root mean square, deviation, and/or other metrics can additionally or alternatively be used.
  • the sliding window slice having the highest degree of correlation with the last slice is deemed to correspond to the same motion state of the last slice.
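The similarity step can be illustrated with a Pearson correlation between each candidate sliding window slice and the last slice; the highest-scoring candidate is deemed to share its motion state. Function names and the synthetic test data below are assumptions, and other metrics (cross-correlation, root mean square deviation) could be substituted as the text notes.

```python
import numpy as np

def correlation(a, b):
    """Pearson correlation between two slices, computed on flattened pixels."""
    a, b = a.ravel(), b.ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar_slice(candidates, last_slice):
    """Index of the sliding window slice best matching the last slice."""
    scores = [correlation(c, last_slice) for c in candidates]
    return int(np.argmax(scores))

# Synthetic example: three unrelated slices plus a slightly noisy copy of
# the last slice, which should win.
rng = np.random.default_rng(0)
last_slice = rng.normal(size=(16, 16))
candidates = [rng.normal(size=(16, 16)) for _ in range(3)]
candidates.append(last_slice + 0.05 * rng.normal(size=(16, 16)))
print(most_similar_slice(candidates, last_slice))  # → 3
```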
  • the projection data corresponding to this slice can be used as the start data for the next set of contiguous slices to be generated.
  • the start projection data is provided to the reconstructor 114, which reconstructs the next plurality of contiguous slices for a range of projection data around the projection that marks the center of the reconstruction interval of the next start slice.
  • the above can be repeated until enough sets of slices (e.g., 2 to 100) for a desired volume (e.g., 1 mm to 100 mm) or field of view along the z-axis are generated.
  • the resulting volume of data can then be used for various reconstructions, including, but not limited to, multi-cycle reconstructions based on phase or motion states in which data from adjacent motion cycles is combined, which can improve image quality.
  • FIGURE 10 illustrates a method in connection with the motion information extractor of FIGURE 9.
  • a plurality of sliding window slices 1100, 1102, 1104, ..., 1106 for a series of short scan segments are generated.
  • the sliding window slice corresponding to a motion state of interest is selected as discussed herein.
  • slice 1106 is selected and referred to as the start slice.
  • a first plurality of contiguous slices 1108, including a first slice 1110 and a last slice 1112, is generated for a range of projection data around the projection data corresponding to the selected sliding window slice 1106.
  • a second plurality of sliding window slices 1114, 1116, ..., 1118 is generated based on the last slice 1112 of the first plurality of contiguous slices 1108.
  • the projection data for the sliding window slice of the second plurality with the highest degree of similarity (slice 1114 in the illustrated example) with the last slice 1112 of the first plurality of contiguous slices 1108 is selected as the next start projection data for the next set of contiguous slices, and acts 1006 to 1008 are repeated. If no more sets of contiguous slices are to be generated, then at 1014, the method ends.
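Structurally, the repeat-until-volume loop of FIGURES 9 and 10 can be sketched as below. The reconstruction and similarity steps are stubbed with toy functions that track projection indices only (all names are assumptions); in practice they would invoke the reconstructor 114 and the similarity metric determiner 906 on real projection data.

```python
def reconstruct_contiguous(start_idx, n_slices=4):
    """Stub: pretend each contiguous set covers n_slices projection indices."""
    return list(range(start_idx, start_idx + n_slices))

def sliding_window_candidates(last_idx, n_candidates=3):
    """Stub: candidate start indices near the last slice's projection."""
    return [last_idx + k for k in range(n_candidates)]

def pick_matching_state(candidates, last_idx):
    """Stub for the similarity step: here, simply take the first candidate."""
    return candidates[0]

def build_volume(start_idx, n_sets):
    """Stitch n_sets contiguous slice sets, restarting at the matching state."""
    sets = []
    for _ in range(n_sets):
        contiguous = reconstruct_contiguous(start_idx)
        sets.append(contiguous)
        candidates = sliding_window_candidates(contiguous[-1])
        start_idx = pick_matching_state(candidates, contiguous[-1])
    return sets

print(build_volume(0, 3))  # → [[0, 1, 2, 3], [3, 4, 5, 6], [6, 7, 8, 9]]
```

Each set starts at the projection whose sliding window slice matched the motion state of the previous set's last slice, which is what keeps the stitched volume in a consistent motion state.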
  • the central projections corresponding to the first slice in the stacks of slices, or their associated points in time, can be recorded for use as so-called phase points for other reconstructions like gated reconstruction, for example.
  • the techniques described herein may be implemented by way of computer readable instructions, which when executed by a computer processor(s), cause the processor(s) to carry out the described acts.
  • the instructions are stored in a computer readable storage medium associated with or otherwise accessible to a relevant computer, such as a dedicated workstation, a home computer, a distributed computing system, the console 120, and/or other computer.
  • the acts need not be performed concurrently with data acquisition.
  • the term “substantially” as used herein means that the data is at least 99% structure of interest or 99% free of other structure. In another instance, the term “substantially” as used herein means that the data is at least 95% structure of interest or 95% free of other structure. In another instance, the term “substantially” as used herein means that the data is at least 90% structure of interest or 90% free of other structure. In another instance, the term “substantially” as used herein means that the data is at least 80% structure of interest or 80% free of other structure. In another instance, the term “substantially” as used herein means that the data is at least 50% structure of interest or 50% free of other structure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a method that includes creating a second set of projection data that includes substantially only a selected structure of interest, based on a first set of projection data that includes the selected structure of interest and other structure. Another method includes generating a second plurality of sliding window slices for a last slice of a first plurality of slices, selecting a second sliding window slice from the second plurality of sliding window slices based on the last slice of the first plurality of slices, and generating a second plurality of slices, including a first slice and a last slice, from a range of projection data around projection data corresponding to the last slice of the first plurality of slices.
PCT/IB2009/054792 2008-11-07 2009-10-28 Extraction d'informations de mouvement WO2010052615A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11230008P 2008-11-07 2008-11-07
US61/112,300 2008-11-07

Publications (2)

Publication Number Publication Date
WO2010052615A2 (fr) 2010-05-14
WO2010052615A3 WO2010052615A3 (fr) 2011-05-12

Family

ID=42153347

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2009/054792 WO2010052615A2 (fr) 2008-11-07 2009-10-28 Extraction d'informations de mouvement

Country Status (1)

Country Link
WO (1) WO2010052615A2 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103198497A (zh) * 2011-09-28 2013-07-10 西门子公司 确定运动场和利用运动场进行运动补偿重建的方法和系统


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004098373A2 (fr) * 2003-05-07 2004-11-18 Stiftung Caesar Procede pour reduire des perturbations causees par des artefacts

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZIEGLER A ET AL: "Iterative reconstruction of a region of interest for transmission tomography", Proceedings of the International Society for Optical Engineering (SPIE), SPIE, USA, vol. 6142, no. 614223, 12 February 2006, pages 614223-1, XP007905942, ISSN 0277-786X, DOI 10.1117/12.650666 *
ZIEGLER ANDY ET AL: "Iterative reconstruction of a region of interest for transmission tomography", Medical Physics, AIP, Melville, NY, US, vol. 35, no. 4, 12 March 2008, pages 1317-1327, XP012115989, ISSN 0094-2405, DOI 10.1118/1.2870219 *

Also Published As

Publication number Publication date
WO2010052615A3 (fr) 2011-05-12

Similar Documents

Publication Publication Date Title
EP2501290B1 (fr) Dispositif de réglage du champ de vision du plan de balayage, dispositif de détermination et/ou dispositif d'évaluation de la qualité
US7348564B2 (en) Multi modality imaging methods and apparatus
US8553959B2 (en) Method and apparatus for correcting multi-modality imaging data
US6856666B2 (en) Multi modality imaging methods and apparatus
US8682415B2 (en) Method and system for generating a modified 4D volume visualization
RU2471204C2 (ru) Локальная позитронная эмиссионная томография
US7920670B2 (en) Keyhole computed tomography
US20120278055A1 (en) Motion correction in radiation therapy
US11410349B2 (en) Methods for data driven respiratory motion estimation
IL158197A (en) Methods and Compensation Device for Cutting
US20230130015A1 (en) Methods and systems for computed tomography
JP2018505390A (ja) 放射線放出撮像システム、記憶媒体及び撮像方法
US20110110570A1 (en) Apparatus and methods for generating a planar image
JP2004237076A (ja) マルチ・モダリティ・イメージング方法及び装置
US10299752B2 (en) Medical image processing apparatus, X-ray CT apparatus, and image processing method
US7853314B2 (en) Methods and apparatus for improving image quality
US9858688B2 (en) Methods and systems for computed tomography motion compensation
US8873823B2 (en) Motion compensation with tissue density retention
WO2010052615A2 (fr) Extraction d'informations de mouvement
US10736596B2 (en) System to improve nuclear image of moving volume
JP6877881B2 (ja) 医用画像処理装置、x線ct装置及び画像処理方法
JP2021525145A (ja) 時間ゲーティング3次元イメージング

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09796065

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09796065

Country of ref document: EP

Kind code of ref document: A2