WO2009004571A1 - Method and apparatus for image reconstruction - Google Patents

Method and apparatus for image reconstruction

Info

Publication number
WO2009004571A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
artifact
data
reconstruction
processing
Prior art date
Application number
PCT/IB2008/052632
Other languages
French (fr)
Inventor
Peter Forthmann
Roland Proksa
Original Assignee
Koninklijke Philips Electronics N.V.
Philips Intellectual Property & Standards Gmbh
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. and Philips Intellectual Property & Standards GmbH
Publication of WO2009004571A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G06T11/003 - Reconstruction from projections, e.g. tomography
    • G06T11/006 - Inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10072 - Tomographic images
    • G06T2207/10081 - Computed x-ray tomography [CT]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20224 - Image subtraction

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Algebra (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

To reduce artifacts comprised in a CT image, the present invention provides a method comprising the steps of: reconstructing a first image by processing a first set of CT data using a reconstruction algorithm, the first image comprising an artifact; generating a second image by processing the first image, the second image comprising a reduced artifact; and generating a third image by processing the first image and the second image, the third image comprising a further-reduced artifact. In an exemplary embodiment of the present invention, the step of generating the third image comprises the steps of: generating an error image representing the artifacts, based on the second image, and generating the third image by subtracting the error image from the first image. By using the provided method, the artifacts can be greatly reduced and hence a better-quality CT image can be obtained.

Description

METHOD AND APPARATUS FOR IMAGE RECONSTRUCTION
Field of the Invention
The present invention relates generally to image reconstruction, and, more particularly, to artifact reduction in medical images.
Background of the Invention
Computed Tomography (CT) X-ray data is computationally compiled from absorption data of X-rays that pass through an object and is reconstructed into an image. It is known that an image with low attenuation regions, such as soft biologic tissue, can contain artifacts generated by high attenuation objects. Artifacts degrade the quality of a CT image, obstruct identification and/or diagnosis, and should be removed or reduced to obtain an accurate image. Artifact reduction is sometimes accomplished through reprojection and reconstruction of the image, using a number of mathematical systems.
US patent No. 6,266,388 to Jiang Hsieh describes a two-pass cone beam image reconstruction method that can reduce artifacts in a CT image by generating an error image, using reprojection and reconstruction methods, based on a portion of an initial reconstruction image (which is itself generated from a collected cone beam image data set) and subtracting the error image from the initial reconstruction image.
Summary of the Invention
An aspect of some embodiments of the present invention relates to reducing artifact in a medical image reconstructed from a set of input data, which is generally generated by a medical scan system such as a CT X-ray system, an MRI system or any 3D X-ray system. The basic idea of the present invention is as follows: after a first pass, wherein a first image is reconstructed based on a set of input data, and a second pass, wherein the artifact comprised in the first image is reduced, a third pass is added. In the third pass, the output image of the second pass, referred to as the second image, is further processed to determine an error image representing the artifact comprised in the first image, and this error image is subtracted from the first image to obtain a third image, in which the artifact is further reduced.
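As a rough, non-limiting illustration of this three-pass flow, the Python sketch below simply wires the passes together; the callables `reconstruct` and `estimate_error` are hypothetical stand-ins for the reconstruction and error-image algorithms discussed in the embodiments, not part of the disclosure.

```python
def three_pass_reconstruction(input_data, reconstruct, estimate_error):
    """Minimal sketch of the three-pass scheme (assumed interfaces).

    reconstruct(data)      -> image reconstructed from projection data (1st pass)
    estimate_error(image)  -> image modelling the artifacts of the first image
    """
    first_image = reconstruct(input_data)                      # 1st pass
    second_image = first_image - estimate_error(first_image)   # 2nd pass
    error_image = estimate_error(second_image)                 # 3rd pass: refined error estimate
    third_image = first_image - error_image                    # subtracted from the FIRST image
    return third_image
```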
Any algorithms known in the art that are appropriate for reconstructing images from a set of CT data, segmenting a CT image into a plurality of images, or reprojecting images to form at least a set of data can be used in the present invention.
There is thus provided, in accordance with an exemplary embodiment of the invention, a method for artifact reduction in an image, comprising the steps of: reconstructing a first image by processing a first set of input data using a reconstruction algorithm, the first image comprising an artifact; generating a second image by processing the first image, the second image comprising a reduced artifact; and generating a third image by processing the first image and the second image, the third image comprising a further-reduced artifact.
In an embodiment of the present invention, the step of generating the third image comprises the steps of: generating an error image representing the artifacts comprised in the first image, based on the second image, and generating the third image by subtracting the error image from the first image. Because the second image already comprises a reduced artifact, the error image derived from it can represent the artifact in the first image more accurately; thus the third image comprises less artifact after this more accurate error image is subtracted from the first image.
In an embodiment of the present invention, the step of generating the error image comprises: segmenting the second image to provide an image with high attenuation objects separate from low attenuation objects; reprojecting the segmented image to form at least a set of data; reconstructing an image from the set of data; and determining the error image based on the reconstructed image.
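A minimal 2D sketch of this error-image step is given below. It assumes the parallel-beam `radon`/`iradon` transforms from scikit-image as stand-ins for cone-beam reprojection and reconstruction, a plain intensity threshold as the segmentation, and a simple masking rule as the determination step; all of these are illustrative assumptions rather than the algorithms prescribed by the embodiments.

```python
import numpy as np
from skimage.transform import radon, iradon


def estimate_error_image(image, threshold):
    """Segment -> reproject -> reconstruct -> determine (2D sketch).

    Assumes a square image whose content lies inside the inscribed circle,
    the default geometry of scikit-image's parallel-beam transforms.
    """
    theta = np.linspace(0.0, 180.0, image.shape[0], endpoint=False)

    # Segment: keep only the high attenuation objects.
    high = np.where(image >= threshold, image, 0.0)

    # Reproject the segmented image to form a set of data.
    sinogram = radon(high, theta=theta)

    # Reconstruct an image from that set of data.
    recon = iradon(sinogram, theta=theta)

    # Determine the error image: structure appearing outside the high
    # attenuation objects is taken here as the artifact estimate.
    return np.where(high > 0.0, 0.0, recon)
```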
In an embodiment of the present invention, to obtain an image in which the artifact is further reduced, the method can further comprise at least one set of iterative steps of utilizing the first image and an output image of a previous step comprising a reduced artifact to generate a new image, in which the artifact is reduced further than in the output image of the previous step. Thus, a better-quality image with fewer artifacts can be obtained from an image reconstruction process having more than three passes.
There is thus provided, in accordance with an exemplary embodiment of the invention, an apparatus for reconstructing an image using data collected in a cone beam scan, the apparatus comprising a processor configured for: generating a first image by processing the collected data using a reconstruction algorithm, the first image comprising an artifact; generating a second image by processing the first image, the second image comprising a reduced artifact; and generating a third image by processing the first image and the second image, the third image comprising a further-reduced artifact.
The processor can be configured in an image reconstructor, a computer, or any other device of a medical system, e.g., a CT system or an MRI system, that is capable of processing a medical data set.
There is thus provided, in accordance with an exemplary embodiment of the invention, a computer program configured to perform the methods described above.
Other objects and effects of the present invention will become apparent from the following description and the appended claims when taken in conjunction with the accompanying drawings, and a more comprehensive understanding of the present invention will be obtained.
Brief Description of the Drawings
Fig. 1 is a schematic drawing of a cone beam CT system;
Fig. 2 is a schematic drawing of a cone beam CT system comprising a processor configured to reduce artifact in accordance with an exemplary embodiment of the invention;
Fig. 3 is a block diagram showing a method used to reduce artifact in a cone beam CT image in accordance with an exemplary embodiment of the invention;
Fig. 4 is a detailed block diagram showing a method used to reduce artifact in a cone beam CT image in accordance with an exemplary embodiment of the invention; and
Fig. 5 illustrates a block diagram of a computer program implementing the artifact reduction methods in accordance with an exemplary embodiment of the invention.
Throughout the above drawings, like reference numerals will be understood to refer to like, similar or corresponding features or functions.
Detailed Description of Exemplary Embodiments
Referring to Fig. 1, a computed tomography (CT) imaging system 100 is schematically shown as including a gantry 110, a control mechanism 120, a data acquisition system (DAS) 130, an image reconstructor 140 and a computer 150. The gantry 110 can be a "third generation" CT scanner having an X-ray source and detector array for scanning an object. The control mechanism 120 can provide power and timing signals to control the gantry's X-ray source and the gantry's rotational speed and position. The DAS 130 samples data collected from the gantry 110 and converts the data to digital signals for subsequent processing. The image reconstructor 140 receives the sampled and digitized X-ray data from the DAS 130 and performs image reconstruction. The reconstructed image is applied as an input to the computer 150 for storing, displaying and other subsequent processing operations. The computer 150 also can send commands to the control mechanism 120 to control the gantry 110. In some CT systems, the control mechanism 120 and the DAS 130 are integrated. The algorithms described below may be performed by a processor 210 in the image reconstructor 140, as shown in Fig. 2. Alternatively, the processor 210 can be located in the computer 150 or in other devices coupled to the CT system.
Fig. 3 is a block diagram illustrating a CT X-ray imaging reconstruction method 300 according to an exemplary embodiment of the present invention, used to process CT data collected from a CT scanner and reduce artifacts in the image.
In step S310, the collected CT data are used for reconstruction purposes in order to form a first image, i.e., a first reconstruction image. Normally, reconstruction S310 creates many images of the scanned object from the cone beam data, each image recording an X-ray slice taken through a single plane passing through the object. The reconstruction algorithm used can be an exact reconstruction algorithm or an inexact reconstruction algorithm. Generally, an exact reconstruction algorithm cannot reconstruct an artifact-free image from an incomplete trajectory such as a circle. For helical acquisition, i.e., a complete trajectory, cone beam artifacts can be produced by an inexact reconstruction algorithm. Step S310 can be referred to as the 1st pass.
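For orientation only, the following 2D analogue of the 1st pass uses scikit-image's parallel-beam transforms; undersampled filtered back-projection stands in for an inexact reconstruction, and the added high attenuation disc is an illustrative "metal-like" object. Neither choice is taken from the text.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon


def first_pass_example(n_angles=60):
    """1st-pass sketch: simulate projection data, then reconstruct a first image."""
    phantom = shepp_logan_phantom()                               # 400 x 400 test image
    yy, xx = np.ogrid[:phantom.shape[0], :phantom.shape[1]]
    phantom[(yy - 150) ** 2 + (xx - 220) ** 2 < 8 ** 2] = 4.0     # high attenuation object

    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    data = radon(phantom, theta=theta)                            # "collected" CT data
    first_image = iradon(data, theta=theta)                       # first reconstruction image
    return first_image, theta
```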
In step S320, the first image is first processed to generate an image representing the artifacts in the first image. This artifact image is then subtracted from the first image to form a second image, in which the artifact is reduced and which is thus referred to as the artifact-reduced image. Step S320 can be referred to as the 2nd pass.
In step S330, first, in step S332, the output image of the 2nd pass, i.e., the second image, is processed to generate an error image that better represents the artifacts in the first image. Then, in step S334, the error image is subtracted from the first image to form a third image, in which the artifacts are further reduced. In S330, the error image can represent the artifacts more accurately than the artifact image generated in S320, so the artifacts in the third image are reduced further than in the second image. Step S330 can be referred to as the 3rd pass. If the quality of the third image is good enough, the artifact-reduction process can be stopped, and the third image can be fed into the computer 150 of Fig. 2. If not, one or more further steps can follow to reduce the artifacts further. In each following step, the input image is the output image of the directly preceding step and is processed to generate an error image representing the artifacts in the first image; a higher-quality image is then generated by subtracting this error image from the first image. The number of iterative steps depends on the requirements.
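These follow-up steps amount to a simple iteration: each pass re-estimates the error from the latest output and always subtracts it from the first image. A sketch is given below; the optional tolerance-based stopping rule is an assumption, since the text only states that the number of iterations depends on the requirements.

```python
import numpy as np


def iterative_artifact_reduction(first_image, estimate_error, max_passes=3, tol=None):
    """Repeatedly refine the error estimate and subtract it from the 1st image."""
    current = first_image
    for _ in range(max_passes):
        error_image = estimate_error(current)      # error estimated from the previous output
        new_image = first_image - error_image      # always subtracted from the first image
        if tol is not None and np.max(np.abs(new_image - current)) < tol:
            return new_image                       # quality deemed good enough
        current = new_image
    return current
```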
In S320, S330 and, if applicable, subsequent steps, the algorithms used in the segmentation, reprojection, reconstruction, filtering and determination processes can be the respective algorithms known in the art that are appropriate for each individual step. The algorithms used in each segmentation process can be the same, such as applying a threshold to the value of each pixel, gradient identification, continuity reconstruction, or other methods known in the art that yield identified high attenuation regions that are separate from the image. Similarly, the algorithms used in the reprojection, reconstruction, filtering and determination processes of each step can be the same. The person skilled in the art will appreciate that the segmentation, reprojection, reconstruction, filtering and determination algorithms used in different passes may also merely be similar, as long as they reproduce the artifacts as accurately as possible and do not introduce new artifacts.
Fig. 4 is a detailed block diagram showing a method used to reduce artifact in a cone beam CT image in accordance with an exemplary embodiment of the invention. Step S4100 comprises two steps: step S4110 for acquiring CT data and step S4120 for reconstructing a first image by processing the acquired CT data using a reconstruction algorithm. The reconstruction algorithm can be an inexact reconstruction algorithm or an exact reconstruction algorithm. Step S4100 performs the function of the 1st pass, reconstructing a first image.
Step S4200 performs the function of the 2nd pass, generating a second image, i.e., an artifact-reduced image. In step S4210, the first image, i.e., the reconstructed data generated by S4120, is segmented. Generally, segmentation refers to a process in which the volume is separated into high attenuation and low attenuation regions. An exemplary method of accomplishing this separation uses a threshold on the value of each pixel, a gradient identification method, a continuity reconstruction method, or any other appropriate method. The segmented data set has separate areas identified as the high attenuation objects, and these high attenuation objects are separated from the first image. The resultant high attenuation object image is filtered in step S4220 to remove any high-frequency degradation that may have occurred during processing. For example, if a simple threshold segmentation algorithm is used, a filtering process can smooth out the edges in the segmented image. Step S4220 is optional and can be skipped, especially when the segmentation algorithm used does not introduce sharp edges in the segmented image or already comprises a smoothing/filtering operation. In step S4230, the high attenuation objects are reprojected. In step S4240, the reprojected high attenuation data is reconstructed. Normally, reconstruction step S4240 uses the same algorithm utilized in step S4120 to generate an image of the high attenuation objects together with the artifacts that these high attenuation objects create in the low attenuation regions. Alternatively, a consistent CT data set can replace an inconsistent CT data set.
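The segmentation and optional filtering steps (S4210, S4220) might look as follows; the hard intensity threshold and the Gaussian smoothing width are illustrative choices, not values given in the embodiment.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def segment_high_attenuation(image, threshold, smooth_sigma=1.0):
    """S4210/S4220 sketch: separate the high attenuation objects, then
    optionally smooth the hard edges that a simple threshold creates."""
    high = np.where(image >= threshold, image, 0.0)     # S4210: threshold segmentation
    if smooth_sigma:                                    # S4220: optional filtering
        high = gaussian_filter(high, sigma=smooth_sigma)
    return high
```

The reprojection (S4230) and reconstruction (S4240) of the segmented image then follow the same pattern as in the error-image sketch given earlier.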
In step S4260, the artifact portion of the reconstructed image is determined. In step S4270, the artifacts so determined are subtracted from the data set generated by segmentation step S4210, i.e., the consistent data set containing the segmented high attenuation objects and the low attenuation objects. The same result can also be achieved by subtracting the output image of S4260 from the first image, i.e., the output image of S4120, to obtain the second image. The second image contains reduced artifacts.
Step S4300 performs the function of the 3rd pass, generating a third image, i.e., a further artifact-reduced image. S4300 comprises segmentation step S4310, optional filtering step S4320, reprojection step S4330, reconstruction step S4340, determination step S4360 and artifact subtraction step S4370. These steps perform the same or similar functions as the corresponding steps comprised in S4200, the major differences being that the input of S4310 is the second image, i.e., the artifact-reduced image, and that the determined image of S4360 is subtracted from the first image, i.e., the initial reconstruction image reconstructed from the collected CT data, instead of from the second image or its corresponding data set. After step S4300, a better-quality image, i.e., one in which the artifact is further reduced, is generated. It can be fed into a computer or other image processing and display devices. Alternatively, if the output image of S4300 is not good enough, meaning that the artifact level still needs to be reduced further, further similar steps can be performed. Each follow-up step uses the output image of its preceding step as the input to its segmentation step, determines a corresponding artifact image, and subtracts the determined artifact image from the initial reconstruction image.
As shown in Fig. 5, another embodiment of the present invention further provides a computer program configured to perform the abovementioned artifact-reduction methods. The computer program comprises three reconstruction modules: a first reconstruction module configured to generate a first image by processing a set of collected data using a reconstruction algorithm, the first image comprising an artifact; a second reconstruction module configured to generate a second image by processing the first image, the second image comprising a reduced artifact; and a third reconstruction module configured to generate a third image by processing the first image and the second image, the third image comprising a further-reduced artifact. The second reconstruction module is further configured to: segment the first image to provide at least a segmented image with high attenuation objects separate from low attenuation objects; reproject the segmented image to form at least a second set of data; reconstruct an image from the second set of data using a reconstruction algorithm; determine an image representing the artifact based on the reconstructed image; and subtract the determined image from the first image to obtain the second image. The third reconstruction module is further configured to: segment the second image to provide at least a segmented image with high attenuation objects separate from low attenuation objects; reproject the segmented image to form at least a third set of data; reconstruct an image from the third set of data using a reconstruction algorithm; determine an image representing the artifact based on the reconstructed image; and subtract the determined image from the first image to obtain the third image.
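Expressed as program structure, the modules of Fig. 5 could be wired as in the sketch below; the class names and the shared `estimate_error` callable are hypothetical, chosen only to mirror the module split described here.

```python
class FirstReconstructionModule:
    """Generates the first image from the collected data."""

    def __init__(self, reconstruct):
        self.reconstruct = reconstruct

    def run(self, collected_data):
        return self.reconstruct(collected_data)


class ErrorSubtractionModule:
    """Shared behaviour of the second and third reconstruction modules:
    estimate the artifact from an input image and subtract it from the
    first image."""

    def __init__(self, estimate_error):
        self.estimate_error = estimate_error

    def run(self, input_image, first_image):
        return first_image - self.estimate_error(input_image)


# Usage sketch: the second module processes the first image, the third
# module processes the second image; both subtract from the first image.
# first  = FirstReconstructionModule(reconstruct).run(collected_data)
# second = ErrorSubtractionModule(estimate_error).run(first, first)
# third  = ErrorSubtractionModule(estimate_error).run(second, first)
```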
Optionally, all the reconstruction algorithms used in these three reconstruction modules can be the same.
It should be known to the person skilled in the art that the method described above can also be used in other medical imaging systems, such as magnetic resonance imaging and any 3D X-ray imaging system. Thus, such MRI and 3D X-ray systems can have corresponding apparatuses to perform the artifact reduction methods.
The embodiments described above are only illustrative and are not intended to limit the technical approach of the present invention. Those skilled in the art will understand that the technical approaches of the present invention can be modified or equivalently replaced without departing from their spirit and scope, and such modifications will also fall within the protective scope of the claims of the present invention.

Claims

Claims:
1. A method for artifact reduction in an image comprising the steps of: a) Reconstructing a first image by processing a first set of input data using a reconstruction algorithm, the first image comprising an artifact; b) Generating a second image by processing the first image, the second image comprising a reduced artifact; and c) Generating a third image by processing the first image and the second image, the third image comprising a further-reduced artifact.
2. A method as claimed in claim 1, wherein the step c) further comprises the steps of:
I) Generating an error image representing the artifact in the first image, based on the second image; and
II) Generating the third image by subtracting the error image from the first image.
3. A method as claimed in claim 2, wherein said step I) further comprises the steps of: i) Segmenting the second image to provide at least a segmented image with high attenuation objects separate from low attenuation objects; ii) Reprojecting the segmented image to form at least a second set of data; iii) Reconstructing an image from the second set of data using a reconstruction algorithm; and iv) Determining the error image based on the reconstructed image.
4. A method as claimed in claim 1, wherein said step b) further comprises the steps of:
I) Segmenting the first image to provide at least a segmented image with high attenuation objects separate from low attenuation objects;
II) Reprojecting the segmented image to form at least a third set of data;
III) Reconstructing an image from the third set of data using a reconstruction algorithm;
IV) Determining an image representing the artifact in the first image, based on the reconstructed image; and
V) Subtracting the determined image from the first image to obtain the second image.
5. A method as claimed in claim 1, further comprising the steps of: d) Generating a fourth image by processing the first image and the third image, the fourth image comprising an artifact that is further reduced than the artifact of the third image.
6. A method as claimed in claim 1, wherein the first set of input data is at least part of data collected from any one of a cone beam X-ray device and a magnetic resonance device.
7. A method as claimed in claim 1, wherein the first set of input data is a set of data collected from a 3D X-ray scan device.
8. A method as claimed in any one of claims 1, 3 and 4, wherein the reconstruction algorithm is compatible with cone beam X-ray data.
9. Apparatus for reducing artifact in an image, the apparatus comprising a processor configured to:
Generate a first image by processing a set of collected data using a reconstruction algorithm, the first image comprising an artifact;
Generate a second image by processing the first image, the second image comprising a reduced artifact; and
Generate a third image by processing the first image and the second image, the third image comprising a further-reduced artifact.
10. Apparatus as claimed in claim 9, wherein, to generate the second image, the processor is further configured to:
I) Segment the first image to provide at least a segmented image with high attenuation objects separate from low attenuation objects;
II) Reproject the segmented image to form at least a second set of data;
III) Reconstruct an image from the second set of data, using a reconstruction algorithm;
IV) Determine an image representing the artifact, based on the reconstructed image; and
V) Subtract the determined image from the first image to obtain the second image.
11. Apparatus as claimed in claim 9, wherein, to generate the third image, the processor is further configured to: i) Segment the second image to provide at least a segmented image with high attenuation objects separate from low attenuation objects; ii) Reproject the segmented image to form at least a third set of data; iii) Reconstruct an image from the third set of data, using a reconstruction algorithm; iv) Determine the error image based on the reconstructed image; and v) Subtract the error image from the first image to generate the third image.
12. Apparatus as claimed in claim 9, wherein the set of collected data is collected from any one of a cone beam X-ray device and a magnetic resonance device.
13. A computer program for reducing artifact in an image, the computer program comprising: a) A first reconstruction module, configured to generate a first image by processing a set of collected data using a reconstruction algorithm, the first image comprising an artifact; b) A second reconstruction module, configured to generate a second image by processing the first image, the second image comprising a reduced artifact; and c) A third reconstruction module, configured to generate a third image by processing the first image and the second image, the third image comprising a further-reduced artifact.
14. A computer program as claimed in claim 13, wherein the second reconstruction module is further configured to:
I) Segment the first image to provide at least a segmented image with high attenuation objects separate from low attenuation objects;
II) Reproject the segmented image to form at least a second set of data;
III) Reconstruct an image from the second set of data, using a reconstruction algorithm;
IV) Determine an image representing the artifact, based on the reconstructed image; and
V) Subtract the determined image from the first image to obtain the second image.
15. A computer program as claimed in claim 14, wherein the third reconstruction module is further configured to:
I) Segment the second image to provide at least a segmented image with high attenuation objects separate from low attenuation objects;
II) Reproject the segmented image to form at least a third set of data;
III) Reconstruct an image from the third set of data, using a reconstruction algorithm;
IV) Determine an image representing the artifact, based on the reconstructed image; and
V) Subtract the determined image from the first image to obtain the third image.
16. A computer program as claimed in claim 15, wherein the reconstruction algorithms used in the first reconstruction module, the second reconstruction module and the third reconstruction module are the same reconstruction algorithms.
PCT/IB2008/052632 2007-07-05 2008-07-01 Method and apparatus for image reconstruction WO2009004571A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN200710127466 2007-07-05
CN200710127466.8 2007-07-05

Publications (1)

Publication Number Publication Date
WO2009004571A1 true WO2009004571A1 (en) 2009-01-08

Family

ID=39816930

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2008/052632 WO2009004571A1 (en) 2007-07-05 2008-07-01 Method and apparatus for image reconstruction

Country Status (1)

Country Link
WO (1) WO2009004571A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002086822A1 (en) * 2001-04-23 2002-10-31 Philips Medical Systems Technologies Ltd. Ct image reconstruction
WO2005076221A1 (en) * 2004-02-05 2005-08-18 Koninklijke Philips Electronics, N.V. Image-wide artifacts reduction caused by high attenuating objects in ct deploying voxel tissue class

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HSIEH JIANG ET AL: "An iterative approach to the beam hardening correction in cone beam CT", MEDICAL PHYSICS, AIP, MELVILLE, NY, US, vol. 27, no. 1, 1 January 2000 (2000-01-01), pages 23 - 29, XP012010948, ISSN: 0094-2405 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2453177B (en) * 2007-09-28 2010-03-24 Christie Hospital Nhs Foundati Image enhancement method
DE102008038357B3 (en) * 2008-08-19 2010-01-14 Siemens Aktiengesellschaft Method for generating 2D slice images from 3D projection data acquired by means of a CT system from an examination object containing metallic parts
CN109472835A (en) * 2017-09-07 2019-03-15 西门子保健有限责任公司 Handle the method for medical image and the image processing system of medical image
CN109472835B (en) * 2017-09-07 2023-12-01 西门子保健有限责任公司 Method for processing medical image data and image processing system for medical image data
US20210304461A1 (en) * 2020-03-24 2021-09-30 Siemens Healthcare Gmbh Method and apparatus for providing an artifact-reduced x-ray image dataset
US11854125B2 (en) * 2020-03-24 2023-12-26 Siemens Healthcare Gmbh Method and apparatus for providing an artifact-reduced x-ray image dataset

Similar Documents

Publication Publication Date Title
JP4820582B2 (en) Method to reduce helical windmill artifact with recovery noise for helical multi-slice CT
US7706497B2 (en) Methods and apparatus for noise estimation for multi-resolution anisotropic diffusion filtering
US9486178B2 (en) Radiation tomographic image generating apparatus, and radiation tomographic image generating method
US10013780B2 (en) Systems and methods for artifact removal for computed tomography imaging
US7747057B2 (en) Methods and apparatus for BIS correction
US20100183214A1 (en) System and Method for Highly Attenuating Material Artifact Reduction in X-Ray Computed Tomography
EP2357617B1 (en) X-ray computed tomography apparatus, reconstruction processing apparatus and image processing apparatus
JP2008535612A (en) Image processing system for circular and helical cone beam CT
CN110415311B (en) PET image reconstruction method, system, readable storage medium and apparatus
JP6505513B2 (en) X-ray computed tomography imaging apparatus and medical image processing apparatus
CN114548238A (en) Image three-dimensional reconstruction method and device, electronic equipment and storage medium
KR101783964B1 (en) Tomography apparatus and method for reconstructing a tomography image thereof
US6845143B2 (en) CT image reconstruction
JP5329103B2 (en) Image processing apparatus and X-ray CT apparatus
EP3680860A1 (en) Tomographic imaging apparatus and method of generating tomographic image
WO2009004571A1 (en) Method and apparatus for image reconstruction
US6980622B2 (en) Method and apparatus for image reconstruction and X-ray CT imaging apparatus
EP3329851B1 (en) Medical imaging apparatus and method of operating the same
JP2016198504A (en) Image generation device, x-ray computer tomography device and image generation method
JP6615531B2 (en) X-ray computed tomography apparatus and medical image processing apparatus
US11771390B2 (en) Method and device for determining the contour of anatomical structures in a digital X-ray-based fluoroscopic image
JP6052425B2 (en) Contour image generating device and nuclear medicine diagnostic device
JP3678375B2 (en) Radiation tomography equipment
CN110730977B (en) Low dose imaging method and device
JPH08272945A (en) Image processor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08763424

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08763424

Country of ref document: EP

Kind code of ref document: A1