USRE42034E1 - Method and apparatus for ultra fast symmetry and SIMD based projection-back projection for 3D pet image reconstruction - Google Patents

Method and apparatus for ultra fast symmetry and SIMD based projection-back projection for 3D PET image reconstruction

Info

Publication number
USRE42034E1
USRE42034E1 (application US 12/498,952)
Authority
US
United States
Prior art keywords
projecting
image data
data
projection
symmetry
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US12/498,952
Inventor
Zang Hee Cho
Young Bo Kim
Cheol Ok Lee
In Ki Hong
Sung Tak Chung
Hang Keun Kim
Young Don Son
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industry Academic Cooperation Foundation of Gachon University of Medicine and Science
Original Assignee
Industry Academic Cooperation Foundation of Gachon University of Medicine and Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020070027305A
Priority claimed from KR1020070040515A
Application filed by Industry Academic Cooperation Foundation of Gachon University of Medicine and Science filed Critical Industry Academic Cooperation Foundation of Gachon University of Medicine and Science
Priority to US12/498,952
Application granted
Publication of USRE42034E1
Legal status: Active

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01T MEASUREMENT OF NUCLEAR OR X-RADIATION
    • G01T1/00 Measuring X-radiation, gamma radiation, corpuscular radiation, or cosmic radiation
    • G01T1/29 Measurement performed on radiation beams, e.g. position or section of the beam; Measurement of spatial distribution of radiation
    • G01T1/2914 Measurement of spatial distribution of radiation
    • G01T1/2985 In depth localisation, e.g. using positron emitters; Tomographic imaging (longitudinal and transverse section imaging; apparatus for radiation diagnosis sequentially in different planes, stereoscopic radiation diagnosis)
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003 Reconstruction from projections, e.g. tomography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/003 Reconstruction from projections, e.g. tomography
    • G06T11/006 Inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/02 Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03 Computed tomography [CT]
    • A61B6/037 Emission tomography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2211/00 Image generation
    • G06T2211/40 Computed tomography
    • G06T2211/424 Iterative

Definitions

  • $SI_{x_r}(n, Z_{y_r,n,\theta})$, $SI_{x_r}(n, Z_{y_r,n,\theta}+1)$ and $SP_{\phi,\theta}(x_r,y_r)$ are according to subgroup “a-1” of FIG. 12.
  • $SI_{x_r}(-n, Z_{y_r,-n,\theta})$, $SI_{x_r}(-n, Z_{y_r,-n,\theta}+1)$ and $SP_{\phi,\theta}(x_r,y_r)$ are according to subgroup “a-2” of FIG. 12.
  • $SI_{x_r}(n, Z_{y_r,-n,\theta})$, $SI_{x_r}(n, Z_{y_r,-n,\theta}+1)$ and $SP_{\phi,-\theta}(x_r,y_r)$ are according to subgroup “b-1” of FIG. 12.
  • $SI_{x_r}(-n, Z_{y_r,n,\theta})$, $SI_{x_r}(-n, Z_{y_r,n,\theta}+1)$ and $SP_{\phi,-\theta}(x_r,y_r)$ are according to subgroup “b-2” of FIG. 12.
  • This SIMD packing concept can be applied equally to a backward projection.
  • recently available PCs provide a multi-processor environment.
  • to exploit this, a thread programming technique can be used.
  • the proposed SSP method is capable of being expanded to multi-processor systems. In this case, a job distribution with an equal amount of load is important for optimal performance.
  • the job load is therefore distributed along the x′ (or x_r) axis so that each CPU receives an equal amount of computation; simply assigning an equal range of x′ to each CPU would distribute the computational load unevenly, as illustrated in FIG. 6.
  • FIG. 6 illustrates the concept of the balanced job distribution based on the sums of the ray path lengths, that is, equal distribution of the computational load for each group S1, S2, S3 and S4.
  • the job loads are divided into four groups of equal area, i.e., S1, S2, S3 and S4. This redistribution ensures that each group has almost the same area, i.e., the same amount of computational load (a sketch of such a balanced split is given after this list).
  • the schematics of the new SSP projector and backprojector are provided in FIGS. 7 and 8, respectively.
  • the aligned frame was used by utilizing the rotation operation as well as the symmetry properties to reduce the number of loop counts.
  • the total number of instructions was reduced by combining the symmetry properties and SIMD technique.
  • FIG. 7 is a flow chart showing the projection.
  • the inner-most block 706 within the loop includes the z loop, the interpolation operation, the integration along the ray path 708, and the scanning along the x′ axis 710.
  • the intermediate results of rotated projection should be unpacked in order to be stored back to the original position.
  • the sizes of (I-Pack) and SIMD data sets (I-Pack+P-Pack) must be less than the cache sizes of L1 and L2, respectively, in order to maximize the cache hit ratio and the data access speed.
  • FIG. 8 is a flow chart showing the backprojection.
  • eight projection data, which have symmetric relationships, are packed for use in the SIMD operation.
  • These packed projection data, P-Pack are interpolated linearly along the y r direction to calculate the packed backprojected image data, I-Pack, for the final image I ⁇ (x′,y′,z) for all ⁇ directions.
  • these packed backprojected image data are unpacked and aligned to their valid positions in the semifinal image data I_φ(x′,y′,z) after each step of the θ loop.
  • these procedures are repeated by loops over y_r and x_r until the semifinal image I_φ(x′,y′,z) is fully reconstructed.
  • the semifinal image I_φ(x′,y′,z) is reconstructed for each view angle, φ.
  • Each semi-final image I ⁇ (x′,y′,z) is rotated by its own view angle ⁇ and then added up to reconstruct the final image I (x,y,z), the image at the original Cartesian coordinates.
  • the sizes of SIMD data sets, (I-Pack+P-Pack) and (I-Pack) should be less than the cache sizes of L2 and L1, respectively, for the maximization of the computation.
  • the rotation operation was used to utilize the aligned frame.
  • the projection plane is aligned with the aligned frame (x′,y′) and each incoming image plane is then rotated according to its respective view angle.
  • the original image is recovered from the aligned frame by rotating back to the original view angle, thus permitting an intermediate reconstructed image data to be formed.
  • a memory packing is performed prior to each projection/backprojection step. After the projection step, the packed SIMD data set in the projection domain is unpacked and returned to the original position. Similarly, the packed SIMD data set in the image domain is unpacked after each backprojection step.
  • the HRRT (High Resolution Research Tomograph) was developed by CPS/Siemens, Knoxville, Tenn., U.S.A.
  • HRRT has 119,808 detectors and is designed for ultra high resolution brain scanning. Since HRRT has the largest number of detectors (with the smallest detector size) among the human PET scanners that are currently available, it has the highest computational complexity for image reconstruction. The number of computations increases substantially (in proportion to the number of LORs) and is much larger than that of other existing PET scanners.
  • the reconstruction software provided by the HRRT package supports iterative algorithms such as OP-OSEM3D, which is the preferred reconstruction algorithm in the HRRT system.
  • the SSP method is supported by commercially available computer systems such as a common PC.
  • Three other platforms were chosen to compare the SSP method against the existing method.
  • the first platform is PC 1, which has Intel dual-core 3.0 GHz, 4 GB RAM, 1 MB L2 cache per CPU.
  • the second platform, PC2 is a high-end PC with two dual-core AMD 2.4 GHz, 8 GB RAM, 1 MB L2 cache per CPU.
  • the third platform is the current HRRT computer platform, which consists of an eight node cluster system (this is denoted as HRRT CPS system), wherein each node consists of two Xeon 3.06 GHz processors with 512 KB L2 cache and 2 GB RAM.
  • in 3D PET, “span” denotes the mode of axial compression.
  • the test was performed in span 3 and span 9, which are typically used in HRRT.
  • the parameters for OP-OSEM3D are as follows: 256 ⁇ 256 ⁇ 207 image pixels; six iterations; and 16 subsets. Thread programming is also used.
  • FIGS. 9 and 10 show the results of the projector and backprojector of the SSP.
  • the SSP technique or method is basically a tri-linear interpolation, whereas the existing method is a bi-linear method.
  • FIG. 9B shows the profiles for projection data at 45° and 90°. The same comparison was made on the backprojection data as well as the final reconstructed images, wherein the results are shown in FIGS. 10 and 11 .
  • the projected data and image quality of SSP are equivalent to the existing method, but yet with an advantage of nearly two orders of magnitude in computational speed.
  • the performance aspect of the SSP method (i.e., execution time on PC 1 and 2) was also compared.
  • the computation time was measured for the two existing compression modes, namely, span 3 and span 9.
  • the results indicate that the SSP method, as expected, indeed enhanced the performance by almost 80 times (2 (φ+90° symmetry) × 2 (x_r or mirror-symmetry) × 2 (y′-symmetry) × 2 (θ-symmetry) × 4 (SIMD), plus other optimizations) compared to the existing method (the original HRRT package). It should be noted that this performance enhancement can be obtained more or less independently of the system architecture.
  • the relative improvement will be higher as more oblique rays or angles are used, as seen from TABLES III and IV.
  • FIG. 9A illustrates a comparison of projection data between the existing method and the proposed SSP method.
  • Top: Existing method.
  • Middle: SSP.
  • Bottom: The difference between SSP and the existing method.
  • FIG. 9B illustrates the cut-views of sinogram at a specific view.
  • FIG. 10A illustrates a comparison of simple back-projection images between the existing method and the proposed SSP method.
  • Top: Existing method.
  • Middle: New SSP technique.
  • (Top: Comparison between the existing and new SSP. Bottom: The differences between the two.)
  • FIG. 11 illustrates a comparison of reconstructed images with six iterations and their differences.
  • FIG. 11A illustrates a set of reconstructed images with the existing method (top), with new SSP method (middle) and the differences (bottom).
  • the proposed SSP method will improve the overall computation time, especially when a dynamic functional study is desired.
  • the image reconstruction time for a normal HRRT PET scan with a commercially available PC requires about seven to eight minutes for span 3 after completing the sinogram formation and the precorrection processes compared with eighty minutes with current HRRT reconstruction package (OP-OSEM3D) utilizing an 8-node cluster system.
  • the present invention provides a fast method of calculating the ray path, which leads to the projection data and reconstructed image in 3D by using the symmetry properties of the projection and image data. Exploiting these simple geometrical symmetry properties, and with the help of SIMD, a new ultra fast reconstruction method was developed, which has a computational speed advantage of nearly two orders of magnitude compared to the existing reconstruction package (specifically OP-OSEM3D, the latest and most widely used PET reconstruction method, especially for HRRT), without compromising image quality.
  • the key concepts introduced in the SSP method can be summarized as follows. First, interpolation operations are reduced from three dimensions to one dimension by using rotation based projection, or the aligned frame concept.
  • the rotation based projection is combined with the symmetry properties of (φ, θ, y′, x_r) and coupled with the SIMD technique with optimized L1 and L2 cache use.
  • the sixteen symmetry points which share the same interpolation coefficients (or their complementary values), were grouped, thus permitting them to be processed simultaneously in a projection/backprojection operation to thereby achieve a substantially improved overall reconstruction time.
  • SIMD operations have also been incorporated.
  • the SIMD scheme allowed us to simultaneously access four data elements.
  • the data size per loop suitable for an L2 cache size was optimized, as well as the data structures for minimizing the memory stride.
  • the projection and backprojection operations were performed in the aligned frame by using the symmetry concept and SIMD with cache optimization and reduced the number of instructions in the reconstruction of an image from the sinogram data provided by the HRRT PET.
  • the proposed SSP method incorporates the symmetry properties of projection to the maximum, thereby reducing the computation time by as much as 16 times.
  • an overall computational speed gain by a factor close to 80 was achieved.
  • Such an improvement in computation time will open the door for the dynamic functional study of high resolution PET such as HRRT, which suffered from an unacceptably long reconstruction time and has been a serious undesired obstacle in molecular imaging.
  • This improvement will provide numerous new opportunities for PET users, especially for HRRT users in the molecular imaging community. It is clear that the proposed method helps PET researchers to contemplate dynamic studies, which were previously limited by the unacceptable image reconstruction time. This newly developed SSP method can also be applied to precorrection processes such as attenuation and scatter correction. Finally, with the new SSP method, it will be possible to achieve true interactive scanning. The new SSP method can also be applied to large classes of existing PET scanners of various types as well as to cluster systems for dynamic studies.
  • any reference in this specification to “one embodiment,” “an embodiment,” “example embodiment,” etc. means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the invention.
  • the appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment.
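By way of illustration only (this sketch is not part of the original disclosure), the balanced job distribution of FIG. 6 amounts to cutting the x′ range into contiguous chunks whose summed ray-path lengths are nearly equal before assigning one chunk per CPU thread; the function name and the use of a per-column load array are assumptions made for clarity.

```python
import numpy as np

def split_by_load(load_per_x, n_workers):
    """Split the x' range into n_workers contiguous chunks whose summed loads
    (e.g., ray-path lengths per x' column) are as equal as possible."""
    cum = np.cumsum(load_per_x)
    targets = cum[-1] * (np.arange(1, n_workers) / n_workers)
    cuts = np.searchsorted(cum, targets)              # chunk boundaries in x'
    bounds = np.concatenate(([0], cuts, [len(load_per_x)]))
    return [range(bounds[i], bounds[i + 1]) for i in range(n_workers)]

# Example: four workers receive x' ranges with roughly equal total ray length.
chunks = split_by_load(np.sqrt(1.0 - np.linspace(-0.99, 0.99, 256) ** 2), 4)
```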


Abstract

A method and apparatus are provided for reconstructing a 3D image. The method may include the steps of: detecting a plurality of lines of response (LORs) emitted from an object; transforming the plurality of LORs into first sinogram data; back-projecting the first sinogram data with a plurality of projection angles to produce image data for the object; and projecting the produced image data with the plurality of projection angles to transform the image data into second sinogram data. The back-projecting may include filling pixels of an image plane for each of the plurality of projection angles with the first sinogram data and rotating a coordinate axis of the image plane by the corresponding projection angle to produce the image data. The projecting may include rotating the image data by the corresponding projection angle in the opposite direction before projecting the image data with the plurality of projection angles. The projecting and the back-projecting may use symmetry properties in coordinate space.

Description

The present application claims priority from Korean Patent Application No. 10-2006-0042155 filed on May 10, 2006, Korean Patent Application No. 10-2007-0027305 filed on Mar. 20, 2007, and Korean Patent Application No. 10-2007-0040515 filed on Apr. 25, 2007, the entire subject matter of which is incorporated herein by reference.
BACKGROUND
1. Field
The present invention relates to a method and apparatus for ultra fast symmetry and SIMD based projection-backprojection for 3D PET image reconstruction. More specifically, the present invention relates to a method and apparatus based on the symmetry properties of the projection and backprojection processes, especially in the 3D OSEM algorithm that requires a plurality of projections and back-projections.
2. Background
There has been remarkable progress in PET development over recent years, especially in the areas of hardware, software and computer implementation of image reconstruction. Recent developments in PET scanners (e.g., the HRRT (High Resolution Research Tomograph) developed by CTI (now Siemens)) allow greatly enhanced resolution and sensitivity. In such PET scanners, the collected coincidence data contain more than 4.5×10⁹ coincidence lines of response generated by as many as 120,000 nuclear detectors. Such a large amount of data, and the reconstruction of this data set, poses a real problem in HRRT. That is, it poses a major problem in achieving further developments and applications of high resolution PET scanners. Thus, in such types of scanners, obtaining one set of reconstructed images often requires many hours of image reconstruction. For example, in HRRT with full data collection in normal brain scans (using SPAN 3), the image reconstruction time is almost eighty minutes. This makes it practically impossible to attempt any list mode based dynamic imaging, since the image reconstruction time extends to many days (as long as 43 hours or more for a 32-frame dynamic image reconstruction).
In general, tomographic images can be reconstructed by two approaches, one being an analytic method and the other being an iterative approach. PET scanners of different types were developed in the mid-1970s, and the application of various tomographic image reconstruction techniques was naturally introduced in the field. In the case of an analytic approach such as backprojection and filtering or filtered back-projection (FB), an artifact known as a streak artifact is frequently generated. This is especially true when the detector arrangements are not uniform, such as in the case of HRRT (e.g., Siemens' High Resolution Research Tomograph), where the detectors are arranged in a set of blocks in an octagonal shape. These types of detector arrangements often involve missing data due to the gaps between the blocks and result in a severe streak artifact in the case of the FB technique. Therefore, alternative approaches such as an EM (Expectation Maximization) algorithm have been sought. Generally, the EM approach requires several steps in the reconstruction process, the two major steps of which are: projection (forward projection) to create projection data from the image or object, and backprojection into the image domain for the final image reconstruction. In the EM algorithms, these two processes are repeated until satisfactory images are obtained. Obviously, these repeated projection and backprojection processes are time consuming and have been the major drawback of the EM approach compared to the straight filtered backprojection (FB) algorithm. In addition, in the case of 3D image reconstruction, the computational burden increases out of proportion due to the astronomical increase in the coincidence lines, or lines of response (LORs). This is a major stumbling block in the daily operation of high resolution PET scanners. Thus, there is a strong need for improving the computational speed or the reconstruction time in EM approaches, especially with high end PET scanners such as HRRT.
Projection methods usually employ a system matrix, which is determined by the geometric factors of the scanner. As the resolution of the PET image improves and the number of slices increases, the size of the matrix also increases drastically in proportion to the increase in LORs, resulting not only in the need for a large memory but also in increased total computation time. The current HRRT, for example, requires nearly eighty minutes of reconstruction time, in addition to the generation of sinograms and appropriate data streaming processes such as attenuation, random and scatter corrections, a set of precursors to the reconstruction process. To remedy the computational burden of image reconstruction, a number of alternative proposals such as linear integration have been made, as well as the use of multiple CPUs or a cluster computer system approach. Most of these techniques, however, are not practically useful, since cluster computing requires a large data transfer time, although the overall computation is faster than with a single unit.
Obtaining or generating projection data can be divided into two categories, namely, the ray-driven method and the pixel-driven method. The ray-driven method calculates the line integration directly along the ray path connecting the centers of two opposite detector cells, whereas the pixel-driven method calculates the line integration along the ray path centered around an image pixel for all projection angles. The ray-driven method is often used in projection, while the pixel-driven method is used in backprojection.
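By way of illustration only (this sketch is not part of the patent disclosure), the two categories can be contrasted in a few lines of NumPy for a single 2-D slice and view angle; the function names, nearest-pixel sampling and centering conventions are assumptions made for clarity.

```python
import numpy as np

def ray_driven_projection(image, phi, n_bins):
    """Ray-driven: integrate the image along each ray path (one ray per detector bin)."""
    ny, nx = image.shape
    cx, cy = (nx - 1) / 2.0, (ny - 1) / 2.0
    proj = np.zeros(n_bins)
    ts = np.linspace(-max(nx, ny) / 2.0, max(nx, ny) / 2.0, 2 * max(nx, ny))
    for b in range(n_bins):
        xr = b - (n_bins - 1) / 2.0                 # signed detector coordinate
        xs = cx + xr * np.cos(phi) - ts * np.sin(phi)
        ys = cy + xr * np.sin(phi) + ts * np.cos(phi)
        ix, iy = np.round(xs).astype(int), np.round(ys).astype(int)
        ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
        proj[b] = image[iy[ok], ix[ok]].sum()       # line integral (nearest-pixel sampling)
    return proj

def pixel_driven_backprojection(proj, phi, shape):
    """Pixel-driven: for every image pixel, find its detector bin and accumulate that value."""
    ny, nx = shape
    cx, cy = (nx - 1) / 2.0, (ny - 1) / 2.0
    n_bins = len(proj)
    y, x = np.mgrid[0:ny, 0:nx]
    xr = (x - cx) * np.cos(phi) + (y - cy) * np.sin(phi)   # x_r = x cos(phi) + y sin(phi)
    bins = np.clip(np.round(xr + (n_bins - 1) / 2.0).astype(int), 0, n_bins - 1)
    return proj[bins]
```

The ray-driven routine loops over detector bins and integrates along each ray, whereas the pixel-driven routine operates over image pixels and looks up the detector bin onto which each pixel projects.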
In early reconstruction techniques, projection was obtained by weighting the ray passing through the areas of pixels, with the assumption that the ray path is a strip with a finite width. This, however, involves a large amount of computation as well as the storage of a large matrix or data set. Concurrently, Shepp and Logan proposed a simple and computationally efficient algorithm, which requires computing the length of the ray path intersecting with each pixel (instead of the areas).
To speed up the computation, there have been a number of attempts to reduce the reconstruction time. An incremental algorithm has also been developed in which the symmetric property between neighboring pixels is considered to calculate the position of intersection of a ray. This idea was expanded to 3D reconstruction in cylindrical geometry using oblique rays. In 3D form with a multi-ring system such as HRRT, it became apparent that true 3D approaches would be required to fully utilize the oblique rays and thereby improve the statistics of the image.
BRIEF DESCRIPTION OF THE DRAWINGS
Arrangements and embodiments may be described in detail with reference to the following drawings in which like reference numerals refer to like elements and wherein:
FIG. 1A illustrates a 3-D object and its projection to a 2-D projection plane.
FIG. 1B illustrates a y′-z plane view of FIG. 1A.
FIG. 1C illustrates an example of the transverse plane at θ=0° and the line integrals along the y′ line projected on to the xr.
FIG. 2A illustrates a rotation of projection ray (ray-path) or frame onto the image plane which is on the fixed reference (x,y) coordinate.
FIG. 2B illustrates the case where the projection ray (ray-path) frame coincides with the fixed (x′,y′) coordinate.
FIG. 3 illustrates the relationships among z, y′, x′, θ, $x_r$, $y_r$, $r_{y_r,n,\theta}$, $Z_{y_r,n,\theta}$ by using the y′-z plane view.
FIG. 4A illustrates an example of the “Mirror-symmetry” used in the proposed SSP method.
FIG. 4B illustrates an example of the “φ-symmetry” used in the proposed SSP method.
FIG. 5A illustrates the y′-symmetry in the proposed SSP method.
FIG. 5B illustrates the θ-symmetry in the proposed SSP method.
FIG. 6 illustrates the concept of the balanced job distribution based on the sum of the ray path length.
FIG. 7 is a flow chart of the projection according to an embodiment of the present invention.
FIG. 8 is a flow chart of the back-projection according to an embodiment of the present invention.
FIG. 9A illustrates a comparison of projection data between the existing method and the proposed SSP method.
FIG. 9B illustrates cut-views of sinogram at a specific view.
FIG. 10A illustrates a comparison of simple back-projection images between the existing method and the proposed SSP method.
FIG. 10B illustrates cut-views (profiles) at an x axis (y=154, z=103).
FIG. 11A illustrates a set of reconstructed images with the existing method, the proposed SSP method and the differences.
FIG. 11B illustrates cut-views (profiles) at an x axis (y=154, z=103).
FIG. 12 illustrates the symmetry relationship in SIMD.
DETAILED DESCRIPTION
A detailed description may be provided with reference to the accompanying drawings. One of ordinary skill in the art may realize that the following description is illustrative only and is not in any way limiting. Other embodiments of the present invention may readily suggest themselves to such skilled persons having the benefit of this disclosure.
Overview of the Projection, Backprojection and Symmetry Properties
A. Overview of Projection and Backprojection in the Aligned (Reference) Frame with Rotated Projection Plane
FIG. 1A illustrates a 3-D object and its projection onto a 2-D projection plane. FIG. 1B is a y′-z plane view of FIG. 1A. There is provided a relation between the projection planes and the path of the projection ray, $\vec{\iota}_{x_r,y_r,\phi,\theta}$. The line integral will be performed along $\vec{\iota}_{x_r,y_r,\phi,\theta}$. The angle +θ indicates the oblique angle of the image planes toward the upper side. $x_r$ and $y_r$ are the coordinates of the 2-D projection plane of the 3-D object. FIG. 1C is an example of the transverse plane at θ=0° and the line integrals along the y′ line projected onto the $x_r$ axis. In this figure, φ indicates the rotation of the projection ray set in reference to the coordinate axes (x,y). Dotted lines with arrows indicate the projection rays. Each projection ray is determined by four variables, i.e., $(x_r, y_r, \phi, \theta)$. The projection plane shown in FIG. 1A is composed of 2D projection data (or a bed of nails) in coordinates $(x_r, y_r)$ at a given θ and φ.
In 3D tomographic image processing, the projection ray has four dimensions, namely, $x_r$, $y_r$, φ and θ, as shown in FIG. 1. Projection is a process of converting a 3D object function, or image information in 3D coordinates, into 2D projection coordinates on the projection plane shown in FIG. 1.
With a fixed view angle φ and oblique angle θ, the projection plane is determined as shown in FIG. 1A. In this projection plane, the projection operation can be defined in the form of a line integral given by:

$$P_{\phi,\theta}(x_r,y_r) = \iiint I(x,y,z)\,\delta(\vec{\iota}_{x_r,y_r,\phi,\theta})\,dx\,dy\,dz \qquad (1)$$

where

$\vec{\iota}_{x_r,y_r,\phi,\theta}$ is the ray path,

$x_r = x' = x\cos\phi + y\sin\phi$,

$y_r = z - (-x\sin\phi + y\cos\phi)\tan\theta$.

Equation (1) describes that the projection $P_{\phi,\theta}(x_r,y_r)$ is a sum of the pixel values of the image function $I(x,y,z)$ along the path of the projection ray, $\vec{\iota}_{x_r,y_r,\phi,\theta}$, in the image domain. Here, $\delta(\cdot)$ represents a sampling function.
FIG. 2 illustrates the concepts of the rotating projection ray (ray-path) frame onto the image plane in (x,y) and the proposed fixed (aligned) projection frame with a rotating image plane. FIG. 2A illustrates the rotation of the projection ray (ray-path) frame onto the image plane, which is on the fixed reference (x,y) coordinate. This is the conventional scheme applied to most image reconstructions. FIG. 2B illustrates a case where the projection ray (ray-path) frame coincides with the fixed (x′,y′) coordinate. The latter means that the image plane is now rotated instead of the reference frame, the projection ray frame. This latter scheme is the basis of the proposed SSP (Symmetry and SIMD based Projection-backprojection) method. This scheme allows the symmetry property of the image plane to be utilized, thereby reducing the overall projection and backprojection time. In the case of backprojection, the image function I(x′,y′,z) represents the intermediate stage of an image to be reconstructed and is temporary image data.
It is known that a projection plane at angle φ can be rotated (or aligned) against the reference axes (x,y). Further, x′ coincides with $x_r$ in FIG. 2. The well known relation between the rotated coordinates (x′,y′,z) and the image coordinates (x,y,z) in the cylindrical coordinate system is provided by the following:

$$\begin{pmatrix} x' \\ y' \\ z \end{pmatrix} = R_\phi \cdot \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} \cos\phi & \sin\phi & 0 \\ -\sin\phi & \cos\phi & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} \qquad (2)$$

where $R_\phi$ is the rotation matrix.
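As a minimal illustrative sketch (not part of the original disclosure; the function name is an assumption), equation (2) maps image coordinates into the aligned frame:

```python
import numpy as np

def to_aligned_frame(x, y, z, phi):
    """Apply R_phi of equation (2): (x, y, z) -> (x', y', z).
    The inverse mapping simply uses R_{-phi}."""
    R_phi = np.array([[ np.cos(phi), np.sin(phi), 0.0],
                      [-np.sin(phi), np.cos(phi), 0.0],
                      [ 0.0,         0.0,         1.0]])
    return R_phi @ np.array([x, y, z], dtype=float)
```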
Equation (1) can then be simplified as the weighted sum along the ray path and can be written as follows:

$$P_{\phi,\theta}(x_r,y_r) = \iiint I(x,y,z)\,\delta(\vec{\iota}_{x_r,y_r,\phi,\theta})\,dx\,dy\,dz = \int I(x',y',z)\,dy' \qquad (3)$$

where

$I(x',y',z) = R_\phi I(x,y,z)$,

$x' = x_r$.

In (3), y′ is assumed to be the integration variable.
As a special case, when the oblique angle θ is equal to zero, the projection rays are parallel to the y′ axis (see FIG. 1). In multi-layer or 3D PET, oblique rays (θ≠0) are collected and used for true 3D reconstruction, since the full utilization of the oblique rays will enhance the image quality due to the increased statistics. Extension of (3) to a 3D case can be represented by θ≠0. It should be noted that the coordinates in the transverse plane are now independent of θ and that the ray projected onto the transverse plane is parallel to the y′ axis. By the trigonometric relationship depicted in FIG. 3, z can be written as follows:

$$z = y_r + y'\tan\theta \qquad (4)$$
FIG. 3 illustrates the relationships among z, y′, x′, θ, $x_r$, $y_r$, $r_{y_r,n,\theta}$ and $Z_{y_r,n,\theta}$ by using the y′-z plane view. It should be noted that these relationships are valid on all y′-z planes with any x′. Also, y′ is now denoted by the discrete value n.
By substituting (4) into (3), the following discrete form of the projection data can be obtained:

$$P_{\phi,\theta}(x_r,y_r) = \int I(x_r, y', y_r + y'\tan\theta)\,dy' \approx \sum_{n=-N_{I_{x_r}}/2}^{N_{I_{x_r}}/2} I(x_r, n, y_r + n\tan\theta) \approx \sum_{n=-N_{I_{x_r}}/2}^{N_{I_{x_r}}/2} \Big[ I(x_r, n, Z_{y_r,n,\theta})\,(1 - r_{y_r,n,\theta}) + I(x_r, n, Z_{y_r,n,\theta}+1)\,r_{y_r,n,\theta} \Big] \qquad (5)$$

where

$z_{y_r,n,\theta} = y_r + n\tan\theta = Z_{y_r,n,\theta} + r_{y_r,n,\theta}$,

$Z_{y_r,n,\theta} \in \text{INTEGER}$, $r_{y_r,n,\theta} \in \text{REAL}$, and $0 \le r_{y_r,n,\theta} < 1$;

$Z_{y_r,n,\theta}$ is the integer part of $z_{y_r,n,\theta}$;

$r_{y_r,n,\theta}$ is the interpolation coefficient, or remainder, of $z_{y_r,n,\theta}$;

$r_{y_r,n,\theta} + r_{y_r,-n,\theta} = 1$, i.e., $r_{y_r,-n,\theta} = 1 - r_{y_r,n,\theta}$;

n is the discrete value of y′; and

$N_{I_{x_r}}$ is the integration length along y′ at the given $y_r$, φ, and θ.

It should be noted that, since the projection rays and the coordinates (x,y,z) or (x′,y′,z) are not always coincident, interpolation is always required. In the case of the SSP implementation, a 1D linear interpolation is needed along the z axis, as shown in (5). Note that $r_{y_r,n,\theta}$ and $1 - r_{y_r,n,\theta}$ are the interpolation coefficients along the z axis.
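The discrete projection of (5) can be sketched in a few lines; this is an illustrative NumPy rendering only, assuming the image volume has already been rotated into the aligned frame and is indexed as I[x′, y′, z], with y′ = 0 at the center index and with $y_r$ given as an integer z-index. None of these conventions are prescribed by the patent.

```python
import numpy as np

def project_aligned(I, theta, y_r):
    """Equation (5): sum I(x_r, n, y_r + n*tan(theta)) over n with 1D linear
    interpolation along z.  I is indexed as I[x', y', z] in the aligned frame."""
    nx, ny, nz = I.shape
    n = np.arange(ny) - ny // 2                  # discrete y' values
    z = y_r + n * np.tan(theta)                  # z_{y_r,n,theta}
    Z = np.floor(z).astype(int)                  # integer part Z_{y_r,n,theta}
    r = z - Z                                    # interpolation coefficient r_{y_r,n,theta}
    ok = (Z >= 0) & (Z + 1 < nz)                 # keep samples inside the volume
    n_idx = np.arange(ny)[ok]
    Z, r = Z[ok], r[ok]
    # one projection value per x_r = x' line
    return (I[:, n_idx, Z] * (1.0 - r) + I[:, n_idx, Z + 1] * r).sum(axis=1)
```

Each x′ column yields one projection value $P_{\phi,\theta}(x_r,y_r)$, and only a 1-D linear interpolation along z is required, as stated above.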
On the other hand, the backprojection process is the transpose of projection. Therefore, the same idea as in the projection (or projection data generation) process can be applied to backprojection or image data generation. In the case of an iterative reconstruction such as the OS-EM algorithm, a plurality of projections and backprojections are required. That is, the algorithm requires many repeated image reconstructions as well as projection data generations.
Now the backprojection from the projection data $P_{\phi,\theta}(x_r,y_r)$ can be performed, as shown by the following:

$$I(x,y,z) = \int_{\phi}\int_{\theta} P_{\phi,\theta}(x_r,y_r)\,d\theta\,d\phi = \int_{\phi}\int_{\theta} P_{\phi,\theta}\big(x\cos\phi + y\sin\phi,\; z - (-x\sin\phi + y\cos\phi)\tan\theta\big)\,d\theta\,d\phi \qquad (6)$$

By using the rotation matrix defined in (2), (6) can also be written as:

$$I(x,y,z) = \int_{\phi}\int_{\theta} P_{\phi,\theta}(x',\, z - y'\tan\theta)\,d\theta\,d\phi \qquad (7)$$

In the aligned frame, $x_r = x'$ is defined.
To simplify the notation, let us define $I_\phi$ as the sum of the projection planes over θ for a fixed φ, as given below, i.e., an intermediate-stage image before the finally reconstructed image. The computation can then be further simplified, for example, by using the symmetry property in θ:

$$I_\phi(x',y',z) \equiv \int_{\theta} P_{\phi,\theta}(x',\, z - y'\tan\theta)\,d\theta \qquad (8)$$

Using the rotation matrix given in (2) once again, (8) can be written as:

$$I_\phi(x',y',z) = R_\phi I_\phi(x,y,z) \quad\text{or}\quad I_\phi(x,y,z) = R_{-\phi} I_\phi(x',y',z) \qquad (9)$$

Using (9), (7) can further be written as:

$$I(x,y,z) = \int_{\phi} R_{-\phi}\, I_\phi(x',y',z)\,d\phi \qquad (10)$$

Equation (10) is equivalent to rotating the set of partially reconstructed image planes at a given angle φ and then integrating these partially reconstructed images over all φ's to form the finally reconstructed image I(x,y,z).
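A minimal sketch of the backprojection step in (10), assuming one partially backprojected volume $I_\phi(x',y',z)$ per view angle and using scipy.ndimage.rotate for the $R_{-\phi}$ operation (the rotation sign convention and the array layout are assumptions, not part of the original disclosure):

```python
import numpy as np
from scipy.ndimage import rotate

def backproject(partial_images, phis):
    """Equation (10): rotate each partially backprojected volume I_phi(x', y', z)
    back by -phi and accumulate to form I(x, y, z).
    partial_images: iterable of 3D arrays indexed [x', y', z], one per view angle phi."""
    out = np.zeros_like(partial_images[0], dtype=float)
    for I_phi, phi in zip(partial_images, phis):
        # rotate in the transverse (x', y') plane; angle in degrees, keep the array shape
        out += rotate(I_phi, np.degrees(-phi), axes=(0, 1), reshape=False, order=1)
    return out
```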
B. Symmetry Properties
As discussed above, the full utilization of symmetry properties is the central part of the proposed SSP method. Hereafter, the symmetric points represent similar coefficients (the same value but with different polarity and/or different coordinates) that are shared in computation without additional calculations, as well as through the use of identical instructions. That is, the symmetric points have the same interpolation coefficient or its complementary value. The applicant found that there exist sixteen symmetric points which are practically usable, namely those arising from “Mirror-symmetry,” “φ-symmetry,” “y′-symmetry” and “θ-symmetry.”
(a) Mirror-symmetry (+x′ and −x′, see FIG. 4A)
One of the interesting aspects of the image reconstruction is the use of the symmetric properties of each image point. That is, once one point is calculated with an appropriate interpolation coefficient, one can assign the same coefficient value to the opposite point(s) by using symmetry properties of various types. One of the simplest forms is the use of the y′-axis symmetry, as shown in FIG. 4A. In this computation, a point $(x'_0,y'_0)$ at $+x_r$ is identical in value to the point $(x'_1,y'_1)$ at $-x_r$, with a change in polarity only. An illustration of this “Mirror-symmetry” is shown in FIG. 4A. In this illustration, an example is given for symmetric points due to “Mirror-symmetry”, i.e., $(x'_0,y'_0) \rightarrow (x'_1,y'_1) = (-x'_0,y'_0)$. This symmetry property reduces the computation time by half.
(b) φ-symmetry (φ and φ+90°, see FIG. 4B)
Similar to the case of the “Mirror-symmetry”, one can also demonstrate that there is symmetry between the points at φ and φ+90°. In this case, $(x'_0,y'_0) \rightarrow (x'_1,y'_1) = (-y'_0, x'_0)$, i.e., the polarity as well as the coordinates are changed. FIG. 4B illustrates this φ-symmetry case with a numerical example. As shown in the figure, once a point is calculated at φ and an interpolation coefficient value for this point is obtained, for example, at $(x'_0,y'_0)$, the same coefficient value can be used at $(x'_1,y'_1)$ with coordinate and polarity changes as $(-y'_0,x'_0)$ after a 90° rotation, i.e., at φ+90°. This symmetry property once again permits the computation time to be reduced by half.
FIG. 4 illustrates the examples of the “Mirror-symmetry” and “φ-symmetry” used in the proposed SSP method. FIG. 4A illustrates an example of “Mirror-symmetry” against the y′ axis. That is, once a coordinate point $(x'_0,y'_0)$ is known, a symmetric point $(x'_1,y'_1)$ across the y′ axis can be obtained or assigned without additional computation. This will speed up the computation by a factor of two, or reduce the computation time by half. FIG. 4B illustrates the symmetric property of the projection data against φ, i.e., if an interpolation coefficient of a coordinate point $(x'_0,y'_0)$ is known at φ, then a symmetric point $(x'_1,y'_1)$ at φ+90° can be obtained without additional computation. In other words, the projection data calculation at φ+90° shares the same coordinate values as that at φ with simple coordinate changes, i.e., (x′,y′) at φ to (−y′,x′) at φ+90°. This symmetry property reduces the computation time by half.
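As a small illustrative sketch (the function name and the listing convention are assumptions, not from the patent), the transverse-plane partners that share one interpolation coefficient under mirror- and φ-symmetry can be listed as:

```python
def transverse_symmetry_partners(x0, y0):
    """Points sharing the interpolation coefficient of (x0, y0) in the aligned frame:
    mirror-symmetry flips the sign of x', phi-symmetry (phi + 90 deg) maps
    (x', y') -> (-y', x'), and combining both gives the fourth point."""
    return [
        ( x0,  y0),   # original point at angle phi
        (-x0,  y0),   # mirror-symmetric partner
        (-y0,  x0),   # phi + 90 deg partner
        ( y0,  x0),   # mirror of the phi + 90 deg partner
    ]
```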
(c) y′-symmetry (+n and −n, see FIG. 5A)
Unlike mirror- and φ-symmetry, y′- and θ-symmetry are in the y′-z plane. It should be noted that y′-symmetry is intimately related to θ-symmetry. Therefore, it is easy to illustrate the two together, as follows. The y′-symmetry property has the effect of changing the interpolation coefficients to their complementary values. In fact, y′-symmetry has the same symmetry property as θ-symmetry, as discussed below. First, the property of y′-symmetry is illustrated in FIG. 5A. As shown in FIG. 5A, the variable z can now be written as follows (see also FIG. 3):
$$z_{y_r,n,\theta} = y_r + n\tan\theta = Z_{y_r,n,\theta} + r_{y_r,n,\theta} \qquad (11)$$

where

$n,\ Z_{y_r,n,\theta},\ y_r \in \text{INTEGER}$,

$r_{y_r,n,\theta} \in \text{REAL}$ and $0 \le r_{y_r,n,\theta} < 1$.

$$z_{y_r,n,\theta} + z_{y_r,-n,\theta} = (y_r + n\tan\theta) + (y_r - n\tan\theta) = Z_{y_r,n,\theta} + r_{y_r,n,\theta} + Z_{y_r,-n,\theta} + r_{y_r,-n,\theta} = 2y_r \in \text{INTEGER} \qquad (12)$$

$$\therefore\ r_{y_r,n,\theta} + r_{y_r,-n,\theta} = 1, \qquad r_{y_r,-n,\theta} = 1 - r_{y_r,n,\theta}$$

where $r_{y_r,n,\theta}$ and $1 - r_{y_r,n,\theta}$ represent the interpolation coefficients.
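A short numeric check of (11) and (12), with illustrative values $y_r = 5$, $n = 2$ and $\tan\theta = 0.3$ (these numbers are examples only, not from the patent):

$$z_{y_r,n,\theta} = 5 + 2(0.3) = 5.6 = \underbrace{5}_{Z_{y_r,n,\theta}} + \underbrace{0.6}_{r_{y_r,n,\theta}}, \qquad z_{y_r,-n,\theta} = 5 - 2(0.3) = 4.4 = \underbrace{4}_{Z_{y_r,-n,\theta}} + \underbrace{0.4}_{r_{y_r,-n,\theta}}$$

$$r_{y_r,n,\theta} + r_{y_r,-n,\theta} = 0.6 + 0.4 = 1$$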
Using this symmetry, two interpolation operations can be simultaneously performed, thereby reducing the number of loops in y′(=n) by half. Therefore, (5) can be rewritten as follows:

$$P_{\phi,\theta}(x_r,y_r) = I(x_r,0,y_r) + \sum_{n=1}^{N_{I_{x_r}}/2} \Big[ I(x_r,n,Z_{y_r,n,\theta})(1 - r_{y_r,n,\theta}) + I(x_r,n,Z_{y_r,n,\theta}+1)\,r_{y_r,n,\theta} + I(x_r,-n,Z_{y_r,-n,\theta})(1 - r_{y_r,-n,\theta}) + I(x_r,-n,Z_{y_r,-n,\theta}+1)\,r_{y_r,-n,\theta} \Big] \qquad (13)$$
Equation (13) can be transformed into a factored form to reduce the number of instructions required to execute the computation. Furthermore, the interpolation coefficient $r_{y_r,n,\theta}$ can be reduced to a simple function of only n and θ, as can be derived from (11). Because $y_r$ is always an integer and $r_{y_r,n,\theta}$ satisfies the condition $0 \le r_{y_r,n,\theta} < 1$, $r_{y_r,n,\theta}$ is a function of only n and θ. Therefore, the notation $r_{y_r,n,\theta}$ can be simplified to $r_{n,\theta}$ (see and compare FIGS. 3 and 5A). Further, $(1 - r_{n,\theta})$ is equal to $r_{-n,\theta}$. Similarly, $(1 - r_{-n,\theta})$ is equal to $r_{n,\theta}$. Using these simplified forms, (13) can be transformed into a factored form, as shown below:

$$P_{\phi,\theta}(x_r,y_r) = I(x_r,0,y_r) + \sum_{n=1}^{N_{I_{x_r}}/2} \Big[ \big\{ I(x_r,n,Z_{y_r,n,\theta}) + I(x_r,-n,Z_{y_r,-n,\theta}+1) \big\}(1 - r_{n,\theta}) + \big\{ I(x_r,n,Z_{y_r,n,\theta}+1) + I(x_r,-n,Z_{y_r,-n,\theta}) \big\}\,r_{n,\theta} \Big] \qquad (14)$$
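For illustration only (this is not the patent's implementation; the index conventions follow the earlier sketch and are assumptions), the paired form of (14) can be rendered so that each loop iteration reuses a single coefficient $r_{n,\theta}$ for the +n and −n samples:

```python
import numpy as np

def project_y_symmetry(I, theta, y_r):
    """Equation (14): the +n and -n terms share the coefficient r_{n,theta},
    so each iteration handles two y' samples with one interpolation weight.
    I is indexed as I[x', y', z] in the aligned frame with y' = 0 at index ny//2;
    y_r is an integer z-index."""
    nx, ny, nz = I.shape
    c = ny // 2                                   # index of y' = 0
    P = I[:, c, y_r].astype(float)                # n = 0 term
    for n in range(1, c):
        z_pos = y_r + n * np.tan(theta)           # z_{y_r, n, theta}
        Z = int(np.floor(z_pos))                  # Z_{y_r, n, theta}
        r = z_pos - Z                             # r_{n, theta}
        Zm = 2 * y_r - Z - 1                      # Z_{y_r, -n, theta} from (12), assuming r > 0
        if not (0 <= Z < nz - 1 and 0 <= Zm < nz - 1):
            continue
        P += (I[:, c + n, Z] + I[:, c - n, Zm + 1]) * (1.0 - r) \
           + (I[:, c + n, Z + 1] + I[:, c - n, Zm]) * r
    return P
```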
(d) θ-symmetry (+θ and −θ, see FIG. 5B)
Now, let us illustrate how the θ-symmetry shares the calculation of interpolation coefficients with the y′-symmetry discussed above. As shown in FIG. 5B, when the sign of θ changes, only the interpolation coefficients change, each to its complementary value.
Using (4), (5), (11) and (12), it can be shown that z and r are given as:

$$z_{y_r,n,\theta} = y_r + n\tan\theta = y_r + (-n)\tan(-\theta) = z_{y_r,-n,-\theta}, \qquad z_{y_r,n,-\theta} = y_r + n\tan(-\theta) = y_r + (-n)\tan\theta = z_{y_r,-n,\theta} \qquad (15)$$

Therefore,

$$r_{-n,\theta} = r_{n,-\theta} = 1 - r_{n,\theta}, \qquad r_{-n,-\theta} = r_{n,\theta}, \qquad Z_{y_r,n,\theta} = Z_{y_r,-n,-\theta}, \qquad Z_{y_r,n,-\theta} = Z_{y_r,-n,\theta}$$
Using the relations given in (15), (14) can be written as follows:

$$P_{\phi,-\theta}(x_r,y_r) = I(x_r,0,y_r) + \sum_{n=1}^{N_{I_{x_r}}/2} \Big[ \big\{ I(x_r,n,Z_{y_r,n,-\theta}) + I(x_r,-n,Z_{y_r,-n,-\theta}+1) \big\}(1 - r_{n,-\theta}) + \big\{ I(x_r,n,Z_{y_r,n,-\theta}+1) + I(x_r,-n,Z_{y_r,-n,-\theta}) \big\}\,r_{n,-\theta} \Big]$$

$$= I(x_r,0,y_r) + \sum_{n=1}^{N_{I_{x_r}}/2} \Big[ \big\{ I(x_r,n,Z_{y_r,-n,\theta}) + I(x_r,-n,Z_{y_r,n,\theta}+1) \big\}\,r_{n,\theta} + \big\{ I(x_r,n,Z_{y_r,-n,\theta}+1) + I(x_r,-n,Z_{y_r,n,\theta}) \big\}(1 - r_{n,\theta}) \Big] \qquad (16)$$
Equation (14) is for $P_{\phi,\theta}(x_r,y_r)$, while (16) is for $P_{\phi,-\theta}(x_r,y_r)$. For the computation of (16), the relations given in (15), i.e., $r_{n,\theta}$, $r_{n,-\theta}$, $Z_{y_r,n,\theta}$, and $Z_{y_r,n,-\theta}$, can be used once again, so that $P_{\phi,\theta}(x_r,y_r)$ and $P_{\phi,-\theta}(x_r,y_r)$ can be calculated simultaneously.
As shown above, y′- and θ-symmetries share the common properties. The y′-symmetry, therefore, is used to reduce the number of loop counts y′(=n), while the θ-symmetry is used to reduce the number of loop counts θ.
In summary, a total of sixteen points of symmetry for the SSP method, namely, 2 (Mirror-symmetry), 2 (φ+90° symmetry), 2 (y′-symmetry) and 2 (θ-symmetry), are used. Thus, multiplication of these leads to 16. In addition to these symmetry based reductions in computation time (16 times), as will be detailed, SIMD will speed-up the computation further by another factor of 4. As such, a total speed gain of 64 (16×4) has been achieved with the new SSP method.
FIG. 5 illustrates the y′- and θ-symmetries in the proposed SSP method. FIG. 5A shows the y′-symmetry, while FIG. 5B shows the θ-symmetry. The left side of each figure shows the z-y′ plane cut-view, while the right part shows an expanded view of the left. Each quarter illustrates the details of the interpolation coefficients, r. This y′- or θ-symmetry property indicates that if one of the interpolation coefficients is calculated for one point, then the remaining interpolation coefficients for the other three points can be obtained without additional computation. The amount of calculation, therefore, can be reduced to one fourth (¼) by using the y′- and θ-symmetries.
Parallelization of Computation using SIMD for SSP method
A. SIMD (Single Instruction Multiple Data) and Symmetry Properties
To realize a fast algorithm, it is important for practical purposes not only to reduce the number of loops within the projector/backprojector but also to reduce the number of instructions per loop. The number of instructions per loop was reduced by using the SIMD technique, which reduces the overall computation time by a factor of four. The SIMD operands should be determined carefully to improve efficiency. As shown above, the projection/backprojection of sixteen points is performed simultaneously. For this operation, the sixteen symmetry points were divided into two groups, one for +θ and the other for -θ. Since a single SIMD instruction operates on a data set of four elements, each group, having the same θ (+θ or -θ), was coupled to one of the two SIMD instruction data sets.
This grouping of the two sets sharing the same θ is based on (14) and (16). FIG. 12 shows the symmetry relationships of these sixteen points between the rotated image data sets and the projection data sets. The upper group (a-1 and a-2) shares +θ, while the lower group (b-1 and b-2) shares -θ. Each group is divided into two subgroups. Each upper subgroup shares the same linear-interpolation coefficient, while each lower subgroup shares the value complementary to that of the corresponding upper subgroup. Each single subgroup, therefore, can be handled as a single SIMD data set (or single SIMD pack). z_{y_r,y′,θ} and z_{y_r,-y′,-θ} share the same value; similarly, z_{y_r,y′,-θ} and z_{y_r,-y′,θ} share the same value.
As noted above, the sixteen symmetry points are divided into two groups, each sharing the same sign of θ. In addition, these two groups are each divided into two subgroups. The members of subgroups a-1 and b-2 share the same interpolation coefficient (e.g., r_{y′,θ}), while subgroups a-2 and b-1 have the complementary values (e.g., 1 - r_{y′,θ}) as interpolation coefficients. Each subgroup represents one packed SIMD data set, as shown below:

SI_x(n, z) = { I(x, n, z), I(-x, n, z), I(-n, x, z), I(-n, -x, z) }   (17)
SI_x(-n, z) = { I(x, -n, z), I(-x, -n, z), I(n, x, z), I(n, -x, z) }
(SP)_{φ,θ}(x_r, y_r) = { P_{φ,θ}(x_r, y_r), P_{φ,θ}(-x_r, y_r), P_{φ+π/2,θ}(x_r, y_r), P_{φ+π/2,θ}(-x_r, y_r) }
(SP)_{φ,-θ}(x_r, y_r) = { P_{φ,-θ}(x_r, y_r), P_{φ,-θ}(-x_r, y_r), P_{φ+π/2,-θ}(x_r, y_r), P_{φ+π/2,-θ}(-x_r, y_r) }
where
SI means an SIMD image data set.
SP means an SIMD projection data set.
Equation (17) shows the arrangement of the SIMD data sets for the upper group of FIG. 12. This process is referred to as packing; a single SIMD instruction data set consists of four points. The same process can be applied to the lower group having -θ.
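By way of illustration, the packing of (17) can be expressed with SSE intrinsics, gathering the four symmetric image points that share one interpolation coefficient into a single 128-bit register. This is a minimal sketch under the assumption of an SSE-capable CPU; the accessor img(x′,y′,z) (for example, a lambda wrapping the Image::at of the earlier sketch) and the function name pack_SI are illustrative, not the patent's code.

    #include <immintrin.h>

    // Illustrative packing of the first SIMD data set of (17): the four symmetric
    // image points that share one interpolation coefficient are gathered into a
    // single 128-bit register, so that one packed multiply-add later serves all
    // four of them.  "img" is any accessor img(x', y', z) taking signed, centered indices.
    template <typename ImageFn>
    __m128 pack_SI(ImageFn img, int x, int n, int z)
    {
        // Lane order follows SI_x(n,z) = { I(x,n,z), I(-x,n,z), I(-n,x,z), I(-n,-x,z) }.
        return _mm_set_ps(img(-n, -x, z),    // highest lane
                          img(-n,  x, z),
                          img(-x,  n, z),
                          img( x,  n, z));   // lowest lane
    }

A corresponding unpacking step would scatter the four lanes of a projection pack back to P_{φ,θ}(±x_r, y_r) and P_{φ+π/2,θ}(±x_r, y_r) after the loop, as described below for FIGS. 7 and 8.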
As noted above, the upper group and the lower group of FIG. 12 are based on (14) and (16). Equations (14) and (16) can now be transformed into SIMD versions, in terms of the packed data sets of (17), as shown below:

(SP)_{φ,θ}(x_r, y_r) = (SI)_{x_r}(0, y_r) + \sum_{n=1}^{N_I^{x_r}/2} [ (1 - r_{n,θ}){(SI)_{x_r}(n, Z_{y_r,n,θ}) + (SI)_{x_r}(-n, Z_{y_r,-n,θ}+1)} + r_{n,θ}{(SI)_{x_r}(n, Z_{y_r,n,θ}+1) + (SI)_{x_r}(-n, Z_{y_r,-n,θ})} ]   (18)

(SP)_{φ,-θ}(x_r, y_r) = (SI)_{x_r}(0, y_r) + \sum_{n=1}^{N_I^{x_r}/2} [ (1 - r_{n,θ}){(SI)_{x_r}(n, Z_{y_r,-n,θ}+1) + (SI)_{x_r}(-n, Z_{y_r,n,θ})} + r_{n,θ}{(SI)_{x_r}(n, Z_{y_r,-n,θ}) + (SI)_{x_r}(-n, Z_{y_r,n,θ}+1)} ]
where
SI_{x_r}(n, Z_{y_r,n,θ}), SI_{x_r}(n, Z_{y_r,n,θ}+1) and SP_{φ,θ}(x_r, y_r) correspond to subgroup "a-1" of FIG. 12;
SI_{x_r}(-n, Z_{y_r,-n,θ}), SI_{x_r}(-n, Z_{y_r,-n,θ}+1) and SP_{φ,θ}(x_r, y_r) correspond to subgroup "a-2" of FIG. 12;
SI_{x_r}(n, Z_{y_r,-n,θ}), SI_{x_r}(n, Z_{y_r,-n,θ}+1) and SP_{φ,-θ}(x_r, y_r) correspond to subgroup "b-1" of FIG. 12;
SI_{x_r}(-n, Z_{y_r,n,θ}), SI_{x_r}(-n, Z_{y_r,n,θ}+1) and SP_{φ,-θ}(x_r, y_r) correspond to subgroup "b-2" of FIG. 12;
Z_{y_r,n,θ} = Z_{y_r,-n,-θ}; and
Z_{y_r,-n,θ} = Z_{y_r,n,-θ}.
This SIMD packing concept can be applied equally to a backward projection.
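By way of illustration, one n-step of the SIMD sum in (18) for the +θ pack reduces to two packed multiplies and a few packed additions. The sketch below assumes SSE intrinsics; the operands SI_n, SI_n1, SI_mn and SI_mn1 stand for the packed values (SI)_{x_r}(±n, Z) and (SI)_{x_r}(±n, Z+1) already gathered as in (17), and all names are illustrative rather than the patent's code.

    #include <immintrin.h>

    // Illustrative n-step of the SIMD sum in (18) for the +theta pack (SSE intrinsics).
    // SI_n and SI_mn1 are the packed image values (SI)_{x_r}(n, Z) and (SI)_{x_r}(-n, Z+1),
    // which share the weight (1 - r_{n,theta}); SI_n1 and SI_mn share the weight r_{n,theta}.
    // The same four packs, exchanged as in (16), would update the -theta accumulator.
    __m128 ssp_step(__m128 acc,                         // running (SP)_{phi,theta} pack
                    __m128 SI_n,  __m128 SI_mn1,        // weighted by (1 - r)
                    __m128 SI_n1, __m128 SI_mn,         // weighted by r
                    float r)
    {
        const __m128 w1 = _mm_set1_ps(1.0f - r);
        const __m128 w2 = _mm_set1_ps(r);
        const __m128 a  = _mm_add_ps(SI_n,  SI_mn1);    // factored sums, as in (14)
        const __m128 b  = _mm_add_ps(SI_n1, SI_mn);
        acc = _mm_add_ps(acc, _mm_mul_ps(a, w1));
        acc = _mm_add_ps(acc, _mm_mul_ps(b, w2));
        return acc;                                     // four projection bins updated per call
    }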
As a precaution, it should be noted that there are several special cases within the above symmetry pairs. In these special cases, the two members of a symmetry pair degenerate to the same rotated image point, which would result in errors if not handled separately. TABLE II shows these special cases, which require special attention and correction.
TABLE II
THE SPECIAL CASES WITHOUT SYMMETRY COUNTERPART
Symmetry            Symmetry pairs on image plane                          Special case
φ-symmetry          I(x′,y′,z_{y_r,y′,θ}) and I(-y′,x′,z_{y_r,y′,θ})       The points of x′ = y′
Mirror symmetry     I(x′,y′,z_{y_r,y′,θ}) and I(-x′,y′,z_{y_r,y′,θ})       The points of x′ = 0
φ-symmetry          I(x′,y′,z_{y_r,y′,θ}) and I(x′,-y′,z_{y_r,y′,θ})       The points of y′ = 0
B. Optimization of Job Distribution in Multi-Processor Environment
Recently available PCs provide a multi-processor environment. To utilize this parallel computation capability, a thread programming technique can be used. The proposed SSP method is capable of being expanded to multi-processor systems. In this case, a job distribution with an equal amount of load on each processor is important for optimal performance.
As the method according to the present invention is based on the assumption of a circular cylindrical FOV, assigning an equal range of x′ to each CPU would result in an uneven distribution of computational load, as illustrated in FIG. 6. The job load is therefore distributed along the x′ (or x_r) axis in ranges chosen so that each CPU receives an equal share of the computation.
FIG. 6 illustrates the concept of balanced job distribution based on the sums of the ray path lengths, that is, an equal distribution of the computational load over the groups S1, S2, S3 and S4. In this case, the job loads are divided into four groups of almost equal area, S1, S2, S3 and S4, so that each group carries the same amount of computational load.
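By way of illustration, the equal-area split of FIG. 6 can be computed by bisection on the cumulative area of the circular FOV to the left of a cut at x′. The sketch below is illustrative only; the function names, the radius value and the bisection scheme are assumptions introduced for this example, not part of the patent.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Illustrative equal-area split of a circular FOV of radius R (pixels) into P
    // strips along x', so that each strip carries roughly the same sum of ray path
    // lengths.  The boundaries are found by bisection on the cumulative disk area.
    static const double PI = std::acos(-1.0);

    double left_area(double x, double R)                // disk area to the left of the cut at x
    {
        return x * std::sqrt(R * R - x * x) + R * R * (std::asin(x / R) + PI / 2.0);
    }

    std::vector<double> equal_area_bounds(double R, int P)
    {
        std::vector<double> bounds{-R};
        for (int k = 1; k < P; ++k) {
            double target = PI * R * R * k / P;         // cumulative area after k strips
            double lo = -R, hi = R;
            for (int it = 0; it < 60; ++it) {           // bisection on the monotone area function
                double mid = 0.5 * (lo + hi);
                if (left_area(mid, R) < target) lo = mid; else hi = mid;
            }
            bounds.push_back(0.5 * (lo + hi));
        }
        bounds.push_back(R);
        return bounds;                                  // P + 1 strip edges along x'
    }

    int main()
    {
        for (double b : equal_area_bounds(128.0, 4))    // e.g. a 256-pixel FOV and four CPUs
            std::printf("%8.2f\n", b);
        return 0;
    }

The strips near the center of the FOV come out narrower than those near the edge, reflecting the longer ray paths through the middle of the cylinder.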
Description of the Overall Method
The schematics of the new SSP projector and backprojector are provided in FIGS. 7 and 8, respectively. For SSP, the aligned frame was used by utilizing the rotation operation as well as the symmetry properties to reduce the number of loop counts. In addition, the total number of instructions was reduced by combining the symmetry properties and SIMD technique.
FIG. 7 is a flow chart showing the projection. The inner-most block 706 within the loop includes the z loop, the interpolation operation, the integration along the ray path 708 and the scanning along the x′ axis 710. After the projection for the entire θ range, the intermediate results of the rotated projection should be unpacked in order to be stored back to their original positions. It should be noted that the sizes of the (I-Pack) and the SIMD data sets (I-Pack + P-Pack) must be less than the cache sizes of L1 and L2, respectively, in order to maximize the cache hit ratio and the data access speed.
FIG. 8 is a flow chart showing the backprojection. First, eight projection data, which have symmetric relationships, are packed for use in the SIMD operations. These packed projection data, P-Pack, are interpolated linearly along the y_r direction to calculate the packed backprojected image data, I-Pack, for the semifinal image I_φ(x′,y′,z) for all θ directions. As an intermediate step, these packed backprojected image data are unpacked and aligned to their valid positions in the semifinal image data I_φ(x′,y′,z) after each step of the θ loop. These procedures are repeated by the loops over y_r and x_r until the semifinal image I_φ(x′,y′,z) is fully reconstructed. The semifinal image I_φ(x′,y′,z) is reconstructed for each view angle φ. Each semifinal image I_φ(x′,y′,z) is rotated by its own view angle φ and then added up to reconstruct the final image I(x,y,z), the image in the original Cartesian coordinates. Similar to the projection operation, the sizes of the SIMD data sets, (I-Pack + P-Pack) and (I-Pack), should be less than the cache sizes of L2 and L1, respectively, to maximize the computation speed.
As shown in FIG. 7 and FIG. 8, the rotation operation was used to utilize the aligned frame. Before the projection step, the projection plane is aligned with the aligned frame (x′,y′), and each incoming image plane is then rotated according to its respective view angle. After the backprojection step, the original image is recovered from the aligned frame by rotating back to the original view angle, thus permitting intermediate reconstructed image data to be formed.
To reduce processing time, the efficient use of cache was found to be an important factor in the overall computation. Memory allocation and loop orders should be optimized for the memory access pattern. Since xr(=x′) is set to the variable of the outermost loop, the memory stride on read and write can be confined to a certain range within the L2 cache size. Furthermore, the variable of the innermost loop should be z or yr, since most of the computation intensive operations in the SSP method are related to interpolation of z or yr. Therefore, it is desirable to allocate the initial memory by the order [x′]:[y′]:[z] for backprojection while for projection, the order should be [φ]:[xr]:[θ]:[yr]. These memory allocations, therefore, will determine the loop orders. These loop orders of projector/backprojector are also depicted in FIG. 7 and FIG. 8. As a result of using these symmetries, the total number of loop counts can be reduced by half at each step.
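By way of illustration, the loop ordering and memory layout described above can be sketched as follows, with x_r outermost, θ next and y_r innermost, and the projection array allocated so that the innermost loop strides contiguously. The array names, bounds and the empty loop body are placeholders, not the patent's code.

    #include <cstddef>

    // Illustrative loop nest for the projection of one view angle phi:
    // x_r (= x') is outermost so reads and writes stay within an L2-sized window,
    // theta comes next, and y_r is innermost where the interpolation work is.
    // "proj" is laid out in [x_r][theta][y_r] order, i.e. the [phi] level of the
    // [phi]:[x_r]:[theta]:[y_r] allocation described above.
    void project_view(const float* image,               // rotated image in [x'][y'][z] order
                      float* proj,                      // projections for this phi
                      int NX, int NTHETA, int NY)
    {
        for (int xr = 0; xr < NX; ++xr) {               // outermost: confines the memory stride
            for (int th = 0; th < NTHETA; ++th) {       // oblique angle
                float* line = proj + (static_cast<std::size_t>(xr) * NTHETA + th)
                                         * static_cast<std::size_t>(NY);
                for (int yr = 0; yr < NY; ++yr) {       // innermost: interpolation along z / y_r
                    // gather, interpolate and accumulate here as in (18)
                    line[yr] += 0.0f;                   // placeholder for the ray sum
                }
            }
        }
        (void)image;                                    // image would be read inside the inner loop
    }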
For SIMD, a memory packing step is performed prior to each projection/backprojection step. After the projection step, the packed SIMD data set in the projection domain is unpacked and returned to its original position. Similarly, the packed SIMD data set in the image domain is unpacked after each backprojection step.
Experimental Results
A. Methods
To evaluate the efficiency of the SSP method, it was applied to sinogram data from the HRRT (High Resolution Research Tomograph, developed by CPS/Siemens, Knoxville, Tenn., U.S.A.). The HRRT has 119,808 detectors and is designed for ultra high resolution brain scanning. Since the HRRT has the largest number of detectors (with the smallest detector size) among the human PET scanners that are currently available, it has the highest computational complexity for image reconstruction: the number of computations increases in proportion to the number of LORs and is much larger than for other existing PET scanners. The reconstruction software provided with the HRRT package supports iterative algorithms such as OP-OSEM3D, which is the preferred reconstruction algorithm in the HRRT system. One of the advantages of OP-OSEM3D, among others, is that it prevents the occurrence of negative bias when the random rate is high. This algorithm was chosen to test the performance of the new SSP method since it requires the largest memory size and is the most computation-intensive algorithm. Therefore, unless otherwise specified, the HRRT OP-OSEM3D package S/W is referred to as the existing method.
As can be seen, the SSP method is supported by commercially available computer systems such as a common PC. Three platforms were chosen to compare the SSP method against the existing method. The first platform, PC1, has one Intel dual-core 3.0 GHz CPU, 4 GB RAM and 1 MB L2 cache per CPU. The second platform, PC2, is a high-end PC with two dual-core AMD 2.4 GHz CPUs, 8 GB RAM and 1 MB L2 cache per CPU. The third platform is the current HRRT computer platform, an eight-node cluster system (denoted as the HRRT CPS system), in which each node consists of two Xeon 3.06 GHz processors with 512 KB L2 cache and 2 GB RAM.
In 3D PET, the "span" denotes the mode of axial compression. The test was performed in span 3 and span 9, which are typically used in the HRRT. The parameters for OP-OSEM3D are as follows: 256×256×207 image pixels; six iterations; and 16 subsets. Thread programming is also used.
B. Results
Before discussing the computational speed, the accuracy of the reconstructed images (compared to those of the original HRRT package provided by CPS/Siemens) should be emphasized. First, the results of the projector and backprojector of the SSP method were compared with those of the existing HRRT package. As shown in FIGS. 9 and 10, only small differences were found. As mentioned previously, the SSP technique is basically a tri-linear interpolation, whereas the existing method is a bi-linear method. FIG. 9B shows the profiles for projection data at 45° and 90°. The same comparison was made for the backprojection data as well as the final reconstructed images, and the results are shown in FIGS. 10 and 11. As shown therein, the projected data and image quality of SSP are equivalent to those of the existing method, yet with an advantage of nearly two orders of magnitude in computational speed.
The performance of the SSP method (i.e., the execution time on PC1 and PC2) was also compared. The computation time was measured for the two existing compression modes, namely span 3 and span 9. The results indicate that the SSP method, as expected, indeed enhanced the performance by almost 80 times, i.e., 2 (φ+90° symmetry) × 2 (x_r or mirror-symmetry) × 2 (y′-symmetry) × 2 (θ-symmetry) × 4 (SIMD) plus other optimizations, compared to the existing method (the original HRRT package). It should be noted that this performance enhancement can be obtained more or less independently of the system architecture. Furthermore, as the method suggests, the relative improvement becomes larger as more oblique rays or angles are used, as seen from TABLES III and IV.
FIG. 9A illustrates a comparison of projection data between the existing method and the proposed SSP method (Top: Existing; Middle: SSP; Bottom: The difference between SSP and the existing method). FIG. 9B illustrates the cut-views of the sinogram at specific views (Top: Comparison at φ=45°; Middle: Comparison at φ=90°; Bottom: The differences between SSP and the existing method at φ=45° and φ=90°, respectively).
FIG. 10A illustrates a comparison of simple back-projection images between the existing method and the proposed SSP method. (Top: Existing method. Middle: New SSP technique. Bottom: The differences) FIG. 10B illustrates the cut-views (profiles) at an x axis (y=154, z=103). (Top: Comparison between the existing and new SSP. Bottom: The differences between the two)
FIG. 11 illustrates a comparison of reconstructed images with six iterations and their differences. FIG. 11A illustrates a set of reconstructed images with the existing method (top), with new SSP method (middle) and the differences (bottom). FIG. 11B illustrates the cut-views (profiles) at an x axis (y=154, z=103).
TABLE III
THE COMPARISONS OF RUNNING TIME OF THE VARIOUS OPERATIONS ON PC1

Operation                                Existing     New        Ratio
Projection (span 9)                      1539 (s)      24 (s)    64.1
Backprojection (span 9)                  1527 (s)      25 (s)    61.1
Projection (span 3)                      4276 (s)      51 (s)    83.8
Backprojection (span 3)                  4255 (s)      53 (s)    80.3
Projection + backprojection (span 9)     3066 (s)      49 (s)    62.6
Projection + backprojection (span 3)     8531 (s)     104 (s)    82.0

PC1: Intel 3.0 GHz, one dual-core CPU, 4 GB RAM, 1 MB L2 cache per CPU.
TABLE IV
THE COMPARISONS OF RUNNING TIME OF THE VARIOUS OPERATIONS ON PC2

Operation                                Existing     New        Ratio
Projection (span 9)                       556 (s)       9 (s)    61.8
Backprojection (span 9)                   679 (s)      11 (s)    61.7
Projection (span 3)                      1600 (s)      22 (s)    72.2
Backprojection (span 3)                  1767 (s)      22 (s)    80.3
Projection + backprojection (span 9)     1235 (s)      20 (s)    61.7
Projection + backprojection (span 3)     3608 (s)      44 (s)    82.0

PC2: AMD 2.4 GHz, two dual-core CPUs, 8 GB RAM, 1 MB L2 cache per CPU.
The use of more oblique rays or angles (e.g., span 3) results in better axial resolution and statistics. Since reconstruction using the EM algorithm consists of a large number of projections and backprojections, this improvement in speed is especially important. Finally, a comparative study was performed between the original HRRT CPS algorithm running on the HRRT CPS cluster system and the SSP method running on the two dual-core CPU (non-clustered) configuration, PC2; a speed gain by a factor of 9 to 11 was obtained. The difference between the calculated gain (64 times) and the actually measured gain (9 to 11 times) is probably due to the hardware difference: the original HRRT CPS system, with its 16 Xeon CPUs, is much more powerful than PC2 with only two dual-core CPUs. In addition, the SSP method can easily be extended to a cluster version.
The proposed SSP method will improve the overall computation time, especially when a dynamic functional study is desired. Currently, the image reconstruction for a normal HRRT PET scan with a commercially available PC (PC2 type) requires about seven to eight minutes for span 3, after completing the sinogram formation and the precorrection processes, compared with about eighty minutes for the current HRRT reconstruction package (OP-OSEM3D) utilizing an 8-node cluster system.
CONCLUSION
The present invention provides a fast method of calculating the ray path, which leads to the projection data and the reconstructed image in 3D, by using the symmetry properties of the projection and image data. Exploiting these simple geometrical symmetry properties and with the help of SIMD, a new ultra fast reconstruction method was developed, which has a computational speed advantage of nearly two orders of magnitude compared to the existing reconstruction package (specifically OP-OSEM3D, the latest and most widely used PET reconstruction method, especially for the HRRT), without compromising image quality. The key concepts introduced in the SSP method can be summarized as follows. First, interpolation operations are reduced from three dimensions to one dimension by using rotation based projection, or the aligned frame concept. Second, the rotation based projection is combined with the symmetry properties of (θ, φ, y′, x_r) and coupled with the SIMD technique with optimized L1 and L2 cache use. The sixteen symmetry points, which share the same interpolation coefficients (or their complementary values), were grouped, thus permitting them to be processed simultaneously in a projection/backprojection operation and thereby achieving a substantially improved overall reconstruction time. To process these symmetric points simultaneously in the projection/backprojection, SIMD operations have also been incorporated. In particular, the SIMD scheme allowed four data elements to be accessed simultaneously. In addition, the data size per loop was optimized to suit the L2 cache size, as were the data structures, to minimize the memory stride.
In summary, the projection and backprojection operations were performed in the aligned frame by using the symmetry concept and SIMD with cache optimization, which reduced the number of instructions in the reconstruction of an image from the sinogram data provided by the HRRT PET. In other words, the proposed SSP method exploits the symmetry properties of the projection to the maximum, thereby reducing the computation time by as much as 16 times. Together with the use of the SIMD operations and cache optimization, an overall computational speed gain by a factor close to 80 was achieved. Such an improvement in computation time will open the door to dynamic functional studies with high resolution PET scanners such as the HRRT, which have suffered from unacceptably long reconstruction times, a serious obstacle in molecular imaging.
This improvement will provide numerous new opportunities for PET users, especially for HRRT users in the molecular imaging community. The proposed method clearly helps PET researchers to contemplate dynamic studies, which have been limited by the unacceptable image reconstruction time imposed by the reconstruction process. The newly developed SSP method can also be applied to precorrection processes such as attenuation and scatter correction. Finally, with the new SSP method, it will be possible to achieve truly interactive scanning. The new SSP method can also be applied to large classes of existing PET scanners of various types, as well as to cluster systems for dynamic studies.
Any reference in this specification to “one embodiment,” “an embodiment,” “example embodiment,” etc., means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the invention. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment. Further, when a particular feature, structure or characteristic is described in connection with any embodiment, it is submitted that it is within the purview of one skilled in the art to effect such feature, structure or characteristic in connection with other ones of the embodiments.
Although embodiments have been described with reference to a number of illustrative embodiments thereof, it should be understood that numerous other modifications and embodiments can be devised by those skilled in the art that will fall within the spirit and scope of the principles of this disclosure. More particularly, various variations and modifications are possible in the component parts and/or arrangements of the subject combination arrangement within the scope of the disclosure, the drawings and the appended claims. In addition to variations and modifications in the component parts and/or arrangements, alternative uses will also be apparent to those skilled in the art.

Claims (21)

1. A method for reconstructing 3D image, comprising:
detecting a plurality of line of responses (LORs) emitted from an object;
transforming said plurality of LORs into first sinogram data;
back-projecting said first sinogram data with a plurality of projection angles to produce image data for said object; and
projecting said produced image data with said plurality of projection angles to transform said image data into second sinogram data,
wherein said back-projecting includes filling pixels of image plane for each of said plurality of projection angles with said first sinogram data and rotating a coordinate axis of said image plane with corresponding projection angle to produce said image data,
wherein said projecting includes rotating said image data with corresponding projection angle in an opposite direction before projecting said image data with said plurality of projection angles, and
wherein said projecting and said back-projecting use symmetry properties in coordinate space.
2. The method of claim 1, wherein said symmetry properties include at least one of symmetry properties in view angles (φ-symmetry), symmetry properties in oblique angles (θ-symmetry) and symmetry properties in interpolation coefficients along the projection lines (y′-symmetry).
3. The method of claim 2, wherein said projecting and back-projecting include executing a plurality of projecting operations having said symmetry properties simultaneously by using single instruction multiple data (SIMD).
4. The method of claim 3, wherein said using SIMD includes arranging data set having same oblique angle in amplitude.
5. The method of claim 2, wherein said projecting and back-projecting include dividing said image data into a plurality of equal subsets, and at each of a plurality of processors, performing projecting operation for each of said plurality of subsets.
6. The method of claim 5, wherein said dividing said image data into a plurality of equal subsets includes distributing jobs so that sums of ray path length is equal.
7. The method of claim 1, wherein said projecting and said back-projecting include executing a plurality of projecting operation having said symmetry properties simultaneously by using single instruction multiple data (SIMD).
8. The method of claim 7, wherein said using SIMD includes arranging data set having same oblique angle in amplitude.
9. The method of claim 1, wherein said projecting and back-projecting include dividing said image data into a plurality of equal subsets, and at each of a plurality of processors, performing projecting operation for each of said plurality of subsets.
10. The method of claim 9, wherein said dividing said image data into a plurality of equal subsets includes distributing jobs so that sums of ray path length is equal.
11. An apparatus for reconstructing 3D image, comprising:
means for detecting a plurality of line of responses (LORs) emitted from an object;
means for transforming said plurality of LORs into first sinogram data;
means for back-projecting said first sinogram data with a plurality of projection angles to produce image data for said object; and
means for projecting said produced image data with said plurality of projection angles to transform said image data into second sinogram data,
wherein said back-projecting includes filling pixels of image plane for each of said plurality of projection angles with said first sinogram data and rotating a coordinate axis of said image plane with corresponding projection angle to produce said image data,
wherein said projecting includes rotating said image data with corresponding projection angle in an opposite direction before projecting said image data with said plurality of projection angles, and
wherein said projecting and said back-projecting use symmetry properties in coordinate space.
12. The apparatus of claim 11, wherein said symmetry properties include at least one of symmetry properties in view angles (φ-symmetry), symmetry properties in oblique angles (θ-symmetry) and symmetry properties in interpolation coefficients along the projection lines (y′-symmetry).
13. The apparatus of claim 12, wherein said projecting and said back-projecting include executing a plurality of projecting operation having said symmetry properties simultaneously by using single instruction multiple data (SIMD).
14. The apparatus of claim 13, wherein said using SIMD includes arranging data set having same oblique angle in amplitude.
15. The apparatus of claim 12, wherein said projecting and said back-projecting include dividing said image data into a plurality of equal subsets, and at each of a plurality of processors, performing projecting operation for each of said plurality of subsets.
16. The apparatus of claim 15, wherein said dividing said image data into a plurality of equal subsets includes distributing jobs so that sums of ray path length is equal.
17. The apparatus of claim 11, wherein said projecting and back-projecting include executing a plurality of projecting operations having said symmetry properties simultaneously by using single instruction multiple data (SIMD).
18. The apparatus of claim 17, wherein said using SIMD includes arranging data set having same oblique angle in amplitude.
19. The apparatus of claim 11, wherein said projecting and back-projecting include dividing said image data into a plurality of equal subsets, and at each of a plurality of processors, performing projecting operation for each of said plurality of subsets.
20. The apparatus of claim 19, wherein said dividing said image data into a plurality of equal subsets includes distributing jobs so that sums of ray path length is equal.
21. An apparatus for reconstructing a 3D image, the apparatus comprising:
apparatus for detecting a plurality of line of responses (LORs) emitted from an object; and
a processor programmed with an algorithm for:
transforming the plurality of LORs into first sinogram data;
back-projecting the first sinogram data with a plurality of projection angles to produce image data for the object, the back-projecting including filling pixels of an image plane for each of said plurality of projection angles with said first sinogram data and rotating a coordinate axis of the image plane with a corresponding projection angle to produce the image data; and
projecting the image data with the plurality of projection angles to transform the image data into second sinogram data, the projecting including rotating the image data with a corresponding projection angle in an opposite direction before projecting the image data with the plurality of projection angles, wherein the projecting and back-projecting use symmetry properties in coordinate space.
US12/498,952 2006-05-10 2009-07-07 Method and apparatus for ultra fast symmetry and SIMD based projection-back projection for 3D pet image reconstruction Active USRE42034E1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/498,952 USRE42034E1 (en) 2006-05-10 2009-07-07 Method and apparatus for ultra fast symmetry and SIMD based projection-back projection for 3D pet image reconstruction

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
KR10-2006-0042155 2006-05-10
KR20060042155 2006-05-10
KR10-2007-0027305 2007-03-20
KR1020070027305A KR20070110186A (en) 2006-05-10 2007-03-20 Method and apparatus for ultra fast symmetry and simd based projection-backprojection for 3d pet image reconstruction
KR1020070040515A KR20070110191A (en) 2006-05-10 2007-04-25 Method and apparatus for ultra fast symmetry and simd based projection-backprojection for 3d pet image reconstruction
KR10-2007-0040515 2007-04-25
US11/800,657 US7456406B2 (en) 2006-05-10 2007-05-07 Method and apparatus for ultra fast symmetry and SIMD based projection-backprojection for 3D pet image reconstruction
US12/498,952 USRE42034E1 (en) 2006-05-10 2009-07-07 Method and apparatus for ultra fast symmetry and SIMD based projection-back projection for 3D pet image reconstruction

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US11/800,657 Reissue US7456406B2 (en) 2006-05-10 2007-05-07 Method and apparatus for ultra fast symmetry and SIMD based projection-backprojection for 3D pet image reconstruction

Publications (1)

Publication Number Publication Date
USRE42034E1 true USRE42034E1 (en) 2011-01-18

Family

ID=38789018

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/800,657 Ceased US7456406B2 (en) 2006-05-10 2007-05-07 Method and apparatus for ultra fast symmetry and SIMD based projection-backprojection for 3D pet image reconstruction
US12/498,952 Active USRE42034E1 (en) 2006-05-10 2009-07-07 Method and apparatus for ultra fast symmetry and SIMD based projection-back projection for 3D pet image reconstruction

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/800,657 Ceased US7456406B2 (en) 2006-05-10 2007-05-07 Method and apparatus for ultra fast symmetry and SIMD based projection-backprojection for 3D pet image reconstruction

Country Status (2)

Country Link
US (2) US7456406B2 (en)
DE (1) DE102007020879A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5263402B2 (en) * 2009-09-04 2013-08-14 株式会社島津製作所 Data processing method for nuclear medicine and diagnostic apparatus for nuclear medicine
FR2989785B1 (en) * 2012-04-20 2014-06-13 Imacisio CHARACTERIZATION METHOD OF A PET SYSTEM, ACQUISITION METHOD, IMAGE RECONSTRUCTION METHOD, DEVICE AND PET SYSTEM IMPLEMENTING SUCH A METHOD
KR20140042461A (en) 2012-09-28 2014-04-07 삼성전자주식회사 Method and apparatus to correct motion
CN103593868B (en) * 2013-10-12 2016-04-27 沈阳东软医疗系统有限公司 Based on projecting method and the device of the image reconstruction of PET
CN104484857B (en) * 2014-12-26 2017-08-18 国网重庆市电力公司电力科学研究院 A kind of instrumented data read method and system
CN105389788B (en) 2015-10-13 2019-03-05 沈阳东软医疗系统有限公司 The method for reconstructing and device, merging method and device of the more bed images of PET
CN106910157B (en) * 2017-02-17 2020-06-26 公安部第一研究所 Multi-stage parallel image reconstruction method and device
CN108537755B (en) * 2018-04-16 2022-02-15 广东工业大学 PET image enhancement method and system based on geometric structure constraint
US11436765B2 (en) * 2018-11-15 2022-09-06 InstaRecon Method and system for fast reprojection
WO2021102614A1 (en) * 2019-11-25 2021-06-03 中国科学院深圳先进技术研究院 Method and terminal for processing positron emission tomography (pet) data

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6282257B1 (en) * 1999-06-23 2001-08-28 The Board Of Trustees Of The University Of Illinois Fast hierarchical backprojection method for imaging
US20020054659A1 (en) * 2000-08-14 2002-05-09 Miwa Okumura Radiation detector, radiation detecting system and X-ray CT apparatus
US20020106051A1 (en) 2000-12-20 2002-08-08 Wido Menhardt Image reconstruction using multiple X-ray projections
US20020122528A1 (en) 2000-12-29 2002-09-05 Besson Guy M. Computed tomography (ct) weighting for high quality image reconstruction
US20020154727A1 (en) 2001-02-16 2002-10-24 Ruola Ning System and method for fast parallel cone-beam reconstruction using one or more microprocessors
US20030161443A1 (en) * 2002-02-28 2003-08-28 The Board Of Trustees Of The University Of Illinois Methods and apparatus for fast divergent beam tomography
US20050123089A1 (en) 2003-12-09 2005-06-09 Man Bruno D. Method and apparatus for reduction of artifacts in computed tomography images
US7522695B2 (en) * 2005-07-19 2009-04-21 Ge Medical Systems Global Technology Company, Llc X-ray CT apparatus

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006062002A (en) 2004-08-25 2006-03-09 Oki Electric Ind Co Ltd Method of segmenting substrate of semiconductor device
KR20070027305A (en) 2005-09-06 2007-03-09 아성기계 주식회사 Steam nozzle of pattern processing apparatus for various designshape of fiber fabrics
KR20070040515A (en) 2005-10-12 2007-04-17 삼성전자주식회사 Method for manufacturing a specimen


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
I.K. Hong, et al., "Fast Forward Projection and Backward Projection Algorithm Using SIMD," IEEE, San Francisco 2006, Conference Program, M14-372.
Official Korean Action, with Translation, dated Sep. 18, 2008, as cited in Korean Patent Application No. 10-2007-0040515. 8 Pgs.

Also Published As

Publication number Publication date
US7456406B2 (en) 2008-11-25
US20070278410A1 (en) 2007-12-06
DE102007020879A1 (en) 2009-04-02

Similar Documents

Publication Publication Date Title
USRE42034E1 (en) Method and apparatus for ultra fast symmetry and SIMD based projection-back projection for 3D pet image reconstruction
Hong et al. Ultra fast symmetry and SIMD-based projection-backprojection (SSP) algorithm for 3-D PET image reconstruction
Sabne et al. Model-based iterative CT image reconstruction on GPUs
Pratx et al. Fast, accurate and shift-varying line projections for iterative reconstruction using the GPU
US5253171A (en) Parallel processing method and apparatus based on the algebra reconstruction technique for reconstructing a three-dimensional computerized tomography (CT) image from cone beam projection data
US8767908B2 (en) Exact and approximate rebinning of time-of-flight PET positron emission tomography data
Chen et al. Parallelization of the EM algorithm for 3-D PET image reconstruction
US9189870B2 (en) Method, computer readable medium and system for tomographic reconstruction
US20100266178A1 (en) System and method for acceleration of image reconstruction
Lu et al. Cache-aware GPU optimization for out-of-core cone beam CT reconstruction of high-resolution volumes
Li et al. Parallel iterative cone beam CT image reconstruction on a PC cluster
US20120189158A1 (en) Optimized implementation of back projection for computed tomography (ct)
US20050151736A1 (en) Method and device for constructing an image in a spatial volume
CN111640054A (en) Three-dimensional reconstruction method based on GPU acceleration
KR100996546B1 (en) Method and apparatus for ultra fast symmetry and simd based projection-backprojection for 3d pet image reconstruction
Teimoorisichani et al. A cube-based dual-GPU list-mode reconstruction algorithm for PET imaging
Xu et al. Mapping iterative medical imaging algorithm on cell accelerator
Hong et al. Fast forward projection and backward projection algorithm using SIMD
Deng Parallel computing techniques for computed tomography
Kudo et al. Performance of quasi-exact cone-beam filtered backprojection algorithm for axially truncated helical data
Ha et al. A GPU-accelerated multivoxel update scheme for iterative coordinate descent (ICD) optimization in statistical iterative CT reconstruction (SIR)
Steckmann et al. High performance cone-beam spiral backprojection with voxel-specific weighting
Jiang et al. A direct filtered back-projection reconstruction method for inverse geometry CT without gridding: a simulation study
KR20070110191A (en) Method and apparatus for ultra fast symmetry and simd based projection-backprojection for 3d pet image reconstruction
Fan et al. A block-wise approximate parallel implementation for ART algorithm on CUDA-enabled GPU

Legal Events

Date Code Title Description
FPAY Fee payment

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

FPAY Fee payment

Year of fee payment: 8

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2553); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

Year of fee payment: 12