WO2011133606A2 - Real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy - Google Patents

Real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy

Info

Publication number
WO2011133606A2
Authority
WO
WIPO (PCT)
Prior art keywords
image
tumor
pca
dvfs
volumetric
Prior art date
Application number
PCT/US2011/033133
Other languages
English (en)
Other versions
WO2011133606A3 (fr)
Inventor
Steve B. Jiang
Ruijiang Li
Original Assignee
The Regents Of The University Of California
Priority date
Filing date
Publication date
Application filed by The Regents Of The University Of California filed Critical The Regents Of The University Of California
Publication of WO2011133606A2 publication Critical patent/WO2011133606A2/fr
Publication of WO2011133606A3 publication Critical patent/WO2011133606A3/fr


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/58 Testing, adjusting or calibrating thereof
    • A61B6/582 Calibration
    • A61B6/583 Calibration using calibration phantoms
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/12 Arrangements for detecting or locating foreign bodies
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/50 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61N ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N5/00 Radiation therapy
    • A61N5/10 X-ray therapy; Gamma-ray therapy; Particle-irradiation therapy
    • A61N5/1048 Monitoring, verifying, controlling systems and methods
    • A61N5/1049 Monitoring, verifying, controlling systems and methods for verifying the position of the patient with respect to the radiation beam
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/277 Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B23/00 Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes
    • G09B23/28 Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes for medicine
    • G09B23/286 Models for scientific, medical, or mathematical purposes, e.g. full-sized devices for demonstration purposes for medicine for scanning or photography techniques, e.g. X-rays, ultrasonics
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61N ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N5/00 Radiation therapy
    • A61N5/10 X-ray therapy; Gamma-ray therapy; Particle-irradiation therapy
    • A61N5/1048 Monitoring, verifying, controlling systems and methods
    • A61N5/1049 Monitoring, verifying, controlling systems and methods for verifying the position of the patient with respect to the radiation beam
    • A61N2005/1061 Monitoring, verifying, controlling systems and methods for verifying the position of the patient with respect to the radiation beam using an x-ray imaging system having a separate imaging source
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10076 4D tomography; Time-sequential 3D tomography
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10116 X-ray image
    • G06T2207/10121 Fluoroscopy
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30061 Lung
    • G06T2207/30064 Lung nodule

Definitions

  • This application relates to devices and techniques for radiotherapy.
  • Radiotherapy treatment machines equipped with an on-board imaging system, which includes a kV x-ray source and a flat-panel imager, can be used to track lung tumors directly without implanted markers.
  • Direct localization techniques are mainly for images acquired from the anterior-posterior (AP) direction and for tumors with well-defined boundaries. Direct localization techniques also involve acquisition and labeling of training fluoroscopic data prior to treatment, and the third dimension of tumor motion, perpendicular to the x-ray imager, cannot be resolved. For localization based on implanted markers, the invasive marker-implantation procedure has an associated risk of pneumothorax. Surrogate-based localization sometimes involves no additional radiation dose to the patient (e.g., optical localization), but the relationship between tumor motion and the surrogates varies both intra- and inter-fractionally.
  • a method for implementing real-time volumetric image reconstruction and 3D tumor localization includes performing deformable image registration between a reference phase and the remaining N-1 phases for a set of volumetric images of a patient at N breathing phases to obtain a set of N-1 deformation vector fields (DVFs).
  • the set of DVFs is represented using eigenvectors and coefficients obtained from principal component analysis (PCA).
  • PCA coefficients are varied to generate new DVFs.
  • the generated new DVFs are applied on a reference image to generate new volumetric images.
  • the method can include generating a reconstructed volumetric image from a single projection image by optimizing the PCA coefficients such that the computed projection image of the new volumetric image matches the measured projection image.
  • the method can include deriving a 3D location of a tumor by applying an inverted DVF on a position of the tumor in the reference image.
  • the method can include using x-ray projections with rotational geometry.
  • the method can include using the patient's 3D surface images measured with a range camera.
  • the method can include using one or multiple markers placed on the patient's chest or abdomen.
  • the method can include using one or multiple fiducial markers implanted inside the patient.
  • the method can include using the patient's anatomic features, such as the diaphragm.
  • the method can include using pressure measured with a belt.
  • the method can include using lung volume measured with a spirometer.
  • the method can include using lung air flow measured with a spirometer.
  • the method can include incorporating respiratory motion.
  • the method can include using fixed-angle geometry, such as fluoroscopy.
  • a method for implementing real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy includes generating a parameterized PCA model.
  • Generating the parameterized PCA model includes approximating a DVF relative to a reference image as a function of space and time by a linear combination of the sample mean vector and a few eigenvectors corresponding to the largest eigenvalues.
  • the method can include generating a set of optimal PCA coefficients such that the projection of the corresponding volumetric image matches with the measured x-ray projection.
  • the method can include identifying a tumor position, which can include using a push-forward DVF that is defined on the coordinates of the reference image and describes how each voxel in the reference image moves, adopting an efficient fixed-point algorithm for deformation inversion, and calculating the tumor position.
  • a graphics processing unit is configured to perform operations including performing deformable image registration between a reference phase and remaining N-1 phases for a set of volumetric images of a patient at N breathing phases to obtain a set of N-1 deformation vector fields (DVFs).
  • the GPU is configured to represent the set of DVFs using eigenvectors and coefficients obtained from principal component analysis (PCA).
  • the GPU is configured to vary the PCA coefficients to generate new DVFs.
  • the GPU is configured to apply the generated new DVFs on a reference image to generate new volumetric images.
  • Implementations can optionally include one or more of the following features.
  • the GPU can be configured to generate a reconstructed volumetric image from a single projection image by optimizing the PCA coefficients such that the computed projection image of the new volumetric image matches the measured projection image.
  • the GPU can be configured to derive a 3D location of a tumor by applying an inverted DVF on a position of the tumor in the reference image.
  • a method of manufacturing a GPU includes using an algorithm to perform the following operations using a programming environment selected from a group comprising compute unified device architecture (CUDA), Brook+, OpenCL and DirectCompute.
  • the operations include performing deformable image registration between a reference phase and the remaining N-1 phases for a set of volumetric images of a patient at N breathing phases to obtain a set of N-1 deformation vector fields (DVFs); representing the set of DVFs using eigenvectors and coefficients obtained from principal component analysis (PCA); and varying the PCA coefficients to generate new DVFs.
  • Implementations can optionally include one or more of the following features.
  • the manufactured GPU can perform operations including generating a reconstructed volumetric image from a single projection image by optimizing the PCA coefficients such that the computed projection image of the new volumetric image matches the measured projection image.
  • the manufactured GPU can perform operations comprising deriving a 3D location of a tumor by applying an inverted DVF on a position of the tumor in the reference image.
  • the subject matter described in this specification can potentially provide one or more of the following advantages. For example, the described techniques, systems and apparatus can use a single projection image for real-time volumetric image reconstruction and 3D tumor localization.
  • FIGS. 1A and 1B are flow charts illustrating a principal component analysis (PCA) lung motion model based process 100 for real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy.
  • FIGS. 2A, 2B and 2C are flow charts that summarize an exemplary respiratory motion prediction incorporating process for volumetric image reconstruction and tumor localization based on a single x-ray projection.
  • FIG. 3A shows a "measured" projection of a test image at a right posterior oblique (RPO) angle.
  • FIG. 3B shows an objective value (top) and a relative image reconstruction error (bottom) at each iteration, given the image in FIG. 3A.
  • FIGS. 4A, 4B and 4C show a sagittal view of an absolute difference image between: A) ground truth test and reference images, B) ground truth test image and image reconstructed using a single projection, and C) ground truth test image and the deformed reference image using demons.
  • The tumor is the round object near the center of the slice.
  • FIGS. 4D, 4E and 4F show the same as FIGS. 4A, 4B and 4C, except for coronal view.
  • The tumor is the round object in the right lung.
  • FIG. 5 shows, on the top row, the relative image error between the ground truth test image and (i) the reference image and (ii) the image reconstructed using the proposed algorithm, as a function of cone beam projection angle; the bottom row shows the same, except for the 3D localization error.
  • FIGS. 6A, 6B, 6C, 6D, 6E, 6F, 6G, and 6H show the SI tumor position estimated by the respiratory motion prediction incorporating algorithm as well as the ground truth for the digital respiratory phantom for Cases 1 through 8.
  • FIGS. 7A and 7B show the estimated and ground truth tumor position in the SI direction for both regular and irregular breathing.
  • Results for the five lung cancer patients are shown in FIGS. 8A, 8B, 8C, 8D, 8E, 8F, 8G, 8H, 8I and 8J, where the tumor positions on the axial and tangential directions are shown separately.
  • FIGS. 9A, 9B, 9C, 9D, 9E, and 9F show the image reconstruction and tumor localization results for patient 1 at an EOE phase and an EOI phase (indicated in FIG. 8, Case 11), as well as the corresponding cone beam x-ray projections.
  • FIG. 10 shows the average localization error on tangential and axial directions as well as computational time versus the downsampling factor.
  • FIGS. 11A, 11B, 11C, 11D, 11E and 11F show scatter plots between the image intensities of the preprocessed cone beam projection and of the computed projection of the reconstructed image at two different projection angles for patient 1, corresponding to the two projections shown in FIGS. 9A, 9B, 9C, 9D, 9E and 9F.
  • FIG. 12 is a block diagram of an exemplary system 1200 for implementing real-time volumetric image reconstruction and 3D tumor localization for lung cancer radiotherapy.
  • the techniques, structures and apparatus described in this application can be used to implement real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy.
  • an algorithm can be provided to reconstruct volumetric image and to locate 3D tumor position from a single x-ray projection image.
  • implementations can include application of the algorithm on a graphics processing unit (GPU) to obtain real-time (e.g. sub-second) performance.
  • the described algorithm and respiratory motion prediction incorporating algorithm can also use surface image, surface marker, implanted marker, diaphragm, lung volume, and air flow to reconstruct a volumetric image and to derive the 3D tumor position.
  • FIGS. 1A and 1B are flow charts illustrating a principal component analysis (PCA) lung motion model based process 100 for real-time volumetric image reconstruction and 3D tumor localization.
  • the process 100 can be implemented using a Graphics Processing Unit (GPU) as described below.
  • deformable image registration can be performed between a reference phase and the other N-1 phases, resulting in N-1 deformation vector fields (DVFs) (110).
  • a parametric PCA lung motion model can be obtained (120). These DVFs can be represented efficiently by a few eigenvectors and coefficients obtained from the PCA lung motion model (130).
  • the GPU can perform image reconstruction using the obtained parametric PCA lung motion model (140).
  • the GPU can identify a set of optimal PCA coefficients such that the projection of the corresponding volumetric image matches with the measured x-ray projection (142).
  • the PCA coefficients are initialized to those obtained for the last projection (144). Then the PCA coefficients are varied until the projection of the corresponding volumetric image matches with the measured x-ray projection (146).
  • new DVFs can be generated, which, when applied on the reference image, can lead to new volumetric images.
  • a volumetric image can be constructed from a single projection image by optimizing the PCA coefficients such that its computed projection matches the measured one.
  • the 3D location of the tumor can be derived by applying the inverted DVF on its position in the reference image (150).
  • the described algorithm or process can be implemented on graphics processing units (GPUs) to achieve real-time efficiency.
  • The training data were generated using a realistic and dynamic mathematical phantom with 10 breathing phases, as described in this document.
  • the testing data were 360 cone beam projections corresponding to one gantry rotation, simulated using the same phantom with a 50% increase in breathing amplitude.
  • volumetric images can be reconstructed and tumor positions can be localized in 3D in near real-time from a single x-ray image.
  • An algorithm based on the PCA lung motion model can be used to reconstruct volumetric images and extract 3D tumor motion information in real time from a single x-ray projection in a non-invasive (no marker implantation required), accurate, and efficient way.
  • the described techniques for volumetric image reconstruction and 3D tumor localization can use 4DCT from treatment simulation as a priori knowledge.
  • deformable image registration can be performed between a reference phase and the other N-1 phases, resulting in a set of N-1 deformation vector fields (DVFs).
  • This set of DVFs can be represented efficiently by a few eigenvectors and coefficients obtained from PCA.
  • a new DVF can be generated.
  • a new volumetric image can be obtained when the DVF is applied on the reference image.
  • a volumetric image can be reconstructed from a single projection image by optimizing the PCA coefficients such that its computed projection image matches the measured one of the new volumetric image.
  • the 3D location of the tumor can be derived by applying the inverted DVF on its position in the reference image.
  • the DVF relative to a reference image, as a function of space and time, can be approximated by a linear combination of the sample mean vector and a few eigenvectors corresponding to the largest eigenvalues, i.e.,

    x(t) = x̄ + Σ_k u_k · w_k(t),   (1)

    where x(t) is the parameterized DVF as a function of space and time, x̄ is the mean DVF with respect to time, the u_k are the eigenvectors obtained from PCA and are functions of space, and the scalars w_k(t) are the PCA coefficients and are functions of time. The eigenvectors are fixed after PCA, and the temporal evolution of the PCA coefficients can drive the new volumetric image and the dynamic lung motion in real time.
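As an illustrative sketch (not part of the original disclosure), the PCA lung motion model described above can be prototyped with NumPy; the array sizes and the random stand-in DVFs below are hypothetical placeholders for registered 4DCT deformation fields:

```python
import numpy as np

# Toy stand-in for the N-1 registered DVFs: each column is one flattened
# deformation vector field (3 displacement components per voxel).
rng = np.random.default_rng(0)
n_components, n_phases = 1000, 9
dvfs = rng.normal(size=(n_components, n_phases))

# PCA: subtract the temporal mean, then keep the leading eigenvectors
# (left singular vectors of the mean-subtracted data).
x_mean = dvfs.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(dvfs - x_mean, full_matrices=False)
u = U[:, :3]  # eigenvectors for the 3 largest eigenvalues

def dvf_from_coeffs(w):
    """Parameterized DVF: mean plus a linear combination of eigenvectors."""
    return x_mean[:, 0] + u @ w

# Projecting a training DVF onto the model gives its PCA coefficients,
# and the model reconstructs an approximation of that DVF.
w0 = u.T @ (dvfs[:, 0] - x_mean[:, 0])
approx = dvf_from_coeffs(w0)
```

Varying `w` away from the training coefficients then generates new DVFs, and hence new volumetric images when applied to the reference image.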
  • the PCA motion model is so flexible that it is capable of representing tumor motion beyond that of the training set. For instance, if the PCA coefficients are allowed to change arbitrarily, then given two eigenvectors, the tumor can move anywhere in a plane defined by the corresponding entries in the eigenvectors, and thus hysteresis motion can be handled; given three eigenvectors, the tumor can move anywhere in space. Each voxel is associated with three entries (corresponding to the three canonical directions in space) in each and every eigenvector.
  • each eigenvector defines a vector (or a direction) in space along which each voxel moves; two eigenvectors span a plane; and three eigenvectors span the entire 3D space where the tumor can move.
  • This important feature can allow the PCA motion model to capture a wide range of tumor motion trajectories.
  • the first three PCA coefficients associated with the largest eigenvalues in the PCA lung motion model can be retained. Because the PCA model attempts to efficiently encode the complex spatiotemporal patterns of the entire lung motion, the selection of three PCA coefficients is based on both temporal and spatial considerations.
  • the first three PCA components can be interpreted in the temporal domain as three physical quantities: the mean, velocity, and acceleration of the tumor motion.
  • although the motion of the entire lung is more complicated due to its high spatial dimensionality, because of the high degree of redundancy and correlation the entire lung motion may still be well approximated by the three physical components.
  • the entire lung motion can be completely represented by as few as two PCA components, and for more realistic and complicated lung motion, the PCA spectrum is still dominated by only two or three modes for most breathing patterns.
  • three is the smallest number of PCA coefficients capable of capturing a full 3D tumor motion trajectory.
  • using more PCA coefficients, although more flexible, could lead to overfitting and may not be necessary.
  • a set of optimal PCA coefficients are identified such that the projection of the reconstructed volumetric image corresponding to the new DVF matches with the measured x-ray projection.
  • Let f0 denote the reference image, f the reconstructed image, y the measured projection image, and P a projection matrix which computes the projection image of f. The cost function can be represented using Equation (2):

    J(w, a, b) = ‖ a · P f(x(w)) + b − y ‖²,   (2)

    where a and b are scalar parameters that account for the intensity mismatch between the computed and measured projections.
  • In step 1, the update for (a, b) is the unique minimizer of the cost function with w fixed.
  • Step 2 is a gradient-descent update of w with (a, b) fixed, where the step size μ is found by Armijo's rule.
  • the algorithm is also terminated when the maximum number of iterations is reached (e.g., fixed at 10 for one aspect). In the case of largest deformation from the reference image, it can take around 8 or 9 iterations for the algorithm to 'converge,' in the sense that the cost function does not change much after that. Therefore, a maximum of 10 iterations should be sufficient to get convergent results.
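A minimal numeric sketch of this two-step scheme is shown below: a closed-form least-squares update for (a, b) with w fixed, then a small gradient step on w with (a, b) fixed. The projection matrix P, the linear image model F, the scaling, and the fixed step size are all toy stand-ins, not the operators described in this document:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy linear operators, scaled small so a modest fixed step size is stable.
P = 0.1 * rng.normal(size=(50, 200))   # toy projection matrix
F = 0.1 * rng.normal(size=(200, 3))    # toy linear image model: f(w) = F @ w
w_true = np.array([1.0, -2.0, 0.5])
y = 2.0 * (P @ F @ w_true) + 3.0       # "measured" projection (a = 2, b = 3)

w = np.array([0.0, 1.0, 0.0])          # initial PCA coefficients
costs = []
for _ in range(10):                    # maximum of 10 iterations
    p = P @ F @ w
    # Step 1: (a, b) is the unique least-squares minimizer with w fixed.
    A = np.column_stack([p, np.ones_like(p)])
    a, b = np.linalg.lstsq(A, y, rcond=None)[0]
    r = a * p + b - y
    costs.append(float(r @ r))
    # Step 2: gradient descent on w with (a, b) fixed (a fixed small step
    # here stands in for the Armijo line search).
    w -= 1e-4 * (2.0 * a * F.T @ (P.T @ r))
```

With a sufficiently small step, the cost is non-increasing across iterations, mirroring the convergence behavior described above.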
  • the DVF is updated according to Eq. (2) and the reconstructed image f n+1 is obtained through trilinear interpolation.
  • ∂f/∂x has to be consistent with the interpolation process in order to get the correct gradient.
  • the expression for ∂f/∂x can be derived. Let f0(i, j, k) and f(i, j, k) be the reference image and the reconstructed image at iteration n, indexed by the integer voxel coordinates (i, j, k). Then

    f(i, j, k) = f0(i + x1(i, j, k), j + x2(i, j, k), k + x3(i, j, k)).   (5)
  • the Jacobian matrix ∂f/∂x_l is a diagonal matrix.
  • With trilinear interpolation, writing (l, m, n) for the integer parts and (z1, z2, z3) for the fractional parts of the deformed coordinates,

    f(i, j, k) = f0(l, m, n)(1 − z1)(1 − z2)(1 − z3) + f0(l + 1, m, n) z1 (1 − z2)(1 − z3) + ⋯ ,   (7)

    where the omitted six terms weight the remaining neighboring grid points analogously.
  • ∂f(i, j, k)/∂x1(i, j, k) = [f0(l + 1, m, n) − f0(l, m, n)](1 − z2)(1 − z3) + ⋯   (10)
  • Equation (10) is the exact derivative given the continuous patient geometry obtained from trilinear interpolation in Eq. (7).
  • any reasonable kind of interpolation can be used here, e.g. , tricubic interpolation.
  • the Jacobian matrix ∂f/∂x_l does not have a nice diagonal form anymore, but may be block diagonal.
  • ∂f/∂x is a linear combination of the spatial gradients of the reference image evaluated at the eight neighboring grid points, weighted by the appropriate fractional parts of the DVF.
  • ∂f/∂x contains only local information at the level of individual voxels; it is the eigenvectors that effectively combine all the local information into global information and lead to the correct update of the PCA coefficients.
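A small self-contained check of Eqs. (7) and (10) (a hypothetical helper, not code from this disclosure): for a reference image that is linear in the first index, the trilinearly interpolated value at a displaced voxel and its derivative with respect to the x1 displacement component can be computed from the eight neighboring grid values:

```python
import numpy as np

def trilinear_and_grad_x1(f0, x1, x2, x3):
    """Trilinear interpolation of f0 at (x1, x2, x3) and d(value)/dx1."""
    l, m, n = int(np.floor(x1)), int(np.floor(x2)), int(np.floor(x3))
    z1, z2, z3 = x1 - l, x2 - m, x3 - n
    c = f0[l:l + 2, m:m + 2, n:n + 2]      # the eight neighboring grid values
    w1 = np.array([1 - z1, z1])
    w2 = np.array([1 - z2, z2])
    w3 = np.array([1 - z3, z3])
    val = np.einsum('i,j,k,ijk->', w1, w2, w3, c)
    # d/dx1 replaces the w1 weights by their derivatives (-1, 1), i.e. the
    # neighboring-grid-point differences of Eq. (10):
    grad = np.einsum('i,j,k,ijk->', np.array([-1.0, 1.0]), w2, w3, c)
    return val, grad

# For a reference image equal to the first index, df/dx1 is exactly 1.
i, j, k = np.meshgrid(np.arange(5), np.arange(5), np.arange(5), indexing='ij')
f0 = i.astype(float)
val, grad = trilinear_and_grad_x1(f0, 1.3, 2.6, 0.4)
```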
  • the PCA coefficients are initialized to those obtained for the last projection.
  • the results of PCA coefficients initialized to those obtained for the last projection are shown in FIGS. 3A, 3B, 4A, 4B, 4C, 4D, 4E, 4F and 5.
  • an algorithm that incorporates respiratory motion prediction considers the last few projections and initializes the PCA coefficients to their predicted values, taking into account the respiratory dynamics. This might become important when the imaging frequency is low.
  • There is a vast amount of literature on respiratory motion prediction models that can be used. In this document, as an example, a simple linear prediction model is used, which predicts the current sample using a linear combination of several previous samples, i.e.,

    w_k0(t) = Σ_l α_l · w_k*(t − l),   (11)

    where w_k0(t) is the initial guess for the k-th PCA coefficient for the current projection, and w_k*(t − l) is the optimized k-th PCA coefficient from the l-th previous projection.
  • Each PCA coefficient has its own prediction model associated with it, since the coefficients have different dynamics.
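The linear prediction model described above might be fit as follows; the prediction order, the sinusoidal toy coefficient trace, and the least-squares fitting are illustrative assumptions, not details taken from this disclosure:

```python
import numpy as np

order = 3
t = np.arange(100)
w_train = np.sin(2 * np.pi * t / 25)  # toy periodic PCA-coefficient trace

# Regression: w(t) ~ alpha_1 w(t-1) + ... + alpha_order w(t-order),
# with the weights fit by least squares on the training trajectory.
X = np.column_stack(
    [w_train[order - l:len(w_train) - l] for l in range(1, order + 1)]
)
target = w_train[order:]
alpha = np.linalg.lstsq(X, target, rcond=None)[0]

def predict(history):
    """Initial guess for the next coefficient from the last `order` values."""
    return float(np.dot(alpha, history[::-1][:order]))
```

For a purely periodic trace like this toy example, the linear predictor is essentially exact; real breathing traces would of course carry prediction error.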
  • the respiratory motion prediction incorporating process and results are shown with respect to FIGS. 2A, 2B, 2C and 6A-11F.
  • the training set (i.e., the 4DCT) can be used to build the prediction model (alternatively, the first few breathing cycles during the CBCT scan may be used to build the prediction model).
  • Because the sampling rates of the 4DCT and the CBCT scan are different, the PCA coefficients can first be interpolated from the training set and re-sampled to have the same frequency as in the CBCT scan. For example, for a patient with a 4-sec breathing period, the sampling rate for 4DCT is 2.5 Hz if 10 phases were reconstructed. On the other hand, the sampling rate for CBCT projections is faster, usually on the order of 10 Hz for a typical 1-min scan with around 600 projections.
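The resampling step might be sketched as follows, using the example rates from the text (2.5 Hz for a 10-phase 4DCT over a 4-s period, 10 Hz for CBCT projections); the cosine coefficient trace is a toy stand-in for a real PCA coefficient:

```python
import numpy as np

t_4dct = np.arange(0.0, 4.0, 0.4)          # 10 phases over one 4-s cycle (2.5 Hz)
w_4dct = np.cos(2 * np.pi * t_4dct / 4.0)  # toy PCA-coefficient trace
t_cbct = np.arange(0.0, 3.6 + 1e-9, 0.1)   # 10 Hz samples within the cycle
w_cbct = np.interp(t_cbct, t_4dct, w_4dct) # linear interpolation to CBCT rate
```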
  • the model fitting process can be performed separately for each of the PCA coefficients since they are assumed to be independent in the PCA motion model.
  • the prediction model can be used to obtain a "good" initial guess for the PCA coefficients, so that the efficiency of the algorithm may be improved.
  • the linear model used here is sufficient.
  • the DVF found by Eq. (2) is a pull-back DVF, which is defined on the coordinate of the new image and describes how each voxel in the new image moves.
  • the DVF cannot be used directly to calculate the tumor position in the new image unless the DVF is rigid everywhere.
  • the inverse, i.e., the push-forward DVF, should be obtained.
  • the inverse can be defined on the coordinate of the reference image and can describe how each voxel in the reference image moves.
  • an efficient fixed-point algorithm can be adapted for deformation inversion, which can be about 10 times faster and yet 10 times more accurate than other gradient-based iterative algorithms.
  • the deformation inversion procedure can be applied to only one voxel, which corresponds to the tumor center of volume in the reference image.
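A fixed-point deformation inversion at a single point can be sketched as below; the analytic toy displacement field u (chosen contractive so the iteration converges) and the iteration count are hypothetical, standing in for a pull-back DVF defined on a voxel grid. The inverse displacement v at the tumor position p satisfies v = −u(p + v), and is found by iterating v ← −u(p + v) from v = 0:

```python
import numpy as np

def u(p):
    """Toy pull-back displacement field (Lipschitz constant 0.2 < 1)."""
    return 0.2 * np.sin(p)

def invert_at_point(p, n_iter=30):
    """Fixed-point iteration for the inverse displacement at point p."""
    v = np.zeros_like(p)
    for _ in range(n_iter):
        v = -u(p + v)
    return v

p_ref = np.array([1.0, -0.5, 2.0])     # e.g. tumor center in the reference image
v = invert_at_point(p_ref)
# Consistency check of the fixed point: v + u(p_ref + v) should vanish.
residual = np.linalg.norm(v + u(p_ref + v))
```

Applying the inversion only at the tumor center, rather than over the whole grid, is what keeps this step cheap.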
  • the above described PCA lung motion model based algorithm can be modified to incorporate respiratory motion prediction.
  • the accuracy and efficiency of the respiratory motion prediction incorporating algorithm for 3D tumor localization were evaluated on 1) a digital respiratory phantom, 2) a physical respiratory phantom, and 3) five lung cancer patients. Described in this document are these evaluation cases that include both regular and irregular breathing patterns that are different from the training dataset.
  • the average 3D tumor localization error is less than 1 mm which does not seem to be affected by amplitude change, period change, or baseline shift.
  • the average computation time for 3D tumor localization from each projection ranges between 0.19 and 0.26 seconds, for both regular and irregular breathing, which is about a 10% improvement over the first algorithm described above.
  • an average tumor localization error below 1 mm was achieved with an average computation time of 0.13 and 0.16 seconds on the same GPU card, for regular and irregular breathing, respectively.
  • the average tumor localization error is below 2 mm in both the axial and tangential directions.
  • the average computation time on the same GPU card ranges between 0.26 and 0.34 seconds.
  • the accuracy of this algorithm in 3D tumor localization is described to be on the order of 1 mm on average and 2 mm at the 95th percentile for both digital and physical phantoms, and within 2 mm on average and 4 mm at the 95th percentile for lung cancer patients.
  • the results also indicate that the accuracy is not affected by the breathing pattern, be it regular or irregular.
  • High computational efficiency can be achieved on GPU, requiring 0.1 - 0.3 s for each x-ray projection.
  • FIGS. 2A, 2B and 2C are flow charts that summarize an exemplary respiratory motion prediction incorporating process 200 for volumetric image reconstruction and tumor localization based on a single x-ray projection.
  • a GPU can be implemented to perform the process 200 as described below.
  • the process 200 is similar to the process 100 except for incorporating respiratory motion prediction.
  • the process 200 can include using 4DCT from treatment simulation as a priori knowledge, deformable image registration can be performed between a reference phase and other phases, resulting in a set of deformation vector fields (DVFs) (210).
  • a parametric PCA lung motion model can be obtained (220).
  • the set of DVFs can be represented efficiently by a few eigenvectors and coefficients obtained from the PCA lung motion model (230).
  • the GPU can perform image reconstruction using the obtained parametric PCA lung motion model (240).
  • the GPU can identify a set of optimal PCA coefficients such that the projection of the corresponding volumetric image matches with the measured x-ray projection (242).
  • the PCA coefficients are initialized to values predicted from the last few projections, taking into account the respiratory dynamics (244).
  • the PCA coefficients are then varied until the computed projection image of the new volumetric image matches the measured projection image (246).
  • new DVFs can be generated, which, when applied on the reference image, can lead to new volumetric images.
  • a volumetric image can be constructed from a single projection image by optimizing the PCA coefficients such that its computed projection matches the measured one.
  • the 3D location of the tumor can be derived by applying the inverted DVF on its position in the reference image (250).
  • FIG. 2C is another flow chart illustrating additional processes for performing image reconstruction using the obtained parametric PCA lung motion model that incorporates respiratory motion prediction.
  • a GPU can load image data including the projection matrix (P), which computes the projection image of an image f, the measured projection image (y), and the reference image (f0) (260).
  • the GPU can initialize the variable w according to equation 11 (262). This takes into account the respiratory motion.
  • the GPU can update a and b according to equation (3).
  • the GPU can compute the gradient ∂J/∂w (264).
  • the GPU can apply Armijo's rule to find the step size λ (265).
  • the GPU can determine whether the found value of λ is less than 0.1 (266). This determines whether the step size for the current iteration is sufficiently small (e.g., 0.1). Each PCA coefficient is preconditioned so that its standard deviation is 10 in the training set; thus a step size of 0.1 can be considered sufficiently small and does not alter the results significantly.
  • if so, the process terminates and the GPU can compute the inverse DVF (270).
  • the inverse DVF can be used to generate the reconstructed image (271), and the 3D location of tumor can be derived by applying the inverted DVF on its position in the reference image (250).
  • otherwise, the GPU can update w according to equation 4. Then the GPU can update x and f (268). The GPU can also terminate the process when the maximum number of iterations is reached (e.g., 10) (269). When the maximum number of iterations has been reached (269-Yes), the GPU can compute the inverse DVF (270) and output the reconstructed image (271). The 3D location of the tumor can be derived by applying the inverted DVF on its position in the reference image (250).
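The FIG. 2C loop (gradient evaluation, Armijo line search, small-step and maximum-iteration termination) can be sketched on a toy problem. A linear surrogate f(w) = f0 + B·w stands in for the actual warp model, and all matrices are illustrative random stand-ins, not the patent's data:

```python
import numpy as np

# Objective: find PCA coefficients w so that the computed projection
# P @ f(w) matches the measurement y, with J(w) = 0.5 * ||P f(w) - y||^2.
rng = np.random.default_rng(1)
n_pix, n_vox, k = 20, 50, 3
P = rng.standard_normal((n_pix, n_vox))        # projection matrix
B = rng.standard_normal((n_vox, k)) * 0.1      # PCA modes -> image change
f0 = rng.standard_normal(n_vox)                # reference image
y = P @ (f0 + B @ np.array([1.0, -2.0, 0.5]))  # "measured" projection

def J(w):                                      # data-fidelity objective
    r = P @ (f0 + B @ w) - y
    return 0.5 * float(r @ r)

def grad_J(w):
    return B.T @ (P.T @ (P @ (f0 + B @ w) - y))

w = np.zeros(k)                                # would be the predicted init
J0 = J(w)
for it in range(10):                           # max 10 iterations (269)
    g = grad_J(w)
    lam = 1.0                                  # Armijo backtracking (265)
    while J(w - lam * g) > J(w) - 1e-4 * lam * float(g @ g):
        lam *= 0.5
    w = w - lam * g
    if lam < 0.1:                              # small step: terminate (266)
        break
```

Each accepted step satisfies the Armijo sufficient-decrease condition, so the objective decreases monotonically until the step-size or iteration cap triggers termination.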
  • Various programming environments and Graphics Cards can be used.
  • compute unified device architecture (CUDA) was used as the programming environment and an NVIDIA Tesla C1060 GPU card was used as the Graphics Card.
  • Other examples of programming environment can include GPU programming languages such as OpenCL and other similar programming languages.
  • the PCA lung motion model based algorithm was tested using a non-uniform rational B-spline (NURBS) based cardiac-torso (NCAT) phantom.
  • This mathematical phantom is based on data from the Visible Human Project, and is very flexible, maintaining a high level of anatomical realism (e.g., a beating heart, detailed bronchial trees, etc.).
  • the respiratory motion was developed based on basic knowledge of respiratory mechanics.
  • the NCAT phantom was used instead of real clinical data because with the NCAT phantom, the ground truth volumetric image and the 3D tumor location are available at any given time of the breathing. This is important for an objective assessment of the accuracy of our algorithm.
  • a dynamic NCAT phantom composed of 10 breathing phases was generated as the training data, with a 3D tumor motion magnitude of 1.6 cm and a breathing period of 4 seconds.
  • the dimension of each volumetric image is 256×256×120 (voxel size: 2×2×2.5 mm³).
  • the end of exhale (EOE) phase was used as the reference image and deformable image registration (DIR) between the EOE phase and all other phases was performed.
  • the DIR algorithm used here is a fast demons algorithm implemented on GPU. Then PCA was performed on the nine DVFs from DIR and three PCA coefficients and eigenvectors were kept in the PCA lung motion model.
  • a volumetric image was reconstructed and the 3D tumor location was derived from each of 360 cone beam projections, which were generated from the NCAT phantom and uniformly distributed over one full gantry rotation.
  • the phantom has a 4 s breathing period and a 50% increase in breathing amplitude (3D tumor motion magnitude: 2.4 cm) relative to the training data.
  • the gantry rotation lasted one minute, resulting in 15 breathing cycles and 24 projections per cycle.
  • 24 volumetric images at 24 breathing phases were obtained. These 24 volumetric images were used as ground truth test images to evaluate the accuracy of the reconstructed images.
  • the imager has a physical size of 40×30 cm². For efficiency considerations, every measured projection image was down-sampled to a resolution of 200×150 (pixel size: 2×2 mm²).
  • the localization accuracy can be quantified by the 3D root-mean-square (RMS) error.
  • the accuracy of the described algorithm can be limited by the accuracy of training DVFs derived with the demons algorithm.
  • the potential accuracy that can be achieved is the accuracy of the DIR between the ground truth test image and the reference image using demons. Accordingly, the reference image is deformed to the ground truth test image using the same demons algorithm and the relative image error of the deformed image is computed against the ground truth test image. This error, termed as deformation error, in contrast to the reconstruction error of the reconstructed image, is used as the benchmark to evaluate our algorithm. Similarly, the 3D RMS deformation error can be computed.
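The two benchmark metrics named above can be written down directly; this is a minimal sketch, with array shapes as stated in the comments:

```python
import numpy as np

# Relative image error of a reconstructed (or deformed) image against the
# ground truth, and the 3D root-mean-square (RMS) tumor localization error.
def relative_image_error(img, truth):
    return np.linalg.norm(img - truth) / np.linalg.norm(truth)

def rms_3d_error(est, truth):
    # est, truth: (N, 3) tumor positions over N projections, in mm
    d = np.linalg.norm(np.asarray(est) - np.asarray(truth), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))
```

The deformation error is obtained by feeding the demons-deformed reference image into the same `relative_image_error`, giving the benchmark against which the reconstruction error is compared.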
  • FIGS. 3A, 3B, 4A, 4B, 4C, 4D, 4E, 4F, and 5 show the data for the PCA lung motion model based process.
  • FIGS. 3A and 3B show the "measured" projection image, simulated from the NCAT phantom at the end of the inhale phase, so that it is maximally different from the reference image.
  • FIG. 3A shows a "measured" projection of the test image at a right posterior oblique (RPO) angle (300).
  • FIG. 3B shows an objective value (310) and the relative image reconstruction error (320) at each iteration given the image in FIG. 3A. Ten iterations were performed and further iterations were found to have little influence on the results.
  • the relative image reconstruction error is initially 35% and approaches the deformation error (10.9% compared with 8.3%) after 9 iterations. A relative image error below 10% is usually visually acceptable.
  • FIGS. 4A, 4B, 4C, 4D, 4E and 4F show the sagittal and coronal views of the absolute difference images between the reference image and three other images, namely the ground truth test image, the image reconstructed by the described algorithm, and the deformed reference image using demons.
  • the 3D RMS tumor localization error is about 0.9 mm.
  • FIGS. 4A, 4B and 4C show the sagittal view of the absolute difference image between: the ground truth test and reference images (400), the ground truth test image and the image reconstructed using a single projection (410), and the ground truth test image and the deformed reference image using demons (420).
  • The tumor is a round object near the center of the slice.
  • FIGS. 4D, 4E and 4F show the coronal views (430, 440 and 450, respectively) of the same absolute difference images as in FIGS. 4A, 4B and 4C. The tumor is a round object in the right lung.
  • the average 3D tumor localization error is 0.8 mm ⁇ 0.5 mm and is not affected by projection angles (see FIG. 5).
  • the average relative image deformation error is 5.4% ⁇ 2.2% and 3D RMS tumor localization error from deformation is 0.2 mm ⁇ 0.1 mm.
  • the periodic pattern in the relative image reconstruction error of the described algorithm is similar to the breathing pattern. This may be due to the accuracy of the reconstructed image being dependent on the difference between the current patient geometry and the reference one. So if the difference is large, then the image error will also be large, and vice versa. There does not appear to be such a pattern in the 3D localization error.
  • FIG. 5 shows, on the top row, the relative image error between the ground truth test image and reference image (500) compared to the relative image error between an image reconstructed using the described algorithm and reference image (502) as a function of cone beam projection angle.
  • FIG. 5 shows, on the bottom row, the 3D localization error between the ground truth test image and reference image (510) compared to the 3D localization error between an image reconstructed using the described algorithm and reference image (512) as a function of cone beam projection angle.
  • the PCA coefficients are initialized to those of the previous frame, considering the high-frequency image acquisition (6 Hz) relative to breathing.
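Initializing from the recent coefficient history can be sketched as follows; a simple linear extrapolation stands in for the actual prediction model (equation 11 in the text), which is not reproduced here:

```python
import numpy as np

# Warm-start the PCA coefficients for the new projection from the history.
def predict_initial_coeffs(history):
    h = np.asarray(history, dtype=float)   # (n_frames, k) recent coefficients
    if len(h) < 2:
        return h[-1]                       # no trend yet: reuse last frame
    return h[-1] + (h[-1] - h[-2])         # extrapolate the last trend
```

At 6 Hz sampling relative to a ~4 s breathing period, consecutive frames change little, so even this crude warm start shortens the optimization.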
  • the image reconstruction and tumor localization for each projection can be achieved in less than 0.4 seconds using an NVIDIA Tesla C1060 GPU card. Particularly, depending on the difference between the test image and reference image, the algorithm converges in between 0.17 and 0.35 seconds (average: 0.24 seconds). Compared with an implementation in MATLAB 7.7 running on a PC with a quad-core 2.67 GHz CPU, which takes around 15 minutes to converge, the GPU version achieves a speedup factor of more than 2500.
  • the dynamic NCAT phantom generated for testing purposes has a regular breathing pattern. Additionally, a dynamic NCAT phantom can be generated with irregular breathing patterns for testing purposes.
  • the accuracy of the algorithm can be tested on clinical data. Factors to consider for the algorithm can include: 1) the x-ray energies used to generate the training volumetric images (such as 4DCT) and cone beam projection images may be different; and 2) there will be scattering and other physical effects that may degrade the quality of the cone beam projection images in patient data. In both cases, a linear relationship between the image intensity of the computed and measured projection images may not be accurate. Some preprocessing (e.g., a nonlinear transform) may be needed.
  • the respiratory motion prediction incorporating algorithm was first tested using a non-uniform rational B-spline (NURBS) based cardiac-torso (NCAT) phantom.
  • Three fundamental breathing parameters were considered: namely amplitude, period, and baseline shift relative to the training dataset.
  • for regular breathing, the same breathing parameters as those in the training set were first used to generate testing volumetric images. Then the breathing pattern was systematically changed such that each time, only one of the three parameters was changed while the other two were kept the same as those in the training set. In doing so, the effects of each of the three breathing parameters can be evaluated.
  • the different breathing parameters considered include: amplitude (diaphragm motion: 1 cm and 3 cm), period (3 s and 5 s), and baseline shift (1 cm toward the EOE phase and EOI phase).
  • RPM refers to the Real-time Position Management system, which provides a measured breathing signal.
  • the dynamic NCAT phantom motion can be generated by specifying the diaphragm SI motion.
  • the scaled RPM signal was then used as a substitute for the diaphragm SI motion of the NCAT phantom.
  • the breathing signal from RPM has a varying breathing amplitude (between 1.0 and 2.5 cm), period (between 3.5 and 6.2 s), and a baseline shift of about 1 cm.
  • 360 cone beam projections were simulated using the Siddon's algorithm from angles that are uniformly distributed over one full gantry rotation (i.e., the angular spacing between consecutive projections is 1°). For instance, the cone beam projection at angle 180° corresponds to the dynamic phantom at 30 sec, and the projection at angle 360° corresponds to the dynamic phantom at 60 sec, and so on. Since the gantry rotation lasts 60 seconds, the sampling rate of the cone beam projections is 6 Hz.
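The index-to-angle-and-time mapping described above (360 projections at 1° spacing over a 60 s rotation, i.e. 6 Hz sampling) can be sketched directly:

```python
# One full gantry rotation in one minute: projection i (1-based) is taken
# at angle i degrees and time i/6 seconds.
def projection_angle_and_time(index_1based, n_proj=360, scan_time_s=60.0):
    angle_deg = index_1based * 360.0 / n_proj
    t_s = index_1based * scan_time_s / n_proj
    return angle_deg, t_s
```

This reproduces the examples in the text: projection 180 corresponds to the phantom at 30 s, and projection 360 to the phantom at 60 s.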
  • Quantum noise was not added in the simulated x-ray projections for the digital phantom cases. However, intrinsic quantum noise is present in all physical phantom and patient projections and no noise reduction was attempted on those images.
  • the imager has a physical size of 40×30 cm². For efficiency considerations, every measured projection image was down-sampled to a resolution of 200×150 (pixel size: 2×2 mm²).
  • the respiratory motion prediction incorporating algorithm was also tested on a simple physical respiratory phantom.
  • the phantom consisted of a cork block resting on a platform that can be programmed to undergo translational motion in one dimension. This platform was used to simulate the SI respiratory motion of a lung tumor. Inside the cork block were embedded several tissue-like objects including a 2.5 cm water balloon which was used as the target for localization.
  • 4DCT of the physical phantom was acquired using a GE four-slice LightSpeed CT scanner (GE Medical Systems, Milwaukee, WI, USA) and the RPM system. During the 4DCT scan, the physical phantom moved according to a sinusoidal function along the SI direction, with a peak-to-peak amplitude of 1.0 cm. In order to calculate the correct projection matrix and image for the algorithm, the CT image corresponding to the EOE phase in 4DCT was resampled in space to have the same spatial resolution as CBCT and then rigidly registered to the CBCT. Since only rigid registration was performed at this stage, paired kV radiographs may also be used to achieve this task during patient setup.
  • Cone-beam projections were acquired using Varian On-Board Imager 1.4 in full-fan mode with 100 kVp, 80 mA and 25 ms exposure time.
  • the x-ray imager has a physical size of 40×30 cm² and each projection image has a resolution of 1024×768.
  • two CBCT scans were performed for the physical phantom: one with regular breathing, and the other with irregular breathing. For the case of regular breathing, the phantom moved according to a sinusoidal function along the SI direction, with an increased peak-to-peak amplitude of 1.5 cm.
  • the phantom moved according to the same RPM signal used for the digital phantom as described before, with the peak-to-peak amplitude scaled to 1.5 cm.
  • 359 cone beam x-ray projections were acquired over an arc of around 200° at a frequency of about 10.7 Hz.
  • the CBCT scans for the physical phantom lasted for about 36 s, so only the first 36 s of the RPM signal was used during the scan. Since the motion of the physical phantom during the CBCT scan is known, it was used as the ground truth to evaluate the accuracy of target localization.
  • the respiratory motion prediction incorporating algorithm was evaluated with five patient data sets. Both 4DCT and CBCT were acquired using the same imaging systems as for the physical phantom. The cone beam projections were taken with the imaging system in half-fan mode with 110 kVp, 20 mA and 20 ms exposure time. Similarly to the physical phantom, the CT at the EOE phase was resampled in space and then rigidly registered to the CBCT according to the bones. The same preprocessing was applied to each of the raw cone beam projections of the patients as for the physical phantom, namely cutoff, downsampling, and a logarithm transform, except for the bow-tie filter correction, which was based on the half-fan mode. For patient 2, however, since the computed projections of the 4DCT did not have sufficient longitudinal coverage compared with the cone beam projections, an additional 3 cm was cut off from the raw cone beam projections in both the superior and inferior directions.
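The per-projection preprocessing named above (cutoff, downsampling, and a logarithm transform with the sign reversed, as described later for FIG. 11) can be sketched as follows; the cutoff margin and downsampling factor are illustrative parameters, not the values used in the study:

```python
import numpy as np

def preprocess_projection(raw, cut=2, ds=2, eps=1e-6):
    img = raw[cut:-cut, cut:-cut] if cut else raw   # cutoff at the edges
    img = -np.log(np.maximum(img, eps))             # line integrals, sign reversed
    # downsample by block averaging for efficiency
    h, w = (img.shape[0] // ds) * ds, (img.shape[1] // ds) * ds
    return img[:h, :w].reshape(h // ds, ds, w // ds, ds).mean(axis=(1, 3))
```

The reversed-sign logarithm makes denser tissue appear brighter, so the preprocessed measurement is directly comparable to the computed projection of the CT volume.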
  • the estimated 3D tumor location was projected onto the 2D imager and compared with that manually defined by a clinician.
  • the patients chosen had tumors that were visible to the clinician in some of the cone beam projections. From the clinician-defined contour, the tumor centroid position was calculated for each projection and used as the ground truth to evaluate the respiratory motion prediction incorporating algorithm.
  • the tumor localization error was calculated along the axial and tangential directions, both scaled back to the mean tumor position.
  • the axial direction is defined to be along the axis of rotation on the imager and the tangential direction is perpendicular to the axis of rotation on the imager and tangential to the gantry rotation.
  • the tumor motion along the axial direction on the imager is roughly a scaled version of its SI motion in the patient coordinate system.
  • the tumor motion along the tangential direction is a mixture of its AP and LR motions depending on the imaging angle.
  • the tumor motion along the tangential direction can be roughly a scaled version of either the AP motion (for imaging angles near LR or RL) or LR motion (for imaging angles near AP or PA).
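The angular mixing just described can be sketched as a simple projection; the gantry-angle convention here (0° corresponding to an AP beam, so the tangential axis sees pure LR motion at that angle) is an illustrative assumption, not the study's stated convention:

```python
import numpy as np

# The tangential imager direction sees a gantry-angle-dependent mix of the
# tumor's AP and LR motion; the axial direction maps to SI motion.
def tangential_motion(ap_mm, lr_mm, gantry_deg):
    th = np.radians(gantry_deg)
    return lr_mm * np.cos(th) + ap_mm * np.sin(th)
```

At angles near AP/PA the LR component dominates, and at angles near LR/RL the AP component dominates, matching the qualitative description above.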
  • the uncertainty in the definition of ground truth can be large, and may depend on the angle at which a projection is taken.
  • a rough estimate of the uncertainty in the ground truth based on comparing contours drawn from two observers suggests that the error in centroid position is about 1-2 mm on average.
  • Table 1 summarizes all the testing cases for the respiratory motion prediction incorporating algorithm, including 8 digital phantom cases, 2 physical phantom cases, and 5 lung cancer patient cases.
  • the table lists the relevant breathing parameters that were used during the CBCT scan for tumor localization purposes.
  • the table also includes information on the tumor size, location, as well as motion characteristics as measured in 4DCT for the 5 lung cancer patients.
  • the breathing parameters in the training dataset are: 2 cm amplitude, 4 s period, and 0 cm baseline shift; for cases 2 through 7, all breathing parameters are the same as in training dataset except the one indicated in the table. All parameters for lung cancer patients were measured in 4DCT.
  • Table 2 summarizes the localization accuracy and computational time for all testing cases. For the description of each case, refer to Table 1. For Cases 1 through 10 (digital and physical phantoms), the localization errors were computed in 3D; for Cases 11 through 15 (lung cancer patients), the localization errors were computed on the axial and tangential directions with respect to the imager.
  • FIGS. 6A, 6B, 6C, 6D, 6E, 6F, 6G, and 6H show the SI tumor position estimated by the respiratory motion prediction incorporating algorithm as well as the ground truth for the digital respiratory phantom for Cases 1 through 8 (600, 610, 620, 630, 640, 650, 660, and 670).
  • the numerical results for localization accuracy and computational time are summarized in Table 2.
  • the average 3D tumor localization error for regular breathing with different breathing parameters is less than 1 mm, which does not seem to be affected by amplitude change, period change, or baseline shift. The same holds true for irregular breathing.
  • the average computation time for both regular and irregular breathing ranges between 0.19 and 0.26 seconds using an NVIDIA Tesla C1060 GPU card. Particularly, the average computation time is 0.22 seconds in the case of regular breathing with a breathing amplitude of 3 cm. This is about 10% less than that using the PCA coefficients from the previous projection. In the case of irregular breathing, the average computation time is 0.26 ± 0.06 seconds. The slight increase in computation time for irregular breathing was mainly due to a decreased accuracy of the prediction model used for parameter initialization.
  • FIGS. 7A and 7B show the estimated and ground truth tumor position in the SI direction for both regular (700) and irregular (710) breathing.
  • the numerical results for localization accuracy and computational time are summarized in Table 2 (Cases 9 and 10).
  • the average 3D tumor localization error for both regular and irregular breathing is less than 1 mm.
  • Results for the five lung cancer patients are shown in FIGS. 8A, 8B, 8C, 8D, 8E, 8F, 8G, 8H, 8I and 8J, where the tumor positions on the axial (800, 820, 840, 860 and 880) and tangential directions (810, 830, 850, 870 and 890) are shown separately.
  • the solid circles represent the tumor positions estimated by the respiratory motion prediction incorporating algorithm and the ground truth is shown as the solid lines.
  • the two arrows (a and b) in 800 indicate the two angles where the image reconstruction results are shown in FIGS. 9A, 9B, 9C, 9D, 9E, 9F.
  • FIGS. 9A, 9B, 9C, 9D, 9E, and 9F show the image reconstruction and tumor localization results for patient 1 at an EOE phase and an EOI phase (indicated in FIG. 8, Case 11), as well as the corresponding cone beam x-ray projections. Although it is hard to see the tumor on the projections, it is clearly visible in the coronal and sagittal slices of the reconstructed images.
  • FIG. 9A shows the raw cone beam projection (900)
  • FIGS. 9C and 9E show the corresponding coronal (920) and sagittal (940) views of the reconstructed image at angle -146.5° at an EOE phase (arrow "a" in FIG. 8).
  • FIG. 9B shows the raw cone beam projection (910)
  • FIGS. 9D and 9F show the corresponding coronal (930) and sagittal (950) views of the reconstructed image at angle -74.5° at an EOI phase (arrow "b" in FIG. 8).
  • the arrows indicate where the tumor was.
  • Solid lines represent the estimated tumor SI position at the current phase
  • the dashed lines represent the estimated tumor SI position at the other phase.
  • the respiratory motion prediction incorporating algorithm can localize 3D tumor positions in near real-time from a single x-ray image.
  • the respiratory motion prediction incorporating algorithm was systematically tested on digital and physical respiratory phantoms as well as lung cancer patients.
  • the average 3D tumor localization error is below 1 mm in the case of digital and physical phantoms.
  • the average tumor localization error is below 2 mm in both the axial and tangential directions.
  • the tumor localization error can arise from two different processes: 1), the PCA lung motion model; and 2), the 2D to 3D deformable registration between the projection and the reference image.
  • the ground truth 3D lung motion of the digital NCAT phantom was used to solve for the optimal PCA coefficients by directly minimizing the mean square error between the left and right sides of Eq. (1).
  • This procedure bypassed the 2D to 3D registration process, and thus gives the tumor localization error solely due to the PCA lung motion model. In Case 1, this error is about 0.28 mm over all the projections.
  • the error due to the 2D to 3D registration process can be obtained, which is about 0.75 mm. Therefore, the overall tumor localization error of about 0.8 mm may be mainly attributed to the 2D to 3D registration process. In comparison, the error introduced by the PCA lung motion model is insignificant.
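The decomposition above is consistent with treating the two error sources as independent, so that they add in quadrature:

```python
import math

# 0.28 mm (PCA model) and 0.75 mm (2D-3D registration) combined in
# quadrature give roughly the observed overall error of about 0.8 mm.
total_mm = math.hypot(0.28, 0.75)
```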
  • FIGS. 11A, 11B, 11C, 11D, 11E and 11F show scatter plots between the image intensities of the preprocessed cone beam projection and of the computed projection of the reconstructed image at two different projection angles for patient 1, corresponding to the two projections shown in FIGS. 9A, 9B, 9C, 9D, 9E and 9F.
  • FIGS. 11A and 11B show measured cone beam projections after preprocessing of the image intensities (logarithm transform followed by reversion of the sign) (1100 and 1110).
  • FIGS. 11C and 11D show computed projections of the reconstructed image (1120 and 1130).
  • FIGS. 11E and 11F show scatter plots between the intensities of the images shown in FIGS. 11A, 11B, 11C and 11D (1140 and 1150).
  • FIGS. 11A, 11C and 11E correspond to the EOE phase in FIGS. 8A, 8B, 8C, 8D, 8E, 8F, 8G, 8H, 8I and 8J (arrow "a").
  • FIGS. 11B, 11D and 11F correspond to the EOI phase in FIGS. 8A, 8B, 8C, 8D, 8E, 8F, 8G, 8H, 8I and 8J (arrow "b").
  • a linear model is sufficient in both cases: more than 90% of the variance can be explained by a simple linear model. However, in the right subplot, because of more apparent interference from the treatment couch in the cone beam projection (which is absent in the computed projections), especially near the right edge, the linear model is less accurate than in the left subplot. This effect can be mitigated by pre-scanning the treatment couch and adding it to the reference image so that the couch will also be present in the computed projections.
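The ">90% of the variance" check above can be sketched as a least-squares line fit between the two intensity sets; the data here are a synthetic illustration, not the patient projections:

```python
import numpy as np

# Fit a line between computed and measured intensities and report the
# fraction of variance it explains (R^2).
def linear_fit_r2(measured, computed):
    x, y = np.ravel(computed), np.ravel(measured)
    a, b = np.polyfit(x, y, 1)          # least-squares line y ~ a*x + b
    resid = y - (a * x + b)
    return 1.0 - resid.var() / y.var()

x = np.linspace(0.0, 1.0, 50)
r2 = linear_fit_r2(measured=2.0 * x + 3.0, computed=x)  # perfectly linear toy data
```

Couch interference, scatter, and energy differences would show up as structure in the residuals and a lower R², motivating the nonlinear preprocessing mentioned earlier.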
  • a respiratory motion prediction incorporating algorithm has been described for 3D tumor localization from a single x-ray projection image.
  • the localization error can be identified to be less than 1 mm on average and around 2 mm at the 95th percentile for both digital and physical phantoms, and within 2 mm on average and 4 mm at the 95th percentile for five lung cancer patients in both the axial and tangential directions.
  • the localization accuracy is not affected by the breathing pattern, be it regular or irregular.
  • the 3D localization can be achieved on a sub-second scale on GPU, requiring approximately 0.1 - 0.3 s for each x-ray projection.
  • FIG. 12 is a block diagram of an exemplary system 1200 for implementing real-time volumetric image reconstruction and 3D tumor localization for lung cancer radiotherapy.
  • the system 1200 can be implemented as a computing system that includes at least a central processing unit (CPU) 1210, a graphics processing unit (GPU) 1220, an output unit 1230 and a memory/storage unit 1240. These system components can communicate with each other over an interconnect system such as a bus 1250.
  • the GPU 1220 can perform the PCA lung motion model based process (see FIGS. 1A and 1B) and the PCA lung motion model based process that incorporates respiratory motion prediction (see FIGS. 2A, 2B and 2C).
  • the GPU 1220 can load the needed data from the memory/storage 1240 and output the results of the processes through the output unit 1230.
  • Various programming environments and Graphics Cards can be used.
  • compute unified device architecture (CUDA) was used as the programming environment and an NVIDIA Tesla C1060 GPU card was used as the Graphics Card.
  • the programming environment can include GPU programming languages such as OpenCL and other similar programming languages.
  • the memory/storage unit 1240 can include various volatile and non-volatile memory devices including the various types of read only memories (ROMs), random access memories (RAMs), hard disks, Flash memories, etc.
  • the output unit 1230 can include various output devices including liquid crystal displays, printers, storage devices, etc.
  • Implementations of the subject matter and the functional operations described in this specification can be implemented in various imaging and radiation therapy systems, digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them.
  • Implementations of the subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus.
  • the computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them.
  • data processing apparatus encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers.
  • the apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
  • a computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • a computer program does not necessarily correspond to a file in a file system.
  • a program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code).
  • a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
  • a processor will receive instructions and data from a read only memory or a random access memory or both.
  • the essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data.
  • a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks.
  • a computer need not have such devices.
  • Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices.
  • the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
  • the described PCA lung motion model based algorithm and respiratory motion prediction incorporating algorithm can also use surface image, surface marker, implanted marker, diaphragm, lung volume, and air flow to reconstruct a volumetric image and to derive the 3D tumor position.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Theoretical Computer Science (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Pathology (AREA)
  • Veterinary Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Medicinal Chemistry (AREA)
  • Mathematical Optimization (AREA)
  • Chemical & Material Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Algebra (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Dentistry (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

Techniques and systems are disclosed for implementing volumetric image reconstruction and real-time 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy. In one aspect, a method includes performing deformable image registration between a reference phase and the remaining N-1 phases for a set of volumetric images of a patient at N breathing phases, to obtain a set of N-1 deformation vector fields (DVFs). The set of DVFs can be represented efficiently by a few eigenvectors and coefficients obtained from principal component analysis (PCA). Further, the PCA coefficients are varied to generate new DVFs which, when applied to a reference image, yield new volumetric images.
PCT/US2011/033133 2010-04-19 2011-04-19 Volumetric image reconstruction and real-time 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy WO2011133606A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US32570010P 2010-04-19 2010-04-19
US61/325,700 2010-04-19

Publications (2)

Publication Number Publication Date
WO2011133606A2 true WO2011133606A2 (fr) 2011-10-27
WO2011133606A3 WO2011133606A3 (fr) 2011-12-22

Family

ID=44834774

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2011/033133 WO2011133606A2 (fr) 2010-04-19 2011-04-19 Reconstruction d'images volumétriques et localisation 3d d'une tumeur en temps réel sur la base d'une unique image obtenue par projection de rayons x à des fins de radiothérapie du cancer du poumon

Country Status (1)

Country Link
WO (1) WO2011133606A2 (fr)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013163391A1 (fr) * 2012-04-25 2013-10-31 The Trustees Of Columbia University In The City Of New York Surgical structured-light system
CN103714318A (zh) * 2013-12-13 2014-04-09 谭玉波 Three-dimensional face registration method
CN104657333A (zh) * 2015-02-11 2015-05-27 中国海洋大学 GPU-based streamline visualization algorithm for dynamic two-dimensional vector fields
CN104899914A (zh) * 2015-06-26 2015-09-09 东北大学 Texture generation method based on streamline growth
CN105640582A (zh) * 2016-03-02 2016-06-08 中国人民解放军第四军医大学 Deep-tissue x-ray-excited multispectral tomographic imaging system and method
RU2706983C2 (ru) * 2015-01-28 2019-11-21 Электа, Инк. Three-dimensional localization and tracking for adaptive radiation therapy
US10803987B2 2018-11-16 2020-10-13 Elekta, Inc. Real-time motion monitoring using deep neural network
US10835761B2 2018-10-25 2020-11-17 Elekta, Inc. Real-time patient motion monitoring using a magnetic resonance linear accelerator (MR-LINAC)
US11083913B2 2018-10-25 2021-08-10 Elekta, Inc. Machine learning approach to real-time patient motion monitoring
CN113499091A (zh) * 2021-08-19 2021-10-15 四川大学华西医院 Method and system for predicting the correlation between a patient's body-surface motion and internal tumor motion, and the internal mobility of the tumor

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6565080B2 (ja) * 2015-08-11 2019-08-28 東芝エネルギーシステムズ株式会社 Radiotherapy apparatus, operating method therefor, and program

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070086636A1 (en) * 2005-10-17 2007-04-19 Keall Paul J Method and system for using computed tomography to test pulmonary function
WO2008135730A1 (fr) * 2007-05-03 2008-11-13 Ucl Business Plc Image registration method
US20090034819A1 (en) * 2007-07-30 2009-02-05 Janne Ilmari Nord Systems and Methods for Adapting a Movement Model Based on an Image
US20090253980A1 (en) * 2008-04-08 2009-10-08 General Electric Company Method and apparatus for determining the effectiveness of an image transformation process

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013163391A1 (fr) * 2012-04-25 2013-10-31 The Trustees Of Columbia University In The City Of New York Surgical structured-light system
CN103714318A (zh) * 2013-12-13 2014-04-09 谭玉波 Three-dimensional face registration method
US10987522B2 2015-01-28 2021-04-27 Elekta, Inc. Three dimensional localization and tracking for adaptive radiation therapy
RU2706983C2 (ru) * 2015-01-28 2019-11-21 Электа, Инк. Three-dimensional localization and tracking for adaptive radiation therapy
CN104657333A (zh) * 2015-02-11 2015-05-27 中国海洋大学 GPU-based streamline visualization algorithm for dynamic two-dimensional vector fields
CN104899914A (zh) * 2015-06-26 2015-09-09 东北大学 Texture generation method based on streamline growth
CN105640582A (zh) * 2016-03-02 2016-06-08 中国人民解放军第四军医大学 Deep-tissue x-ray-excited multispectral tomographic imaging system and method
US10835761B2 2018-10-25 2020-11-17 Elekta, Inc. Real-time patient motion monitoring using a magnetic resonance linear accelerator (MR-LINAC)
CN113168688A (zh) * 2018-10-25 2021-07-23 医科达有限公司 Real-time patient motion monitoring using a magnetic resonance linear accelerator (MR-LINAC)
US11083913B2 2018-10-25 2021-08-10 Elekta, Inc. Machine learning approach to real-time patient motion monitoring
US11491348B2 2018-10-25 2022-11-08 Elekta, Inc. Real-time patient motion monitoring using a magnetic resonance linear accelerator (MRLINAC)
US11547874B2 2018-10-25 2023-01-10 Elekta, Inc. Machine learning approach to real-time patient motion monitoring
US10803987B2 2018-11-16 2020-10-13 Elekta, Inc. Real-time motion monitoring using deep neural network
US11342066B2 2018-11-16 2022-05-24 Elekta, Inc. Real-time motion monitoring using deep neural network
CN113499091A (zh) * 2021-08-19 2021-10-15 四川大学华西医院 Method and system for predicting the correlation between a patient's body-surface motion and internal tumor motion, and the internal mobility of the tumor
CN113499091B (zh) * 2021-08-19 2023-08-15 四川大学华西医院 Method and system for predicting the correlation between a patient's body-surface motion and internal tumor motion, and the internal mobility of the tumor

Also Published As

Publication number Publication date
WO2011133606A3 (fr) 2011-12-22

Similar Documents

Publication Publication Date Title
US11954761B2 (en) Neural network for generating synthetic medical images
WO2011133606A2 (fr) Volumetric image reconstruction and real-time 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy
Li et al. 3D tumor localization through real‐time volumetric x‐ray imaging for lung cancer radiotherapy
JP7350422B2 (ja) Adaptive radiotherapy using composite imaging slices
RU2675678C1 Motion management in an MRI-guided linear accelerator (LINAC)
Marchant et al. Accuracy of radiotherapy dose calculations based on cone-beam CT: comparison of deformable registration and image correction based methods
US10512441B2 (en) Computed tomography having motion compensation
WO2014144019A1 (fr) Methods, systems, and computer-readable media for real-time 2D/3D deformable registration using metric learning
Capostagno et al. Task-driven source–detector trajectories in cone-beam computed tomography: II. Application to neuroradiology
US10388036B2 (en) Common-mask guided image reconstruction for enhanced four-dimensional cone-beam computed tomography
US20150174428A1 (en) Dose deformation error calculation method and system
Christoffersen et al. Registration-based reconstruction of four-dimensional cone beam computed tomography
Shao et al. Real-time liver tumor localization via a single x-ray projection using deep graph neural network-assisted biomechanical modeling
EP3468668A1 (fr) Soft-tissue tracking using physiological volume rendering
Dhou et al. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models
Zhang An unsupervised 2D–3D deformable registration network (2D3D-RegNet) for cone-beam CT estimation
Zhang et al. Reducing scan angle using adaptive prior knowledge for a limited-angle intrafraction verification (LIVE) system for conformal arc radiotherapy
EP2742483B1 (fr) Image processing method
Lappas et al. Automatic contouring of normal tissues with deep learning for preclinical radiation studies
CN109414234A (zh) System and method for generating 2D projections from a previously generated 3D dataset
Chou et al. Claret: A fast deformable registration method applied to lung radiation therapy
Park et al. Common-mask guided image reconstruction (c-MGIR) for enhanced 4D cone-beam computed tomography
Birkner et al. Analysis of the rigid and deformable component of setup inaccuracies on portal images in head and neck radiotherapy
Holler A Method for Predicting Dose Changes for HN Treatment Using Surface Imaging
Sun et al. CT Reconstruction from Few Planar X-Rays with Application Towards Low-Resource Radiotherapy

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11772591

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11772591

Country of ref document: EP

Kind code of ref document: A2