EP4329875A1 - Real-time anatomic position monitoring for radiotherapy treatment - Google Patents
Real-time anatomic position monitoring for radiotherapy treatment

Info
- Publication number
- EP4329875A1 (application EP22796941.7A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- dimensional image
- radiotherapy
- features
- image data
- interest
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000001959 radiotherapy Methods 0.000 title claims abstract description 229
- 238000012544 monitoring process Methods 0.000 title claims abstract description 31
- 230000033001 locomotion Effects 0.000 claims abstract description 135
- 238000000034 method Methods 0.000 claims abstract description 108
- 238000010801 machine learning Methods 0.000 claims abstract description 72
- 238000012549 training Methods 0.000 claims abstract description 71
- 230000009466 transformation Effects 0.000 claims abstract description 46
- 238000012545 processing Methods 0.000 claims description 96
- 238000003384 imaging method Methods 0.000 claims description 60
- 238000000605 extraction Methods 0.000 claims description 30
- 238000003860 storage Methods 0.000 claims description 30
- 238000002591 computed tomography Methods 0.000 claims description 27
- 239000013598 vector Substances 0.000 claims description 25
- 230000005291 magnetic effect Effects 0.000 claims description 22
- 230000008569 process Effects 0.000 claims description 17
- 210000003484 anatomy Anatomy 0.000 claims description 15
- 238000013519 translation Methods 0.000 claims description 10
- 230000009467 reduction Effects 0.000 claims description 8
- 238000004458 analytical method Methods 0.000 claims description 7
- 238000000844 transformation Methods 0.000 claims description 6
- 230000008859 change Effects 0.000 claims description 4
- 238000013507 mapping Methods 0.000 claims description 3
- 238000002560 therapeutic procedure Methods 0.000 abstract description 18
- 230000015654 memory Effects 0.000 description 41
- 230000005855 radiation Effects 0.000 description 35
- 238000004891 communication Methods 0.000 description 31
- 238000004422 calculation algorithm Methods 0.000 description 22
- 230000000875 corresponding effect Effects 0.000 description 19
- 206010028980 Neoplasm Diseases 0.000 description 14
- 238000005259 measurement Methods 0.000 description 11
- 238000013459 approach Methods 0.000 description 10
- 238000012384 transportation and delivery Methods 0.000 description 10
- 230000006870 function Effects 0.000 description 9
- 238000000513 principal component analysis Methods 0.000 description 8
- 238000013473 artificial intelligence Methods 0.000 description 7
- 230000002829 reductive effect Effects 0.000 description 7
- 210000000056 organ Anatomy 0.000 description 6
- 230000029058 respiratory gaseous exchange Effects 0.000 description 6
- 238000004590 computer program Methods 0.000 description 5
- 238000007408 cone-beam computed tomography Methods 0.000 description 5
- 230000000670 limiting effect Effects 0.000 description 5
- 230000003287 optical effect Effects 0.000 description 5
- 230000008901 benefit Effects 0.000 description 4
- 238000002059 diagnostic imaging Methods 0.000 description 4
- 230000003068 static effect Effects 0.000 description 4
- 238000002604 ultrasonography Methods 0.000 description 4
- 238000013528 artificial neural network Methods 0.000 description 3
- 230000005540 biological transmission Effects 0.000 description 3
- 238000013170 computed tomography imaging Methods 0.000 description 3
- 229940079593 drug Drugs 0.000 description 3
- 239000003814 drug Substances 0.000 description 3
- 238000002786 image-guided radiation therapy Methods 0.000 description 3
- 230000036961 partial effect Effects 0.000 description 3
- 238000002600 positron emission tomography Methods 0.000 description 3
- 238000007637 random forest analysis Methods 0.000 description 3
- 230000009471 action Effects 0.000 description 2
- 238000007796 conventional method Methods 0.000 description 2
- 230000002596 correlated effect Effects 0.000 description 2
- 238000013461 design Methods 0.000 description 2
- 238000010586 diagram Methods 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 238000002594 fluoroscopy Methods 0.000 description 2
- 238000002599 functional magnetic resonance imaging Methods 0.000 description 2
- 238000012417 linear regression Methods 0.000 description 2
- 238000002595 magnetic resonance imaging Methods 0.000 description 2
- 239000000463 material Substances 0.000 description 2
- 230000007246 mechanism Effects 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 238000001208 nuclear magnetic resonance pulse sequence Methods 0.000 description 2
- 238000002203 pretreatment Methods 0.000 description 2
- 230000004044 response Effects 0.000 description 2
- 230000001360 synchronised effect Effects 0.000 description 2
- 238000012546 transfer Methods 0.000 description 2
- 230000001960 triggered effect Effects 0.000 description 2
- 238000012285 ultrasound imaging Methods 0.000 description 2
- 241001465754 Metazoa Species 0.000 description 1
- 238000005481 NMR spectroscopy Methods 0.000 description 1
- 238000012879 PET imaging Methods 0.000 description 1
- 230000006978 adaptation Effects 0.000 description 1
- 230000003044 adaptive effect Effects 0.000 description 1
- 239000000872 buffer Substances 0.000 description 1
- 239000003990 capacitor Substances 0.000 description 1
- 238000000576 coating method Methods 0.000 description 1
- 238000010276 construction Methods 0.000 description 1
- 239000002872 contrast media Substances 0.000 description 1
- 230000001276 controlling effect Effects 0.000 description 1
- 230000008878 coupling Effects 0.000 description 1
- 238000010168 coupling process Methods 0.000 description 1
- 238000005859 coupling reaction Methods 0.000 description 1
- 238000002790 cross-validation Methods 0.000 description 1
- 230000001186 cumulative effect Effects 0.000 description 1
- 238000003066 decision tree Methods 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 238000013136 deep learning model Methods 0.000 description 1
- 238000009792 diffusion process Methods 0.000 description 1
- 238000006073 displacement reaction Methods 0.000 description 1
- 238000009826 distribution Methods 0.000 description 1
- 238000013535 dynamic contrast enhanced MRI Methods 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000011156 evaluation Methods 0.000 description 1
- 239000000835 fiber Substances 0.000 description 1
- 238000000799 fluorescence microscopy Methods 0.000 description 1
- 230000007274 generation of a signal involved in cell-cell signaling Effects 0.000 description 1
- 230000036541 health Effects 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
- 150000002500 ions Chemical class 0.000 description 1
- 239000004973 liquid crystal related substance Substances 0.000 description 1
- 238000007477 logistic regression Methods 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 230000007935 neutral effect Effects 0.000 description 1
- 210000000920 organ at risk Anatomy 0.000 description 1
- 230000008520 organization Effects 0.000 description 1
- 239000002245 particle Substances 0.000 description 1
- 238000002360 preparation method Methods 0.000 description 1
- 238000002673 radiosurgery Methods 0.000 description 1
- 238000000611 regression analysis Methods 0.000 description 1
- 230000000241 respiratory effect Effects 0.000 description 1
- 230000000284 resting effect Effects 0.000 description 1
- 238000005070 sampling Methods 0.000 description 1
- 230000011218 segmentation Effects 0.000 description 1
- 239000004065 semiconductor Substances 0.000 description 1
- 238000007493 shaping process Methods 0.000 description 1
- 230000006403 short-term memory Effects 0.000 description 1
- 238000004088 simulation Methods 0.000 description 1
- 238000012706 support-vector machine Methods 0.000 description 1
- 230000008685 targeting Effects 0.000 description 1
- 230000002123 temporal effect Effects 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
- 238000003325 tomography Methods 0.000 description 1
- 229910052721 tungsten Inorganic materials 0.000 description 1
- 239000010937 tungsten Substances 0.000 description 1
- 238000012800 visualization Methods 0.000 description 1
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/05—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
- A61B5/055—Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/103—Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
- A61B5/11—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
- A61B5/113—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb occurring during breathing
- A61B5/1135—Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb occurring during breathing by monitoring thoracic expansion
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61N—ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
- A61N5/00—Radiation therapy
- A61N5/10—X-ray therapy; Gamma-ray therapy; Particle-irradiation therapy
- A61N5/1048—Monitoring, verifying, controlling systems and methods
- A61N5/1049—Monitoring, verifying, controlling systems and methods for verifying the position of the patient with respect to the radiation beam
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/248—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B2576/00—Medical imaging apparatus involving image processing or analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20076—Probabilistic image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Definitions
- REAL-TIME ANATOMIC POSITION MONITORING FOR RADIOTHERAPY TREATMENT PRIORITY CLAIM [0001] This application claims the benefit of priority to: United States Patent Application No. 17/302,254, filed April 28, 2021, and titled “REAL-TIME ANATOMIC POSITION MONITORING IN RADIOTHERAPY USING MACHINE LEARNING REGRESSION”; and United States Patent Application No. 17/302,254, filed April 28, 2021, and titled “REAL-TIME ANATOMIC POSITION MONITORING FOR RADIOTHERAPY TREATMENT CONTROL”; each of which is incorporated herein by reference in its entirety.
- Embodiments of the present disclosure pertain generally to medical image and artificial intelligence processing techniques used in connection with a radiation therapy planning and treatment system.
- In particular, the present disclosure pertains to using machine learning technologies to estimate the anatomic position and movement of a human subject during a radiation therapy session, and to provide control of a radiotherapy machine based on such estimated position and movement.
- Radiation therapy (or “radiotherapy”) can be used to treat cancers or other ailments in mammalian (e.g., human and animal) tissue.
- One such radiotherapy technique is provided using a Gamma Knife, by which a patient is irradiated by a large number of low-intensity gamma rays that converge with high intensity and high precision at a target (e.g., a tumor).
- Another radiotherapy technique uses a linear accelerator (LINAC), whereby a tumor is irradiated by high-energy particles (e.g., electrons, protons, ions, high-energy photons, and the like).
- The placement and dose of the radiation beam must be accurately controlled to ensure that the tumor receives the prescribed radiation, and the beam must be placed so as to minimize damage to the surrounding healthy tissue, often called the organ(s) at risk (OARs).
- Some imaging techniques have been developed to estimate the relative motion of an object contained in a specified region of interest, i.e., motion relative to a reference volume, which contains auxiliary information such as contoured regions of interest or the dose plan.
- The underlying 3D patient motion may be estimated (inferred) from instantaneous partial measurements, such as 2D images acquired in real time.
- Some of these estimation techniques use 2D kV projections or 2D MRI slices to determine an estimate of movement in two-dimensional planes, but they are limited because 2D images cannot fully track the movement of the various objects in three dimensions.
- A radiotherapy treatment performed by a radiotherapy machine may be adapted or modified using a relative motion estimation of a region of interest.
- Adapting or modifying the radiotherapy treatment may include one or more of: providing a command to control a radiotherapy beam that is being provided or is planned to be provided by the radiotherapy machine; changing a position of a radiotherapy beam from the radiotherapy machine, based on the relative motion estimation; changing a shape of a radiotherapy beam from the radiotherapy machine, based on the relative motion estimation; or gating a radiotherapy beam (e.g., stopping an output of the radiotherapy beam, or starting an output of the radiotherapy beam), based on the relative motion estimation.
- Other variations or operations for radiotherapy treatment control may also be triggered or affected by the resulting motion estimation.
- Operations for monitoring anatomic position include: obtaining three-dimensional image data corresponding to the subject, the three-dimensional image data including a reference volume that represents the patient anatomy in three dimensions and at least one region of interest defined within the three dimensions; obtaining two-dimensional image data corresponding to the subject, the two-dimensional image data captured during the radiotherapy treatment session and capturing at least a portion of the region of interest; extracting features from the two-dimensional image data; providing the extracted features as input to a machine learning regression model, the machine learning regression model trained to estimate a spatial transformation in the three dimensions of the reference volume from features extracted from two-dimensional image data; and obtaining, from output of the machine learning regression model, a relative motion estimation of the at least one region of interest, with the relative motion estimation indicating motion of the at least one region of interest relative to the reference volume, as estimated from the extracted features.
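The sequence of operations above can be sketched as a single monitoring step. This is an illustrative sketch only: `extract_features` is a toy stand-in (per-row intensity statistics) for the feature extraction described later in the disclosure, and `LinearMotionRegressor` is a minimal least-squares regressor standing in for whatever regression model is actually trained; the six-parameter rigid-motion output is an assumption for the example.

```python
import numpy as np

def extract_features(img):
    # Toy feature extractor: per-row intensity statistics of a 2D image.
    # A real system would use registration-derived features instead.
    return np.concatenate([img.mean(axis=1), img.std(axis=1)])

class LinearMotionRegressor:
    """Least-squares map from feature vectors to six rigid-motion
    parameters (e.g., three translations and three rotations)."""
    def fit(self, X, y):
        X1 = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
        self.W, *_ = np.linalg.lstsq(X1, y, rcond=None)
        return self

    def predict(self, X):
        X1 = np.hstack([X, np.ones((len(X), 1))])
        return X1 @ self.W

def monitor_step(image_pair, regressor):
    # Features from the two (e.g., orthogonal) 2D images are concatenated
    # into one feature vector, which the regressor maps to the motion of
    # the region of interest relative to the reference volume.
    f = np.concatenate([extract_features(img) for img in image_pair])
    return regressor.predict(f[None, :])[0]
```

In use, the regressor would be fit before the treatment session and `monitor_step` called on each newly acquired pair of 2D images.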
- The two-dimensional image data (e.g., captured in real time) comprises a first two-dimensional image captured at a first orientation and a second two-dimensional image captured at a second orientation.
- The first two-dimensional image may be captured from a first plane, and the second two-dimensional image from a second plane that is orthogonal to the first plane.
- The first two-dimensional image may be captured at a first time during the radiotherapy treatment session, and the second two-dimensional image at a second time during the session; for example, the second time may occur within 300 milliseconds after the first time.
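Pairing two planes acquired at slightly different times could be handled by a small helper like the following sketch. The function name, frame representation, and pairing rule are all assumptions for illustration; the 300 ms default simply mirrors the timing example above.

```python
def latest_orthogonal_pair(frames, max_gap_ms=300):
    """Pick the most recent image from each of two planes, accepting the
    pair only if the two were acquired within max_gap_ms of each other.

    frames: iterable of (timestamp_ms, plane_id, image) tuples, in
    acquisition order, with exactly two distinct plane_ids.
    """
    latest = {}
    for t, plane, img in frames:
        latest[plane] = (t, img)          # keep only the newest per plane
    if len(latest) != 2:
        return None                       # need one image from each plane
    (ta, ia), (tb, ib) = latest.values()
    if abs(ta - tb) > max_gap_ms:
        return None                       # pair too far apart in time
    return ia, ib
```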
- Features extracted from the two-dimensional image data include a first set of features extracted from the first two-dimensional image and a second set of features extracted from the second two-dimensional image.
- The first set of features and the second set of features may be combined into a multi-dimensional feature vector, and the machine learning regression model is trained to process the multi-dimensional feature vector as input.
- The extracting of the first set of features and the second set of features may include extracting respective features within the at least one region of interest.
- The extracting of the respective features within the at least one region of interest may include performing deformable image registration and applying dimensionality reduction techniques.
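One common dimensionality reduction technique for registration output, and one named later in this document, is principal component analysis. The sketch below reduces a set of flattened deformable-registration displacement fields to their leading PCA coefficients; the function name and array layout are assumptions for illustration.

```python
import numpy as np

def pca_reduce(displacement_fields, n_components=3):
    """Reduce flattened displacement fields to leading PCA coefficients.

    displacement_fields: (n_samples, n_voxels * 3) array, each row one
    deformable-registration result within the region of interest.
    Returns (coeffs, components, mean) so fields can be reconstructed
    approximately as mean + coeffs @ components.
    """
    mean = displacement_fields.mean(axis=0)
    centered = displacement_fields - mean
    # SVD yields the principal motion modes without forming a covariance
    # matrix explicitly.
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    components = Vt[:n_components]        # principal motion modes
    coeffs = centered @ components.T      # low-dimensional features
    return coeffs, components, mean
```

The low-dimensional coefficients, rather than raw pixel data, would then serve as the features fed to the regression model.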
- The three-dimensional image data is captured prior to the radiotherapy treatment session, and comprises a three-dimensional magnetic resonance (MR) volume or a three-dimensional computed tomography (CT) volume.
- The first and second two-dimensional images may be kilovoltage (kV) x-ray projection images, and extracting the first set of features and the second set of features comprises extracting fiducial positions from the respective kV x-ray projection images.
- A training process may include training the machine learning regression model prior to the radiotherapy treatment session, with the training further including fitting the regression model with a mapping identified between pairs of image transformation parameters and corresponding multi-orientation features (e.g., extracted from the volumes captured prior to the radiotherapy treatment session).
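The pairing of transformation parameters with multi-orientation features could be assembled as follows. This is a hedged sketch: `simulate_views` (rendering 2D views of a transformed reference volume) and `extract_features` are placeholders for the disclosure's unspecified implementations, and the helper name is hypothetical.

```python
import numpy as np

def build_training_set(reference_volume, sample_params, simulate_views,
                       extract_features):
    """Pair sampled transformation parameters with multi-orientation features.

    For each sampled parameter vector, synthetic 2D views are rendered
    from the (transformed) reference volume and features are extracted
    from each view; the resulting (features, params) pairs are what the
    regression model is fit on.
    """
    X, y = [], []
    for params in sample_params:
        views = simulate_views(reference_volume, params)   # e.g., 2 planes
        feats = np.concatenate([extract_features(v) for v in views])
        X.append(feats)
        y.append(params)
    return np.asarray(X), np.asarray(y)
```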
- The two-dimensional image data may include magnetic resonance (MR) imaging data, where the reference volume is acquired with a first MR pulse acquisition sequence and the two-dimensional image data is acquired with a second MR pulse acquisition sequence.
- Use of multiple imaging contrasts may include capturing an intermediate three-dimensional reference volume using the second MR pulse acquisition sequence, prior to the radiotherapy treatment session, and performing a registration of the intermediate three-dimensional reference volume to the reference volume; the training of the machine learning regression model then makes use of this registration, as does the analysis of the extracted features.
- Use of multiple imaging contrasts includes obtaining image templates from additional two-dimensional image data corresponding to the subject, the additional two-dimensional image data obtained using the second MR pulse acquisition sequence prior to the radiotherapy treatment session; performing a registration of the image templates to the reference volume, to determine an offset between the image templates and the reference volume; and modifying the three-dimensional image data based on the offset, such that the machine learning regression model is trained to use regression with the modified three-dimensional image data.
- Extracting features from the two-dimensional image data may include use of the image templates as registration targets for feature extraction; further, the relative motion estimation of the at least one region of interest may include use of the offset.
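Folding the template-to-reference offset back into the motion estimate could, in the simplest translation-only case, look like the sketch below. This is an assumption-laden simplification: a full implementation would compose complete rigid (e.g., 4x4 homogeneous) transforms rather than adding translation vectors, and the function name is hypothetical.

```python
import numpy as np

def correct_for_offset(estimated_translation, template_offset):
    """Report a translation-only motion estimate in the reference
    volume's frame by composing it with the registration offset found
    between the second-contrast image templates and the reference volume."""
    return np.asarray(estimated_translation) + np.asarray(template_offset)
```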
- Further operations may include performing a radiotherapy treatment with a radiotherapy machine, using the relative motion estimation of the region of interest.
- Performing the radiotherapy treatment may include one or more of: changing a position of a radiotherapy beam from the radiotherapy machine, based on the relative motion estimation; changing a shape of a radiotherapy beam from the radiotherapy machine, based on the relative motion estimation; or gating a radiotherapy beam (e.g., stopping an output of the radiotherapy beam, or starting an output of the radiotherapy beam), based on the relative motion estimation.
- Other variations or operations may also be triggered or affected by the resulting motion estimation.
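Of the control options above, beam gating is the simplest to illustrate. The rule and the 3 mm threshold below are illustrative assumptions, not values taken from the disclosure.

```python
import numpy as np

def gating_decision(motion_mm, threshold_mm=3.0):
    """Toy gating rule: hold the beam when the estimated displacement of
    the region of interest exceeds a threshold (illustrative default)."""
    displacement = float(np.linalg.norm(motion_mm))
    return "BEAM_ON" if displacement <= threshold_mm else "BEAM_HOLD"
```

A real treatment control loop would also need hysteresis and latency handling so the beam is not toggled on every noisy estimate.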
- FIG. 1 illustrates a radiotherapy system, according to some examples.
- FIG. 2A illustrates a radiation therapy system having radiation therapy output configured to provide a therapy beam, according to some examples.
- FIG. 2B illustrates a system including a combined radiation therapy system and an imaging system, such as a cone beam computed tomography (CBCT) imaging system, according to some examples.
- FIG. 3 illustrates a partially cut-away view of a system including a combined radiation therapy system and an imaging system, such as a nuclear magnetic resonance (MR) imaging (MRI) system, according to some examples.
- FIG. 4 illustrates anatomic position monitoring operations, according to some examples.
- FIG. 5 illustrates a treatment workflow for performing anatomic position monitoring, using results of a trained machine learning regression model, according to some examples.
- FIG. 6 illustrates a training workflow for an anatomic position monitoring algorithm, implemented with a machine learning regression model, according to some examples.
- FIG. 7 illustrates feature extraction using deformable registration and principal component analysis, according to some examples.
- FIG. 8 illustrates a corrective procedure using registration for feature extraction, to account for offsets due to different contrast images, according to some examples.
- FIG. 9 illustrates a regression machine learning workflow for use in estimating patient motion during a radiotherapy session, according to some examples.
- FIG. 10 illustrates a flowchart for a method of training a regression machine learning model for generating estimated motion in a region of interest, according to some examples.
- FIG. 11 illustrates a flowchart for a method of using a trained regression machine learning model for estimating movement in a region of interest, according to some examples.
- FIG. 12 illustrates a flowchart for a method performed by an image processing computing system in performing training and treatment workflows, according to some examples.
- FIG. 13 illustrates an exemplary block diagram of a machine on which one or more of the methods as discussed herein can be implemented. DETAILED DESCRIPTION [0030] In the following detailed description, reference is made to the accompanying drawings, which form a part hereof, and in which are shown, by way of illustration, specific embodiments in which the present disclosure may be practiced.
- This anatomic position monitoring (APM) technique includes the analysis of 2D images, captured on an ongoing basis, with a trained regression model.
- This trained regression model generates estimated transformation parameters that are used to infer the true 3D motion of a specified region of interest (and related anatomical structure).
- Features are extracted from one or more 2D images captured of a patient and analyzed with the trained regression model.
- This machine learning regression model is trained on different types and characteristics of image transformation and image features, generated from a 3D reference volume of imaging data captured from the same patient.
- The extracted features are used to estimate an image transformation that describes the movement of a region of interest within the 3D volume. The movement of this region of interest, within a 3D space, may be used for a variety of radiotherapy treatment adaptations.
- An MR-LINAC device may consist of a LINAC integrated directly with a magnetic resonance (MR) scanner.
- The following APM methods and implementations use a machine learning regressor model to analyze movement from 2D images captured from one or multiple planes. Specifically, a regressor model is trained to learn the relationship between features of the instantaneous 2D image(s) and the relative motion parameters, i.e., motion relative to a 3D reference volume. If more than one 2D plane of acquisition is used, the model can straightforwardly learn to map the multi-view information to such relative motion parameters. [0036]
- The technical benefits of the following APM techniques include improved accuracy in the delivery of radiotherapy treatment dosage from a radiotherapy machine, and the ability to produce or perform more accurate radiotherapy machine treatment plans while evaluating less data or fewer user inputs.
- FIG. 1 illustrates a radiotherapy system 100 adapted for using machine learning models for assisting anatomic position monitoring.
- The anatomic position monitoring may be used to determine a patient state, to enable the radiotherapy system 100 to provide radiation therapy to a patient based on specific aspects of captured medical imaging data.
- The radiotherapy system 100 includes an image processing computing system 110 which hosts patient state processing logic 120.
- The image processing computing system 110 may be connected to a network (not shown), and such network may be connected to the Internet.
- Such a network can connect the image processing computing system 110 with one or more medical information sources (e.g., a radiology information system (RIS), a medical record system such as an electronic medical record (EMR) or electronic health record (EHR) system, or an oncology information system (OIS)), one or more image data sources 150, an image acquisition device 170, and a treatment device 180 (e.g., a radiation therapy device).
- The image processing computing system 110 can be configured to perform image patient state operations by executing instructions or data from the patient state processing logic 120, as part of operations to generate and customize radiation therapy treatment plans to be used by the treatment device 180.
- The image processing computing system 110 may include processing circuitry 112, memory 114, a storage device 116, and other hardware and software-operable features such as a user interface 140, a communication interface, and the like.
- The storage device 116 may store computer-executable instructions, such as an operating system, radiation therapy treatment plans (e.g., original treatment plans, adapted treatment plans, or the like), software programs (e.g., radiotherapy treatment plan software, and artificial intelligence implementations such as machine learning models, deep learning models, and neural networks), and any other computer-executable instructions to be executed by the processing circuitry 112.
- The processing circuitry 112 may include one or more general-purpose processing devices, such as a microprocessor, a central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), or the like. More particularly, the processing circuitry 112 may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or processors implementing a combination of instruction sets.
- The processing circuitry 112 may also be implemented by one or more special-purpose processing devices, such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a System on a Chip (SoC), or the like. As would be appreciated by those skilled in the art, in some examples, the processing circuitry 112 may be a special-purpose processor, rather than a general-purpose processor.
- the processing circuitry 112 may include one or more known processing devices, such as a microprocessor from the Pentium™, Core™, Xeon™, or Itanium® family manufactured by Intel™, the Turion™, Athlon™, Sempron™, Opteron™, FX™, or Phenom™ family manufactured by AMD™, or any of various processors manufactured by Sun Microsystems.
- the processing circuitry 112 may also include graphical processing units such as a GPU from the GeForce®, Quadro®, or Tesla® family manufactured by Nvidia™, the GMA or Iris™ family manufactured by Intel™, or the Radeon™ family manufactured by AMD™.
- the processing circuitry 112 may also include accelerated processing units such as the Xeon Phi™ family manufactured by Intel™.
- the disclosed embodiments are not limited to any type of processor(s) otherwise configured to meet the computing demands of identifying, analyzing, maintaining, generating, and/or providing large amounts of data or manipulating such data to perform the methods disclosed herein.
- the term "processor" may include more than one processor, for example, a multi-core design or a plurality of processors each having a multi-core design.
- the processing circuitry 112 can execute sequences of computer program instructions, stored in memory 114, and accessed from the storage device 116, to perform various operations, processes, and methods that will be explained in greater detail below.
- the memory 114 may comprise read-only memory (ROM), a phase-change random access memory (PRAM), a static random access memory (SRAM), a flash memory, a random access memory (RAM), a dynamic random access memory (DRAM) such as synchronous DRAM (SDRAM), an electrically erasable programmable read-only memory (EEPROM), a static memory (e.g., flash memory, flash disk, static random access memory) as well as other types of random access memories, a cache, a register, a compact disc read-only memory (CD-ROM), a digital versatile disc (DVD) or other optical storage, a cassette tape, other magnetic storage device, or any other non-transitory medium that may be used to store information including image, data, or computer executable instructions (e.g., stored in any format) capable of being accessed by the processing circuitry 112, or any other type of computer device.
- the computer program instructions can be accessed by the processing circuitry 112, read from the ROM, or any other suitable memory location, and loaded into the RAM for execution by the processing circuitry 112.
- the storage device 116 may constitute a drive unit that includes a machine-readable medium on which is stored one or more sets of instructions and data structures (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein (including, in various examples, the patient state processing logic 120 and the user interface 140).
- the instructions may also reside, completely or at least partially, within the memory 114 and/or within the processing circuitry 112 during execution thereof by the image processing computing system 110, with the memory 114 and the processing circuitry 112 also constituting machine-readable media.
- the memory 114 or the storage device 116 may constitute a non-transitory computer-readable medium.
- the memory 114 or the storage device 116 may store or load instructions for one or more software applications on the computer-readable medium.
- Software applications stored or loaded with the memory 114 or the storage device 116 may include, for example, an operating system for common computer systems as well as for software-controlled devices.
- the image processing computing system 110 may also operate a variety of software programs comprising software code for implementing the patient state processing logic 120 and the user interface 140.
- the memory 114 and the storage device 116 may store or load an entire software application, part of a software application, or code or data that is associated with a software application, which is executable by the processing circuitry 112.
- the memory 114 or the storage device 116 may store, load, or manipulate one or more radiation therapy treatment plans, imaging data, patient state data, dictionary entries, artificial intelligence model data, labels, and mapping data, etc. It is contemplated that software programs may be stored not only on the storage device 116 and the memory 114 but also on a removable computer medium, such as a hard drive, a computer disk, a CD-ROM, a DVD, an HD-DVD, a Blu-Ray DVD, a USB flash drive, an SD card, a memory stick, or any other suitable medium; such software programs may also be communicated or received over a network.
- the image processing computing system 110 may include a communication interface, network interface card, and communications circuitry.
- An example communication interface may include, for example, a network adaptor, a cable connector, a serial connector, a USB connector, a parallel connector, a high-speed data transmission adaptor (e.g., such as fiber optic, USB 3.0, Thunderbolt, and the like), a wireless network adaptor (e.g., such as an IEEE 802.11/Wi-Fi adapter), a telecommunication adapter (e.g., to communicate with 3G, 4G/LTE, and 5G networks and the like), and the like.
- a communication interface may include one or more digital and/or analog communication devices that permit a machine to communicate with other machines and devices, such as remotely located components, via a network.
- the network may provide the functionality of a local area network (LAN), a wireless network, a cloud computing environment (e.g., software as a service, platform as a service, infrastructure as a service, etc.), a client-server, a wide area network (WAN), and the like.
- the network may be a LAN or a WAN that may include other systems (including additional image processing computing systems or image-based components associated with medical imaging or radiotherapy operations).
- the image processing computing system 110 may obtain image data 160 from the image data source 150, for hosting on the storage device 116 and the memory 114.
- the software programs operating on the image processing computing system 110 may convert or transform medical images of one format (e.g., MRI) to another format (e.g., CT), such as by producing synthetic images, such as a pseudo-CT image.
- the software programs may register or associate a patient medical image (e.g., a CT image or an MR image) with that patient’s dose distribution of radiotherapy treatment (e.g., also represented as an image) so that corresponding image voxels and dose voxels are appropriately associated.
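The voxel association described above can be illustrated with a minimal one-dimensional sketch: nearest-neighbour pairing of image voxels with dose-grid voxels. The function name is hypothetical, and the sketch assumes both grids are axis-aligned, share an origin, and are already rigidly registered (real systems perform full 3D resampling after registration):

```python
def pair_dose_with_image(dose, dose_spacing_mm, n_image, image_spacing_mm):
    """Associate each image voxel with the nearest dose voxel along one
    axis, so that corresponding image and dose values line up.
    Minimal 1D sketch; the general case is 3D with distinct origins."""
    paired = []
    for i in range(n_image):
        pos_mm = i * image_spacing_mm            # image voxel centre
        j = round(pos_mm / dose_spacing_mm)      # nearest dose voxel index
        j = min(max(j, 0), len(dose) - 1)        # clamp to the dose grid
        paired.append(dose[j])
    return paired

# dose computed on a coarse 3 mm grid, image acquired on a 1 mm grid
dose_line = [0.0, 1.0, 2.0]
image_dose = pair_dose_with_image(dose_line, 3.0, n_image=7,
                                  image_spacing_mm=1.0)
```

The appropriateness of the association depends entirely on the quality of the preceding registration; interpolation (rather than nearest-neighbour) is common in practice.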
- the software programs may visualize, hide, emphasize, or de-emphasize some aspect of anatomical features, patient measurements, patient state information, or dose or treatment information, within medical images.
- the storage device 116 and memory 114 may store and host data to perform these purposes, including the image data 160, patient data, and other data required to create and implement a radiation therapy treatment plan and associated patient state estimation operations.
- the processing circuitry 112 may be communicatively coupled to the memory 114 and the storage device 116, and the processing circuitry 112 may be configured to execute computer executable instructions stored thereon from either the memory 114 or the storage device 116.
- the processing circuitry 112 may execute instructions to cause medical images from the image data 160 to be received or obtained in memory 114, and processed using the patient state processing logic 120.
- the image processing computing system 110 may receive image data 160 from the image acquisition device 170 or image data sources 150 via a communication interface and network to be stored or cached in the storage device 116.
- the processing circuitry 112 may also send or update medical images stored in memory 114 or the storage device 116 via a communication interface to another database or data store (e.g., a medical facility database).
- one or more of the systems may form a distributed computing/simulation environment that uses a network to collaboratively perform the embodiments described herein (such as in an edge computing environment).
- such network may be connected to the Internet to communicate with servers and clients that reside remotely on the Internet.
- the processing circuitry 112 may utilize software programs (e.g., a treatment planning software) along with the image data 160 and other patient data to create a radiation therapy treatment plan.
- the image data 160 may include 2D or 3D volume imaging, such as from a CT or MR.
- the processing circuitry 112 may utilize aspects of AI such as machine learning, deep learning, and neural networks to generate or control various aspects of the treatment plan, including in response to an estimated patient state or patient movement as discussed in the following examples.
- such software programs may utilize patient state processing logic 120 to implement a patient state determination workflow 130, using the techniques further discussed herein.
- the processing circuitry 112 may then modify and transmit the executable radiation therapy treatment plan via a communication interface and the network to the treatment device 180, where the radiation therapy plan will be used to treat a patient with radiation via the treatment device, consistent with results of the patient state determination workflow 130.
- Other outputs and uses of the software programs and the patient state determination workflow 130 may occur with use of the image processing computing system 110.
- the processing circuitry 112 may execute a software program that invokes the patient state processing logic 120 to implement functions including aspects of image processing and registration, feature extraction, machine learning model processing, and the like.
- the image data 160 may include one or more MRI images (e.g., 2D MRI, 3D MRI, 2D streaming MRI, 4D MRI, 4D volumetric MRI, 4D cine MRI, etc.), functional MRI images (e.g., fMRI, DCE-MRI, diffusion MRI), Computed Tomography (CT) images (e.g., 2D CT, Cone beam CT, 3D CT, 4D CT), ultrasound images (e.g., 2D ultrasound, 3D ultrasound, 4D ultrasound), Positron Emission Tomography (PET) images, X-ray images, fluoroscopic images, radiotherapy portal images, Single-Photon Emission Computed Tomography (SPECT) images, computer-generated synthetic images (e.g., pseudo-CT images), and the like.
- the image data 160 may also include or be associated with auxiliary information, such as segmentations/contoured images, or dose images.
- the image data 160 may be received from the image acquisition device 170 and stored in one or more of the image data sources 150 (e.g., a Picture Archiving and Communication System (PACS), a Vendor Neutral Archive (VNA), a medical record or information system, a data warehouse, etc.).
- the image acquisition device 170 may comprise an MRI imaging device, a CT imaging device, a PET imaging device, an ultrasound imaging device, a fluoroscopic device, a SPECT imaging device, an integrated Linear Accelerator and MRI imaging device, or other medical imaging devices for obtaining the medical images of the patient.
- the image data 160 may be received and stored in any type of data or any type of format (e.g., in a Digital Imaging and Communications in Medicine (DICOM) format) that the image acquisition device 170 and the image processing computing system 110 may use to perform operations consistent with the disclosed embodiments.
- the image acquisition device 170 may be integrated with the treatment device 180 as a single apparatus (e.g., an MRI device combined with a linear accelerator, also referred to as an “MR-LINAC”, as shown and described in FIG. 3 below).
- MR-LINAC can be used, for example, to precisely determine a location of a target organ or a target tumor in the patient, so as to direct radiation therapy accurately according to the radiation therapy treatment plan to a predetermined target.
- a radiation therapy treatment plan may provide information about a particular radiation dose to be applied to each patient.
- the radiation therapy treatment plan may also include other radiotherapy information, such as beam angles, dose-histogram-volume information, the number of radiation beams to be used during therapy, the dose per beam, and the like.
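As an illustration only, the plan fields mentioned above (prescribed dose, number of beams, beam angles, dose per beam) might be organized as follows; the class and field names are hypothetical and not taken from any clinical treatment planning system:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Beam:
    gantry_angle_deg: float   # beam angle
    dose_gy: float            # dose contribution of this beam (Gy)

@dataclass
class TreatmentPlan:
    """Illustrative container for radiotherapy plan information."""
    patient_id: str
    prescribed_dose_gy: float
    beams: List[Beam] = field(default_factory=list)

    @property
    def n_beams(self) -> int:
        """Number of radiation beams to be used during therapy."""
        return len(self.beams)

    @property
    def total_beam_dose_gy(self) -> float:
        """Sum of the per-beam dose contributions."""
        return sum(b.dose_gy for b in self.beams)

plan = TreatmentPlan("anon-001", prescribed_dose_gy=2.0,
                     beams=[Beam(0.0, 0.7), Beam(120.0, 0.7), Beam(240.0, 0.6)])
```

In practice such information is exchanged in standardized formats (e.g., DICOM RT Plan objects) rather than ad hoc structures.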
- the image processing computing system 110 may communicate with an external database through a network to send/receive a plurality of various types of data related to image processing and radiotherapy operations.
- an external database may include machine data that is information associated with the treatment device 180, the image acquisition device 170, or other machines relevant to radiotherapy or medical procedures.
- Machine data information may include radiation beam size, arc placement, beam on and off time duration, machine parameters, segments, multi-leaf collimator (MLC) configuration, gantry speed, MRI pulse sequence, and the like.
- the external database may be a storage device and may be equipped with appropriate database administration software programs. Further, such databases or data sources may include a plurality of devices or systems located either in a central or a distributed manner.
- the image processing computing system 110 can collect and obtain data, and communicate with other systems, via a network using one or more communication interfaces, which are communicatively coupled to the processing circuitry 112 and the memory 114.
- a communication interface may provide communication connections between the image processing computing system 110 and radiotherapy system components (e.g., permitting the exchange of data with external devices).
- the communication interface may, in some examples, have appropriate interfacing circuitry to connect an output device 142 or an input device 144 to the user interface 140, which may be a hardware keyboard, a keypad, or a touch screen through which a user may input information into the radiotherapy system 100.
- the output device 142 may include a display device which outputs a representation of the user interface 140 and one or more aspects, visualizations, or representations of the medical images.
- the output device 142 may include one or more display screens that display medical images, interface information, treatment planning parameters (e.g., contours, dosages, beam angles, labels, maps, etc.) treatment plans, a target, localizing a target or tracking a target, patient state estimations (e.g., a 3D volume), or any related information to the user.
- the input device 144 connected to the user interface 140 may be a keyboard, a keypad, a touch screen, or any type of device with which a user may input information to the radiotherapy system 100.
- the output device 142, the input device 144, and features of the user interface 140 may be integrated into a single device such as a smartphone or tablet computer, e.g., Apple iPad®, Lenovo Thinkpad®, Samsung Galaxy®, etc.
- a virtual machine can be software that functions as hardware. Therefore, a virtual machine can include at least one or more virtual processors, one or more virtual memories, and one or more virtual communication interfaces that together function as hardware.
- the image processing computing system 110, the image data sources 150, or like components may be implemented as a virtual machine or within a cloud-based virtualization environment.
- the patient state processing logic 120 or other software programs may cause the computing system to communicate with the image data sources 150 to read images into memory 114 and the storage device 116, or store images or associated data from the memory 114 or the storage device 116 to and from the image data sources 150.
- the image data source 150 may be configured to store and provide a plurality of images (e.g., 3D MRI, 4D MRI, 2D MRI slice images, CT images, 2D Fluoroscopy images, X-ray images, raw data from MR scans or CT scans, Digital Imaging and Communications in Medicine (DICOM) metadata, etc.) that the image data source 150 hosts, from image sets in image data 160 obtained from one or more patients via the image acquisition device 170, including in real-time settings, defined further below.
- the image data source 150 or other databases may also store data to be used by the patient state processing logic 120 when executing a software program that performs patient state estimation operations, or when creating, monitoring, or modifying radiation therapy treatment plans.
- various databases may store machine learning or other AI models, including the algorithm parameters, weights, or other data constituting the model learned by the network and the resulting predicted or estimated data.
- the image processing computing system 110 thus may obtain and/or receive the image data 160 (e.g., 2D MRI slice images, CT images, 2D Fluoroscopy images, X-ray images, 3D MRI images, 4D MRI images, etc.) from the image data source 150, the image acquisition device 170, the treatment device 180 (e.g., an MR-LINAC), or other information systems, in connection with performing image patient state estimation as part of treatment or diagnostic operations.
- the image acquisition device 170 can be configured to acquire one or more images of the patient’s anatomy relevant to a region of interest (e.g., a target organ, a target tumor or both).
- Each image, typically a 2D image or slice, can include one or more parameters (e.g., a 2D slice thickness, an orientation, an origin, a field of view, etc.).
- the image acquisition device 170 can acquire a 2D slice in any orientation.
- an orientation of the 2D slice can include a sagittal orientation, a coronal orientation, or an axial orientation.
- the processing circuitry 112 can adjust one or more parameters, such as the thickness and/or orientation of the 2D slice, to include the target organ and/or target tumor.
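Extraction of an oriented 2D slice from a 3D volume can be sketched as follows; this minimal example assumes a hypothetical `[z][y][x]` axis convention (axial/coronal/sagittal), which real imaging data would specify via its metadata:

```python
def extract_slice(volume, orientation, index):
    """Pull a 2D slice out of a 3D volume stored as nested lists
    indexed [z][y][x] (an illustrative convention, not DICOM's)."""
    if orientation == "axial":       # fixed z: a y-x plane
        return [row[:] for row in volume[index]]
    if orientation == "coronal":     # fixed y: a z-x plane
        return [plane[index][:] for plane in volume]
    if orientation == "sagittal":    # fixed x: a z-y plane
        return [[row[index] for row in plane] for plane in volume]
    raise ValueError(f"unknown orientation: {orientation}")

# toy 2x3x4 volume with distinct voxel values
vol = [[[100 * z + 10 * y + x for x in range(4)]
        for y in range(3)]
       for z in range(2)]
axial = extract_slice(vol, "axial", 0)
coronal = extract_slice(vol, "coronal", 1)
sagittal = extract_slice(vol, "sagittal", 2)
```

A real implementation would also carry the slice thickness, origin, and field-of-view parameters mentioned above alongside the pixel data.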
- 2D slices can be determined from information such as a 3D MRI volume. Such 2D slices can be acquired by the image acquisition device 170 in “real-time” while a patient is undergoing radiation therapy treatment, for example, when using the treatment device 180 (with “real-time” meaning, in an example, acquiring the data in 10 milliseconds or less).
- real-time may include a timeframe within (e.g., up to) 300 milliseconds.
- real-time may include a time period fast enough for a clinical problem being solved by techniques described herein. In this example, real-time may vary depending on target speed, radiotherapy margins, lag, response time of a treatment device, etc.
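The clinically-defined notion of "real-time" above can be framed as a latency budget derived from target speed and treatment margin. The following sketch uses hypothetical function names and illustrative numbers only, not clinical guidance:

```python
def max_tolerable_latency_ms(target_speed_mm_s, margin_mm, system_lag_ms=0.0):
    """Latency budget: the imaging and processing chain must report motion
    before the target can travel across the treatment margin.
    Illustrative model only; real gating rules are more involved."""
    if target_speed_mm_s <= 0:
        return float("inf")          # a stationary target imposes no deadline
    budget_ms = 1000.0 * margin_mm / target_speed_mm_s
    return budget_ms - system_lag_ms

# a target drifting at 10 mm/s with a 3 mm margin leaves a 300 ms budget,
# matching the upper "real-time" bound mentioned above
budget = max_tolerable_latency_ms(target_speed_mm_s=10.0, margin_mm=3.0)
```

This makes concrete why "real-time" varies with target speed, margins, and system lag: a faster target or tighter margin shrinks the budget.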
- the patient state processing logic 120 in the image processing computing system 110 is depicted as implementing a patient state determination workflow 130 with various aspects of monitoring and estimation of a patient state provided by models or algorithms.
- the patient state determination workflow 130 uses a real-time image input stream 132 (e.g., 2D partial measurements, such as from a CT or MR), which is analyzed by anatomic position monitoring 136 functions to estimate a patient state.
- the patient state determination workflow 130 uses a real-time sensor data stream 134 (e.g., breathing belt measurements, other external, non-image sensor measurements) which is analyzed by anatomic position monitoring 136 functions to estimate or refine the patient state.
- the patient state determination workflow 130 further involves aspects of anatomic position monitoring 136, such as determined within the trained regression model discussed in further examples below.
- the data provided from anatomic position monitoring 136 may be used for producing or controlling a patient state estimation 138.
- the patient state estimation 138 may produce data that is used to control the treatment device 180 or other aspects of the radiotherapy session.
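The workflow above can be sketched as a toy pipeline. The fusion weighting, gating threshold, and all function names are placeholders invented for illustration; the actual monitoring and estimation models are described in the further examples below:

```python
def anatomic_position_monitoring(image_frame, sensor_sample, reference_pos_mm):
    """Toy stand-in for anatomic position monitoring 136: fuse an
    image-derived target position with an external sensor surrogate.
    The 0.8/0.2 weighting is an arbitrary placeholder."""
    fused = 0.8 * image_frame["target_pos_mm"] + 0.2 * sensor_sample["pos_mm"]
    return fused - reference_pos_mm   # relative motion estimate (mm)

def patient_state_estimation(motion_mm, gate_threshold_mm=5.0):
    """Toy stand-in for patient state estimation 138: decide whether the
    estimated motion still permits beam-on for the treatment device."""
    return {"motion_mm": motion_mm,
            "beam_on": abs(motion_mm) <= gate_threshold_mm}

state = patient_state_estimation(
    anatomic_position_monitoring({"target_pos_mm": 12.0},
                                 {"pos_mm": 10.0},
                                 reference_pos_mm=10.0))
```

The point of the sketch is the data flow: the real-time image and sensor streams feed the monitoring step, whose motion estimate drives the state estimate that controls the treatment device.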
- FIG. 2A illustrates a radiation therapy device 202 that may include a radiation source, such as an X-ray source or a linear accelerator, a couch 216, an imaging detector 214, and a radiation therapy output 204.
- the radiation therapy device 202 may be configured to emit a radiation beam 208 to provide therapy to a patient.
- the radiation therapy output 204 can include one or more attenuators or collimators, such as an MLC.
- an MLC may be used for shaping, directing, or modulating an intensity of a radiation therapy beam to the specified target locus within the patient.
- the leaves of the MLC, for instance, can be automatically positioned to define an aperture approximating a tumor cross-section or projection, and cause modulation of the radiation therapy beam.
- the leaves can include metallic plates, such as comprising tungsten, with a long axis of the plates oriented parallel to a beam direction and having ends oriented orthogonally to the beam direction.
- a “state” of the MLC can be adjusted adaptively during a course of radiation therapy treatment, such as to establish a therapy beam that better approximates a shape or location of the tumor or other target locus.
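The idea of leaves defining an aperture that approximates a tumor projection can be sketched as follows. Each row of a beam's-eye-view mask corresponds to one opposed leaf pair; the function name and the mask are invented for illustration:

```python
def leaf_positions_from_mask(mask):
    """For each leaf pair (one row of the beam's-eye-view target mask),
    open the leaves just wide enough to cover the target projection.
    mask: rows of 0/1; returns (leftmost, rightmost) open column indices
    per row, or None where the leaf pair stays fully closed.
    Illustrative only: real leaf sequencing also handles leaf width,
    travel limits, and interdigitation constraints."""
    positions = []
    for row in mask:
        open_cols = [i for i, v in enumerate(row) if v]
        positions.append((open_cols[0], open_cols[-1]) if open_cols else None)
    return positions

# toy beam's-eye-view projection of a tumor cross-section
mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 0, 0],
]
aperture = leaf_positions_from_mask(mask)
```

Adapting the MLC "state" during treatment then amounts to recomputing such an aperture as the target projection changes.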
- a patient can be positioned in a region 212 and supported by the treatment couch 216 to receive a radiation therapy dose, according to a radiation therapy treatment plan.
- the radiation therapy output 204 can be mounted or attached to a gantry 206 or other mechanical support.
- One or more chassis motors may rotate the gantry 206 and the radiation therapy output 204 around couch 216 when the couch 216 is inserted into the treatment area.
- gantry 206 may be continuously rotatable around couch 216 when the couch 216 is inserted into the treatment area. In another example, gantry 206 may rotate to a predetermined position when the couch 216 is inserted into the treatment area.
- the gantry 206 can be configured to rotate the therapy output 204 around an axis (“A”). Both the couch 216 and the radiation therapy output 204 can be independently moveable to other positions around the patient, such as moveable in transverse direction (“T”), moveable in a lateral direction (“L”), or as rotation about one or more other axes, such as rotation about a transverse axis (indicated as “R”).
- a controller communicatively connected to one or more actuators may control the couch 216 movements or rotations in order to properly position the patient in or out of the radiation beam 208 according to a radiation therapy treatment plan.
- Both the couch 216 and the gantry 206 are independently moveable from one another in multiple degrees of freedom, which allows the patient to be positioned such that the radiation beam 208 can target the tumor precisely.
- the MLC may be integrated and included within gantry 206 to deliver the radiation beam 208 of a certain shape.
- the coordinate system (including axes A, T, and L) shown in FIG. 2A can have an origin located at an isocenter 210.
- the isocenter can be defined as a location where the central axis of the radiation beam 208 intersects the origin of a coordinate axis, such as to deliver a prescribed radiation dose to a location on or within a patient.
- the isocenter 210 can be defined as a location where the central axis of the radiation beam 208 intersects the patient for various rotational positions of the radiation therapy output 204 as positioned by the gantry 206 around the axis A.
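The isocenter definition above can be checked with a small 2D geometry sketch: whatever the gantry angle, the beam's central axis passes through the isocenter. The angle convention and SAD value below are illustrative assumptions, not a vendor standard:

```python
import math

def source_position(gantry_angle_deg, sad_mm=1000.0):
    """Source location for a given gantry angle, on a circle of radius
    SAD (source-to-axis distance) centred on the isocenter at the origin.
    Illustrative convention: 0 degrees places the source directly above."""
    a = math.radians(gantry_angle_deg)
    return (sad_mm * math.sin(a), sad_mm * math.cos(a))

def central_axis_distance_to_isocenter(gantry_angle_deg):
    """Perpendicular distance from the isocenter (origin) to the beam's
    central axis. It is zero for every gantry angle, which is what makes
    the isocenter a natural origin for the treatment coordinate system."""
    sx, sy = source_position(gantry_angle_deg)
    n = math.hypot(sx, sy)
    dx, dy = -sx / n, -sy / n          # unit vector: source -> isocenter
    # 2D cross product of (isocenter - source) with the axis direction
    return abs((0.0 - sx) * dy - (0.0 - sy) * dx)
```

Rotating the gantry moves the source around this circle while the central axis keeps intersecting the same point.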
- the gantry angle corresponds to the position of gantry 206 relative to axis A, although any other axis or combination of axes can be referenced and used to determine the gantry angle.
- Gantry 206 may also have an attached imaging detector 214.
- the imaging detector 214 is preferably located opposite to the radiation source, and in an example, the imaging detector 214 can be located within a field of the radiation beam 208.
- the imaging detector 214 can be mounted on the gantry 206 (preferably opposite the radiation therapy output 204), such as to maintain alignment with the radiation beam 208.
- the imaging detector 214 rotates about the rotational axis as the gantry 206 rotates.
- the imaging detector 214 can be a flat panel detector (e.g., a direct detector or a scintillator detector). In this manner, the imaging detector 214 can be used to monitor the radiation beam 208 or the imaging detector 214 can be used for imaging the patient’s anatomy, such as portal imaging.
- the control circuitry of the radiation therapy device 202 may be integrated within the radiotherapy system 100 or remote from it.
- one or more of the couch 216, the therapy output 204, or the gantry 206 can be automatically positioned, and the therapy output 204 can establish the radiation beam 208 according to a specified dose for a particular therapy delivery instance.
- a sequence of therapy deliveries can be specified according to a radiation therapy treatment plan, such as using one or more different orientations or locations of the gantry 206, couch 216, or therapy output 204.
- the therapy deliveries can occur sequentially, but can intersect in a desired therapy locus on or within the patient, such as at the isocenter 210.
- FIG. 2B illustrates a radiation therapy device 202 that may include a combined LINAC and an imaging system, such as a CT imaging system.
- the radiation therapy device 202 can include an MLC (not shown).
- the CT imaging system can include an imaging X-ray source 218, such as providing X-ray energy in a kiloelectron-Volt (keV) energy range.
- the imaging X-ray source 218 can provide a fan-shaped and/or a conical radiation beam 208 directed to an imaging detector 222, such as a flat panel detector.
- the radiation therapy device 202 can be similar to the system described in relation to FIG. 2A.
- the X-ray source 218 can provide a comparatively-lower-energy X-ray diagnostic beam, for imaging.
- the radiation therapy output 204 and the X-ray source 218 can be mounted on the same rotating gantry 206, rotationally separated from each other by 90 degrees.
- two or more X-ray sources can be mounted along the circumference of the gantry 206, such as each having its own detector arrangement to provide multiple angles of diagnostic imaging concurrently.
- multiple radiation therapy outputs 204 can be provided.
- FIG. 3 depicts a radiation therapy system 300 that can include combining a radiation therapy device 202 and an imaging system, such as a magnetic resonance (MR) imaging system (e.g., known in the art as an MR-LINAC) consistent with the disclosed examples.
- system 300 may include a couch 216, an image acquisition device 320, and a radiation delivery device 330.
- System 300 delivers radiation therapy to a patient in accordance with a radiotherapy treatment plan.
- image acquisition device 320 may correspond to image acquisition device 170 in FIG. 1 that may acquire origin images of a first modality (e.g., an MRI image) or destination images of a second modality (e.g., a CT image).
- Couch 216 may support a patient (not shown) during a treatment session.
- couch 216 may move along a horizontal translation axis (labelled “I”), such that couch 216 can move the patient resting on couch 216 into and/or out of system 300.
- Couch 216 may also rotate around a central vertical axis of rotation, transverse to the translation axis.
- couch 216 may have motors (not shown) enabling the couch 216 to move in various directions and to rotate along various axes.
- a controller (not shown) may control these movements or rotations in order to properly position the patient according to a treatment plan.
- image acquisition device 320 may include an MRI machine used to acquire 2D or 3D MRI images of the patient before, during, and/or after a treatment session.
- Image acquisition device 320 may include a magnet 321 for generating a primary magnetic field for magnetic resonance imaging.
- the magnetic field lines generated by operation of magnet 321 may run substantially parallel to the central translation axis I.
- Magnet 321 may include one or more coils with an axis that runs parallel to the translation axis I.
- the one or more coils in magnet 321 may be spaced such that a central window 323 of magnet 321 is free of coils.
- the coils in magnet 321 may be thin enough or of a reduced density such that they are substantially transparent to radiation of the wavelength generated by radiotherapy device 330.
- Image acquisition device 320 may also include one or more shielding coils, which may generate a magnetic field outside magnet 321 of approximately equal magnitude and opposite polarity in order to cancel or reduce any magnetic field outside of magnet 321.
- radiation source 331 of radiation delivery device 330 may be positioned in the region where the magnetic field is cancelled, at least to a first order, or reduced.
- Image acquisition device 320 may also include two gradient coils 325 and 326, which may generate a gradient magnetic field that is superposed on the primary magnetic field.
- Coils 325 and 326 may generate a gradient in the resultant magnetic field that allows spatial encoding of the protons so that their position can be determined.
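The spatial encoding performed by the gradient coils follows from the Larmor relation: with a gradient G superposed on the main field B0, spins at position x precess at f = γ(B0 + G·x), so the measured frequency offset reveals position. A minimal sketch (function names are hypothetical; the gyromagnetic constant is the standard proton value to three decimals):

```python
GAMMA_MHZ_PER_T = 42.577  # proton gyromagnetic ratio / (2*pi), approx. MHz/T

def larmor_freq_mhz(b0_t, gradient_t_per_m, x_m):
    """Local precession frequency: the gradient makes the field, and hence
    the frequency, a linear function of position along the gradient axis."""
    return GAMMA_MHZ_PER_T * (b0_t + gradient_t_per_m * x_m)

def position_from_freq(freq_mhz, b0_t, gradient_t_per_m):
    """Invert the encoding: recover position from the frequency offset."""
    return (freq_mhz / GAMMA_MHZ_PER_T - b0_t) / gradient_t_per_m

# spins 5 cm from isocenter in a 1.5 T field with a 10 mT/m gradient
f = larmor_freq_mhz(b0_t=1.5, gradient_t_per_m=0.01, x_m=0.05)
x = position_from_freq(f, b0_t=1.5, gradient_t_per_m=0.01)
```

This one-axis picture is what coils 325 and 326 provide along their gradient direction; full 3D localization combines gradients along multiple axes.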
- Gradient coils 325 and 326 may be positioned around a common central axis with the magnet 321 and may be displaced along that central axis. The displacement may create a gap, or window, between coils 325 and 326.
- magnet 321 can also include a central window 323 between coils, the two windows may be aligned with each other.
- image acquisition device 320 may be an imaging device other than an MRI, such as an X-ray, CT, CBCT, spiral CT, PET, SPECT, optical tomography, fluorescence imaging, ultrasound imaging, or radiotherapy portal imaging device, or the like.
- Radiation delivery device 330 may include the radiation source 331, such as an X-ray source or a LINAC, and an MLC 332. Radiation delivery device 330 may be mounted on a chassis 335.
- chassis motors may rotate the chassis 335 around the couch 216 when the couch 216 is inserted into the treatment area.
- the chassis 335 may be continuously rotatable around the couch 216, when the couch 216 is inserted into the treatment area.
- Chassis 335 may also have an attached radiation detector (not shown), preferably located opposite to radiation source 331 and with the rotational axis of the chassis 335 positioned between the radiation source 331 and the detector.
- the device 330 may include control circuitry (not shown) used to control, for example, one or more of the couch 216, image acquisition device 320, and radiotherapy device 330. The control circuitry of the radiation delivery device 330 may be integrated within the system 300 or remote from it.
- FIG. 2A, FIG. 2B, and FIG. 3 generally illustrate examples of a radiation therapy device configured to provide radiotherapy treatment to a patient, including a configuration where a radiation therapy output can be rotated around a central axis (e.g., an axis “A”). Other radiation therapy output configurations can be used.
- a radiation therapy output can be mounted to a robotic arm or manipulator having multiple degrees of freedom.
- the therapy output can be fixed, such as located in a region laterally separated from the patient, and a platform supporting the patient can be used to align a radiation therapy isocenter with a specified target locus within the patient.
- radiotherapy treatment techniques involve an estimation of the relative motion of a specific object contained in a specified region of interest, relative to a reference volume which contains auxiliary information such as contoured regions of interest or the dose plan.
- FIG. 4 provides a high-level view of APM operations.
- the goal of APM is to produce a real-time relative motion estimation 440 of an object contained in a region of interest, relative to its position in a known 3D reference space.
- the relative motion estimation 440 then can be used to adjust the radiotherapy treatment and cause radiotherapy treatment changes 450 that are directed to one or more regions of interest within the 3D reference space. It will be understood that a variety of techniques for adjusting or modifying the location, type, amount, or characteristics of radiotherapy treatment based on motion may be utilized, based upon the identification of the anatomic position and an estimate of relative motion.
- the operations in FIG. 4, in more detail, illustrate how reference information 410 for a human subject may be correlated to movement changes that are identified from real-time information 420 for the human subject.
- the reference information 410 may include imaging data from a 3D reference volume 412 (e.g., produced from an MRI or CT scan), and a definition of a region of interest 414 (e.g., a mask or area defining a target organ, a target tumor or both).
- the real-time information 420 may include 2D imaging data 422 (e.g., produced from 2D MR images or kV projection imaging), collected over time from a single or multiple orientations (e.g., a first image captured at a coronal plane, and a second image captured at a sagittal plane).
- Other forms of real-time information 420 may include position monitoring signals (e.g., a signal from a breathing belt, sensor data, etc.) captured from observed patient body movement.
- Based on input data of the 3D reference volume 412, an accompanying tracking region of interest 414, and real-time information 420 (e.g., instantaneous, ongoing) relating to the patient (e.g., 2D imaging data 422 captured on an ongoing basis), an APM algorithm 430 analyzes the real-time information 420 to determine movement relative to the reference information 410.
- the APM algorithm 430 may be provided by a trained machine learning model 435, such as a trained regression model or other artificial intelligence algorithm implementation, which estimates motion in a 3D space based on analysis of the real-time information 420.
- the APM algorithm 430 uses the trained model 435 to generate a relative motion estimation 440 in the form of transformation parameters that describe the motion of the tracked region relative to the reference volume 412.
- the relative motion estimation 440 may be processed to produce radiotherapy treatment changes 450 that dynamically gate the radiotherapy beam (e.g., turn the beam on or off in real-time), or dynamically effect a change in direction, shape, position, intensity, amount, or type of a beam in the radiotherapy treatment.
- radiotherapy treatment changes 450 may include control of a radiotherapy beam, such as starting or stopping radiotherapy treatment output, or turning a radiotherapy beam on or off, based on movement caused by patient breathing.
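The gating behavior described above can be sketched as a simple threshold rule. This is an illustrative sketch only: the threshold value, the function name, and the representation of the motion estimate as a 3D translation vector are assumptions for demonstration, not details taken from this description.

```python
import numpy as np

# Hypothetical gating rule: hold the beam whenever the estimated
# displacement of the tracked region exceeds a clinical threshold.
GATING_THRESHOLD_MM = 3.0  # illustrative value, not from the source

def gate_beam(motion_mm: np.ndarray, threshold_mm: float = GATING_THRESHOLD_MM) -> bool:
    """Return True if the beam may stay on for this motion estimate.

    motion_mm: estimated (x, y, z) translation of the tracked region
    relative to the reference volume, in millimetres.
    """
    displacement = float(np.linalg.norm(motion_mm))
    return displacement <= threshold_mm

# Small breathing drift keeps the beam on; a large shift gates it off.
print(gate_beam(np.array([0.5, 1.0, 0.2])))   # within threshold
print(gate_beam(np.array([4.0, 2.0, 1.5])))   # exceeds threshold
```

In practice the same comparison could also drive dynamic re-shaping or re-targeting of the beam rather than a binary on/off decision.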
- the following paragraphs provide examples of a treatment workflow adapted for performing APM 430 with use of the trained model 435, including a specific example of a regression model which can analyze individual 2D images captured in real-time during a radiotherapy treatment session.
- the following paragraphs also provide examples of a training workflow adapted for developing the trained model 435.
- the following treatment workflow process may be performed and repeated many times (e.g., on an ongoing, real-time basis, to monitor for patient movement) as part of a radiotherapy treatment session for a single patient.
- the following training workflow process may be performed a single time or multiple times (e.g., a single time in an offline training setting, although the training workflow may be modified for online training as additional reference information is obtained).
- FIG. 5 provides a high-level illustration of a treatment workflow for performing APM 430, using results of a trained machine learning regression model 540.
- This treatment workflow includes the capture and processing of real-time data in the form of multiple real-time 2D images, feature extraction from the multiple real-time 2D images, and analysis of the extracted features with the machine learning regression model 540.
- the machine learning regression model 540 is trained to produce a data output, in the form of spatial transformation parameters which describe relative motion estimation 550.
- the real-time data includes 2D images captured in real-time from a patient, using two different planes of acquisition (2D image 501 captured at a first orientation or plane, and 2D image 502 captured at a second orientation or plane). Using these two 2D images of respective orientations, feature extraction is performed on each image independently, including feature extraction operations 511 within the region of interest performed on the first orientation image and feature extraction operations 512 within the region of interest performed on the second orientation image.
- Although images from multiple planes of acquisition are illustrated in this example, it will be understood that the techniques are also applicable to one or more images obtained from one plane of acquisition, or to one or more images obtained from more than two planes of acquisition.
- the real-time acquisition of the 2D images 501, 502, each acquired from a different plane of acquisition (such as the coronal and sagittal planes), is followed by separate instances of feature extraction.
- feature extraction is performed on each image independently with extraction operations 511, 512.
- Feature extraction may involve performing image processing steps (not shown in FIG. 5), such as deformable image registration, and may additionally be followed by dimensionality reduction techniques (such as principal component analysis) for algorithmic and/or computational efficiency (e.g., to reduce the number of features before regression analysis).
- fiducial positions could be extracted and used as features.
- feature extraction may be performed within and/or extracted from a limited region of interest (ROI) provided alongside the 3D reference volume.
- Other types of feature identification and extraction may be used.
- the extracted features are concatenated or combined into multi-orientation features 530 (e.g., a multi-dimensional vector, representing features in multiple orientations).
- the machine learning regression model 540 analyzes the multi-orientation features 530 as input, to estimate the motion relative to some reference (e.g., relative to 3D reference volume of the patient, provided in reference information 410 used to train the model 540).
- the output of the machine learning regression model 540 may include estimated spatial transformation parameters which represent relative motion estimation 550 (relative to the anatomy depicted in the 3D reference volume, indicating motion provided from translation and/or rotation in the three dimensions).
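The per-plane extraction, concatenation, and regression steps above can be sketched as follows. The feature extractor here (intensity statistics and centroids) is a stand-in for the registration-based features the workflow describes, and the linear regressor weights `W`, `b` are assumed to have been produced by a prior training step; all names are illustrative.

```python
import numpy as np

def extract_features(image_2d: np.ndarray) -> np.ndarray:
    """Stand-in per-plane feature extractor (the actual workflow would use
    deformable registration followed by PCA): mean intensity plus the
    intensity centroid along each image axis."""
    total = image_2d.sum() + 1e-12
    ys, xs = np.indices(image_2d.shape)
    return np.array([
        image_2d.mean(),
        (ys * image_2d).sum() / total,   # centroid, row axis
        (xs * image_2d).sum() / total,   # centroid, column axis
    ])

def estimate_motion(img_coronal, img_sagittal, W, b):
    """Concatenate per-plane features into one multi-orientation vector
    and apply a (hypothetical, pretrained) linear regressor to produce a
    3D translation estimate relative to the reference volume."""
    features = np.concatenate([extract_features(img_coronal),
                               extract_features(img_sagittal)])
    return W @ features + b

# Toy usage with arbitrary regressor weights (illustrative only).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 6)) * 0.1
b = np.zeros(3)
motion = estimate_motion(rng.random((8, 8)), rng.random((8, 8)), W, b)
print(motion.shape)  # (3,) -> x, y, z translation estimate
```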
- the workflow referenced within FIG. 5 can be performed independently for different structures, using different regions of interest for the tumors and/or organs at risk.
- different features may be extracted for different anatomical structures; likewise, different trained regression models may be trained and used to analyze motion of different anatomical structures.
- 2D images with different planes of acquisition may be acquired sequentially and not in parallel at the same time, which may result in relative shifts in the observed anatomy between the 2D images acquired in different planes.
- this discrepancy may be ignored if it is sufficiently small.
- suitable prediction algorithms can be used to synchronize (in time) the content of multiple 2D feature vectors.
- feature synchronization 532 may be applied using a long short-term memory (LSTM) model, to forecast the features describing the contents of one imaging plane such that it coincides with features of the other imaging plane (e.g., obtained 200 ms later).
- Such feature synchronization 532 may yield a synchronized multi-orientation feature vector that is used in the set of multi-orientation features 530 and provided as input to the model 540.
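The synchronization step can be illustrated with a minimal forecasting sketch. In place of the LSTM model mentioned above, this example uses simple linear extrapolation from the two most recent feature samples; the function name, the 200 ms step, and the single-feature history are assumptions for demonstration.

```python
import numpy as np

def forecast_features(history: np.ndarray, dt_ahead: float, dt_step: float) -> np.ndarray:
    """Forecast a feature vector a short interval ahead, so that features
    from two sequentially acquired imaging planes refer to the same
    instant in time.

    history: array of shape (n_samples, n_features), ordered in time
    with a fixed sampling interval dt_step.
    """
    velocity = (history[-1] - history[-2]) / dt_step  # per-feature rate of change
    return history[-1] + velocity * dt_ahead

# A feature drifting by 1.0 per 200 ms step is extrapolated 200 ms ahead
# to coincide with the other plane's acquisition time.
hist = np.array([[0.0], [1.0], [2.0]])
print(forecast_features(hist, dt_ahead=0.2, dt_step=0.2))  # → [3.]
```

An LSTM would replace the two-point extrapolation with a learned recurrence over a longer feature history, which is better suited to quasi-periodic breathing motion.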
- a training data pair 660 includes (1) a known set of spatial transformation parameters 625 (defining a transformation 620 applied to reference information 610 including a reference volume 612 and region of interest 614), and (2) a set of multi-orientation features 650 summarizing the joint appearance of the patient anatomy as observed from the two 2D images 631, 632 given the known set of spatial transformation parameters 625.
- This process is repeated to obtain the training data set 670 (a collection of training pairs), from which a machine learning regressor will be trained with a training process 680.
- This training process 680 enables the model to analyze a multi-orientation feature set (as model input) and generate a corresponding motion parameter set (as model output).
- image data in the 3D reference volume 612 (e.g., image data which includes and designates the region of interest 614) is then resampled to the particular specifications of each plane of 2D image acquisition (“sliced”), yielding two 2D images 631, 632 in different orientations.
- a particular instance of motion may be parameterized by a particular translation vector, a rigid or affine transformation, or even a full deformation vector field.
- the training workflow is also compatible with a wide range of choices concerning the particular machine learning algorithm used for regression (e.g., both linear and non-linear models).
- the local motion of the tracked structure is characterized using a 3D translation vector (e.g., providing the x-, y- and z- components of the translation relative to the 3D reference volume 612).
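The transform-then-slice step that generates a training observation can be sketched as below. For simplicity this uses an integer-voxel translation (via `np.roll`) as the known motion, and takes the central sagittal and coronal planes as the two slice specifications; the toy volume, axis conventions, and function names are assumptions.

```python
import numpy as np

def translate_volume(vol: np.ndarray, shift_vox: tuple) -> np.ndarray:
    """Apply an integer-voxel 3D translation (a minimal stand-in for the
    rigid, affine, or deformable transforms the workflow supports)."""
    return np.roll(vol, shift=shift_vox, axis=(0, 1, 2))

def slice_volume(vol: np.ndarray):
    """Resample ("slice") the transformed volume into two orthogonal 2D
    images; here, the central sagittal and coronal planes of a (z, y, x)
    volume."""
    z, y, x = vol.shape
    sagittal = vol[:, :, x // 2]
    coronal = vol[:, y // 2, :]
    return sagittal, coronal

# Build one synthetic training observation from a known translation.
vol = np.zeros((8, 8, 8)); vol[4, 4, 4] = 1.0  # toy reference volume
moved = translate_volume(vol, (1, 0, 0))        # known motion: +1 voxel in z
sag, cor = slice_volume(moved)
print(np.argwhere(sag == 1.0))  # bright voxel now at row 5 of the sagittal slice
```

Pairing the known shift `(1, 0, 0)` with features extracted from `sag` and `cor` yields one training pair of the kind described above.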
- the 2D image is deformably registered to a common 2D target image, followed by principal component analysis (PCA) on the resultant 2D deformation vector field (DVF) within the provided in-plane 2D region of interest to extract a minimal set of informative features.
- the 2D target images (e.g., images 631, 632), which serve as the target images for each slice orientation during 2D deformable registration, can be obtained by slicing the 3D reference volume 612 using the specifications of each imaging plane of acquisition.
- two PCA models may be used, one per imaging plane of acquisition.
- FIG. 7 depicts additional detail of feature extraction using deformable registration and principal component analysis (operation 640 corresponding to operations 641, 642 depicted in FIG. 6). Given an input 2D image 630, the 2D image 630 is deformably registered 643 to its corresponding 2D target image (e.g., registered to an image in the same imaging plane of acquisition).
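The PCA stage of this feature extraction can be sketched as follows, with one PCA model fitted per imaging plane on flattened 2D deformation vector fields (DVFs). The DVF samples here are synthetic and the function names are illustrative; in the workflow, each sample would come from deformably registering a sliced training image to its 2D target image.

```python
import numpy as np

def fit_pca(dvf_samples: np.ndarray, n_components: int):
    """Fit a per-plane PCA model on flattened 2D DVFs (one row per
    training DVF), via SVD on the mean-centered data."""
    mean = dvf_samples.mean(axis=0)
    _, _, vt = np.linalg.svd(dvf_samples - mean, full_matrices=False)
    return mean, vt[:n_components]

def pca_features(dvf: np.ndarray, mean: np.ndarray, components: np.ndarray) -> np.ndarray:
    """Project one flattened DVF onto the leading principal components
    to obtain a minimal set of informative features."""
    return components @ (dvf - mean)

# Toy data: 10 samples of a flattened 2 x 4 x 4 deformation field
# (x/y displacement per pixel -> 32 values per sample).
rng = np.random.default_rng(1)
samples = rng.normal(size=(10, 32))
mean, comps = fit_pca(samples, n_components=3)
feats = pca_features(samples[0], mean, comps)
print(feats.shape)  # (3,)
```

Two such models (one per plane of acquisition) produce the per-plane feature sets that are then concatenated into the multi-orientation feature vector.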
- the present technique may be adapted to support differences in image contrast between the reference volume and the real- time images.
- the 3D reference volume that is obtained at the time of radiotherapy treatment may be acquired using any one of a multitude of MR pulse acquisition sequences, e.g., T1-weighted, T2-weighted, proton density or contrast-agent enhanced images, depending on the specific clinical requirements.
- the set of 2D images used for training may have different characteristics compared to the instantaneous 2D images acquired throughout treatment, and a naïve application of the treatment workflow (e.g., portrayed in FIG. 5) may result in poor tracking.
- one of the following approaches may be applied.
- robust contrast-invariant registration algorithms can be used for feature extraction (e.g., feature extraction operations 640, 641, 642 as discussed within FIGS. 6 and 7).
- an intermediate 3D reference volume with the same contrast as the instantaneous 2D images, can be acquired. This intermediate reference volume can then be registered to the primary 3D reference volume (either automatically using a registration algorithm of choice, or with user guidance), and subsequently used in the training and test workflows.
- a preparation step that includes the acquisition of instantaneous 2D images can be used to create 2D template images with the same contrast as those acquired during the test-time workflow.
- standard mono-contrast deformable registration can be used for feature extraction.
- Such 2D templates may be created using a variety of template-building approaches.
- the primary 3D reference volume can be registered (either automatically using a registration algorithm of choice, or with user guidance) to the two 2D templates, yielding an offset and an updated 3D reference volume and ROI (as discussed below with reference to FIG. 8). Then, during the training workflow, the updated 3D reference volume and ROI can be used. During the treatment workflow, the same-contrast 2D templates can be used as the registration targets for feature extraction. Finally, the concatenation of the previously estimated offset with the relative motion estimates (i.e. the output of the machine learning regressor) yields the desired motion estimates, i.e. relative to the primary (non-updated) reference volume.
- FIG. 8 depicts a corrective procedure using registration for feature extraction, to account for possible offsets between the 3D reference volume 810 and 2D templates 822 having the same contrast.
- the 3D reference volume 810 is (automatically or with user guidance) registered 820 to the two same-contrast 2D templates, yielding an offset 830 that is resampled 840 into an updated 3D reference volume 850.
- the updated 3D reference volume 850 is then used in a training workflow 860 (e.g., the training workflow discussed with reference to FIG. 6).
- During a treatment workflow 870 (e.g., the treatment workflow discussed with reference to FIG. 5), the same-contrast 2D templates 822 can be used as registration targets for feature extraction from the 2D images.
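The final concatenation step of this corrective procedure — combining the previously estimated offset with the regressor's relative motion estimate — reduces, for pure translations, to a vector sum. The function name and example values below are illustrative.

```python
import numpy as np

def compose_motion(offset_mm: np.ndarray, relative_mm: np.ndarray) -> np.ndarray:
    """Concatenate the previously estimated offset (primary reference
    volume -> updated reference volume) with the regressor's relative
    motion estimate, yielding motion relative to the primary
    (non-updated) reference volume. For pure translations, this
    composition is a vector sum; for rigid or affine motion it would be
    a composition of the corresponding transforms."""
    return offset_mm + relative_mm

offset = np.array([1.0, -0.5, 0.0])     # from registering the 3D volume to the 2D templates
relative = np.array([0.2, 0.1, -0.3])   # regressor output during treatment
print(compose_motion(offset, relative))  # vector sum: offset + relative
```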
- FIG. 9 illustrates a regression machine learning workflow 900 for use in estimating patient motion during a radiotherapy session.
- the machine learning workflow 900 includes a training workflow 901 and an estimation workflow 911 to perform training and estimation operations, respectively.
- the workflow 900 provides another view of data processing occurring with the training and treatment aspects depicted in FIG. 8. It will be understood that the training workflow 901 may incorporate the training aspects discussed with FIGS. 6 and 7, above, and the estimation workflow may incorporate the motion estimation aspects discussed with FIG. 5, above.
- training engine 904 generates training inputs from transformed image data (e.g., motion-transformed reference image data 902), to produce features 908 for training.
- Feature transformation and determination 906 determines one or more image and motion features 908 from the reference data input, such as with use of the transformation workflow depicted in FIG. 6.
- the image and motion features 908 provide a set of the information input and include information determined to be indicative of a particular outcome.
- the machine learning algorithm 910 (e.g., a regression algorithm) produces a trained model 920 (e.g., a regression model).
- the regression model 920 thus learns the relationship between features of the simulated image data (2D image(s)) and the relative motion parameters (relative to a 3D reference volume).
- Given newly captured data 912 (e.g., a 2D image of a patient captured in real time), the estimation engine 914 operates to identify a region of interest (if applicable) and use a feature determination engine 916 to determine image features of the newly captured data 912 that are relevant to a corresponding patient state.
- the feature determination engine 916 produces image features 918, which are input into the regression model 920.
- the training workflow 901 may operate in an offline manner to train the regression model 920, such that weights of the regression model 920 are learned during training and fixed. Then, during the estimation workflow 911, the image features 918 are input into the trained regression model 920, which internally uses the fixed weights to produce the motion estimation 930.
- the estimation engine 914 may be designed to operate in an online manner. It should be noted that the regression model 920 may be periodically updated via additional training or user feedback (e.g., additional, changed, or removed measurements or patient states).
- the machine learning algorithm 910 may be selected from among many different potential supervised machine learning algorithms.
- supervised learning algorithms include artificial neural networks, Bayesian networks, instance-based learning, support vector machines, decision trees (e.g., Iterative Dichotomiser 3, C4.5, Classification and Regression Tree (CART), Chi-squared Automatic Interaction Detector (CHAID), and the like), random forests, linear classifiers, quadratic classifiers, k-nearest neighbor, linear regression, logistic regression, and hidden Markov models.
- a representation of the regression model is illustrated in block 922, showing an example linear regression. If a linear regressor is used, the model parameters (e.g., weights or coefficients) represent the importance of each of the corresponding features.
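The linear-regressor case can be sketched in a few lines: fit weights that map multi-orientation features to motion parameters by least squares. The data below is synthetic and the "true" weights are an assumption for demonstration; in the workflow, `X` would hold feature vectors from simulated 2D images and `Y` the known transformation parameters.

```python
import numpy as np

# Synthetic ground truth: 3 motion components from 2 features.
rng = np.random.default_rng(2)
true_W = np.array([[2.0, 0.0],
                   [0.0, -1.0],
                   [0.5, 0.5]])          # (3 motion params x 2 features)
X = rng.normal(size=(50, 2))             # feature vectors (training input)
Y = X @ true_W.T                         # known motion parameters (training output)

# Least-squares fit of the linear regression weights.
W_fit, *_ = np.linalg.lstsq(X, Y, rcond=None)

# The fitted coefficients recover the true weights; their magnitudes
# indicate each feature's importance to each motion component.
print(np.round(W_fit.T, 3))
```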
- the machine learning algorithm 910 trains the model 920 as described herein, based on how motion represented by image transformations corresponds to image data.
- the machine learning algorithm 910 implements a regression problem (e.g., linear, polynomial, regression trees, kernel density estimation, support vector regression, random forests implementations, or the like).
- the resulting training parameters define the regression model (a generator) as a correspondence motion model for the chosen machine learning algorithm.
- this training may be performed separately for every possible gantry angle (e.g., with a one degree increment), since x-ray acquisition orientation may be constrained to an orthogonal angle with respect to the treatment beam.
- control may be given to a clinician over the position or orientation of the 2D acquisition plane. Repeating cross-validation on training data with different choices of 2D planes can reveal which 2D planes yield the best surrogate information for a given patient/tumor site.
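The plane-selection idea above can be sketched with a small cross-validation routine that scores how well each candidate plane's features predict the target motion. The synthetic data, fold scheme, and function names are assumptions for illustration: plane "A" is constructed to be an informative surrogate and plane "B" to be uninformative.

```python
import numpy as np

def cv_score(features: np.ndarray, targets: np.ndarray, n_folds: int = 5) -> float:
    """Mean held-out squared error of a linear regressor, used to compare
    how well features from a candidate 2D plane predict motion."""
    idx = np.arange(len(features))
    errors = []
    for fold in range(n_folds):
        test = idx % n_folds == fold
        W, *_ = np.linalg.lstsq(features[~test], targets[~test], rcond=None)
        errors.append(np.mean((features[test] @ W - targets[test]) ** 2))
    return float(np.mean(errors))

rng = np.random.default_rng(3)
motion = rng.normal(size=(40, 1))                    # target motion parameter
plane_a = motion + 0.01 * rng.normal(size=(40, 1))   # good surrogate features
plane_b = rng.normal(size=(40, 1))                   # uninformative features
best = min(["A", "B"], key=lambda p: cv_score({"A": plane_a, "B": plane_b}[p], motion))
print(best)  # → A
```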
- FIG. 10 illustrates a flowchart 1000 of a method of training a regression machine learning model for generating estimated motion in a region of interest, incorporating the techniques discussed above. For instance, the following features of flowchart 1000 may be integrated or adapted with the training discussed with reference to FIG. 6.
- Operation 1010 includes obtaining three-dimensional image data corresponding to a human subject for radiotherapy treatment (e.g., the image data including the reference volume and at least one region of interest to track).
- a reference volume represents the patient anatomy in three dimensions, and the at least one region of interest is defined within the three dimensions.
- Operation 1020 follows, which includes identifying image transformation parameters defining a spatial transformation (e.g., rotation and/or translation).
- the spatial transformation is applied to the reference volume (imaging data).
- This is followed by operation 1040, which includes performing slicing on the transformed reference volume and region of interest, to produce two-dimensional synthetic images for training.
- Operation 1050 follows with extracting respective sets of features from the two-dimensional synthetic images.
- the feature extraction includes generating multi-orientation feature vectors, based on the extracted sets of features.
- Operation 1060 includes training a machine learning regression model with the pairs of image transformation parameters and corresponding features (e.g., pairs of multi-orientation feature vectors and corresponding spatial transformations, that were obtained from the two-dimensional synthetic images). Operations 1020-1050 are repeated, as necessary, for generating a set of training data which can be used to train (or fit) the regressor model.
- Operation 1070 concludes the flowchart 1000 by providing a trained machine learning regression model for use with a radiotherapy treatment session, such as is discussed with reference to the model usage examples herein.
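The training operations above can be combined into one end-to-end sketch: repeatedly apply a random known translation to a reference volume, project it to two orientations, extract features, and fit a linear regressor mapping features to the motion parameters. The toy volume, the centroid-based features (used in place of registration/PCA features), and the bias term are assumptions for illustration.

```python
import numpy as np

def centroid_features(img: np.ndarray) -> np.ndarray:
    """Intensity centroid of a 2D projection (stand-in feature extractor)."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return np.array([(ys * img).sum() / total, (xs * img).sum() / total])

def multi_orientation_features(vol: np.ndarray) -> np.ndarray:
    """Summarize two orthogonal 2D projections of a (z, y, x) volume by
    their centroids, plus a bias term, as one feature vector."""
    axial = vol.sum(axis=0)     # (y, x) projection
    coronal = vol.sum(axis=1)   # (z, x) projection
    return np.concatenate([centroid_features(axial), centroid_features(coronal), [1.0]])

rng = np.random.default_rng(4)
ref = np.zeros((9, 9, 9)); ref[4, 4, 4] = 1.0        # toy reference volume
X, Y = [], []
for _ in range(30):
    shift = rng.integers(-2, 3, size=3)               # known motion parameters (op 1020)
    moved = np.roll(ref, shift, axis=(0, 1, 2))       # apply the transform (op 1030)
    X.append(multi_orientation_features(moved))       # slice + extract features (ops 1040-1050)
    Y.append(shift)
W, *_ = np.linalg.lstsq(np.array(X), np.array(Y, float), rcond=None)  # fit regressor (op 1060)

# The trained regressor recovers an unseen applied shift.
test_vol = np.roll(ref, (1, -2, 0), axis=(0, 1, 2))
print(np.round(multi_orientation_features(test_vol) @ W))  # approximately (1, -2, 0)
```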
- Operation 1110 begins with obtaining three-dimensional image data corresponding to a human subject, at a tracking region of interest (prior to radiotherapy session). This is followed by operation 1120, involving training a machine learning regression model based on the three-dimensional image data corresponding to the subject. For instance, operations 1110, 1120 may be expanded into further training actions as depicted with reference to flowchart 1000 or the training functions in FIG. 6.
- Operation 1130 includes obtaining real-time, two-dimensional image data corresponding to the subject, captured on an ongoing basis during a radiotherapy session.
- the two-dimensional image data may capture at least a portion of the region of interest, and may include a first two-dimensional image captured at a first orientation and a second two-dimensional image captured at a second orientation (with additional orientations and images also possible).
- the first two-dimensional image is captured at a first time during the radiotherapy treatment session and the second two-dimensional image is captured at a second time during the radiotherapy treatment session (e.g., within 300 milliseconds, or according to another time duration which enables real-time motion processing).
- Operation 1140 includes converting two-dimensional image data to match a contrast of three-dimensional image data. For instance, this may incorporate the features of FIG. 8 or the accompanying examples, which discuss techniques applicable where the three-dimensional reference volume is acquired with a first MR pulse acquisition sequence, but the two-dimensional image data is acquired with a second, different MR pulse acquisition sequence.
- Operation 1150 includes extraction of the features from the real-time, two-dimensional image data. In an example, an extracted first set of features from a first image and a second set of features from a second image are combined into a multi-dimensional feature vector. The features may be extracted within a region of interest or other designated areas of the image(s).
- Operation 1160 includes analysis of extracted features with the trained machine learning regression model (e.g., trained in operation 1120), that has been trained to estimate transformation parameters describing the relative motion of the region of interest. This relative motion is relative to the region of interest imaged in the original three-dimensional image data.
- the trained machine learning regression model may accept the multi-dimensional feature vector as input, and produce values indicating a spatial transformation of the extracted features as output.
- Operation 1170 provides the output from the trained machine learning regression model, the output indicating a relative motion estimation of the region of interest in the anatomy of the human subject.
- FIG. 12 is a flowchart 1200 illustrating example operations for performing training and treatment workflows (including those depicted among FIGS. 4 to 11), according to various examples. These operations may be implemented at processing hardware of the image processing computing system 110, for instance.
- image processing computing system 110 obtains (or captures, or causes an imaging modality to capture) three-dimensional image data, including radiotherapy constraints and targets, corresponding to a human subject. As discussed above, this may be obtained prior to radiotherapy treatment, and include a three-dimensional magnetic resonance (MR) volume or a three-dimensional computed tomography (CT) volume.
- the image processing computing system 110 obtains (or captures, or causes an imaging modality to capture) two-dimensional image data, on an ongoing basis, to capture movement of the subject with multi-orientation images.
- the real-time two-dimensional imaging data is pre-processed for use with the model, such as to extract features from multi-orientation two-dimensional images.
- the image processing computing system 110 uses a trained regression model (trained such as discussed with reference to FIG. 10) to estimate spatial transformation from extracted features, and generate estimated real-time movement (such as discussed with reference to FIG. 11).
- the image processing computing system 110 identifies a movement state of subject, based on the estimated real-time movement.
- image processing computing system 110 directs or controls radiation therapy, using a treatment machine, to the radiation therapy target according to the identified movement state. It will be understood that a variety of existing approaches for modifying or adapting radiotherapy treatment may occur based on the controlled therapy or identified movement state, once correctly estimated.
- FIG. 13 illustrates a block diagram of an example of a machine 1300 on which one or more of the methods as discussed herein can be implemented. In one or more examples, one or more items of the image processing computing system 110 can be implemented by the machine 1300.
- the machine 1300 operates as a standalone device or may be connected (e.g., networked) to other machines.
- the image processing computing system 110 can include one or more of the items of the machine 1300.
- the machine 1300 may operate in the capacity of a server or a client machine in server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
- the machine may be a personal computer (PC), server, a tablet, smartphone, a web appliance, edge computing device, a network router, switch or bridge, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine.
- the example machine 1300 includes processing circuitry or processor 1302 (e.g., a CPU, a graphics processing unit (GPU), an ASIC, circuitry, such as one or more transistors, resistors, capacitors, inductors, diodes, logic gates, multiplexers, buffers, modulators, demodulators, radios (e.g., transmit or receive radios or transceivers), sensors 1321 (e.g., a transducer that converts one form of energy (e.g., light, heat, electrical, mechanical, or other energy) to another form of energy), or the like, or a combination thereof), a main memory 1304 and a static memory 1306, which communicate with each other via a bus 1308.
- the machine 1300 may further include a video display device 1310 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)).
- the machine 1300 also includes an alphanumeric input device 1312 (e.g., a keyboard), a user interface (UI) navigation device 1314 (e.g., a mouse), a disk drive or mass storage unit 1316, a signal generation device 1318 (e.g., a speaker), and a network interface device 1320.
- the disk drive unit 1316 includes a machine-readable medium 1322 on which is stored one or more sets of instructions and data structures (e.g., software) 1324 embodying or utilized by any one or more of the methodologies or functions described herein.
- the instructions 1324 may also reside, completely or at least partially, within the main memory 1304 and/or within the processor 1302 during execution thereof by the machine 1300, the main memory 1304 and the processor 1302 also constituting machine-readable media.
- the machine 1300 as illustrated includes an output controller 1328.
- the output controller 1328 manages data flow to/from the machine 1300.
- the output controller 1328 is sometimes called a device controller, with software that directly interacts with the output controller 1328 being called a device driver.
- While the machine-readable medium 1322 is shown in an example to be a single medium, the term “machine-readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more instructions or data structures.
- the term “machine-readable medium” shall also be taken to include any tangible medium that is capable of storing, encoding or carrying instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure, or that is capable of storing, encoding or carrying data structures utilized by or associated with such instructions.
- the term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media.
- machine-readable media include non-volatile memory, including by way of example semiconductor memory devices, e.g., Erasable Programmable Read-Only Memory (EPROM), EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
- the instructions 1324 may further be transmitted or received over a communications network 1326 using a transmission medium.
- the instructions 1324 may be transmitted using the network interface device 1320 and any one of a number of well-known transfer protocols.
- Examples of communication networks include a LAN, a WAN, the Internet, mobile telephone networks, Plain Old Telephone (POTS) networks, and wireless data networks (e.g., Wi-Fi and 4G/5G data networks).
- the term "transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding or carrying instructions for execution by the machine, and includes digital or analog communications signals or other intangible media to facilitate communication of such software.
- the usage in the incorporated reference(s) should be considered supplementary to that of this document; for irreconcilable inconsistencies, the usage in this document controls.
- the terms “a,” “an,” “the,” and “said” are used when introducing elements of aspects of the disclosure or in the embodiments thereof, as is common in patent documents, to include one or more than one of the elements, independent of any other instances or usages of “at least one” or “one or more.”
- the term “or” is used to refer to a nonexclusive or, such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated.
- the computer-executable instructions may be organized into one or more computer-executable components or modules. Aspects of the disclosure may be implemented with any number and organization of such components or modules. For example, aspects of the disclosure are not limited to the specific computer-executable instructions or the specific components or modules illustrated in the figures and described herein. Other embodiments of the disclosure may include different computer-executable instructions or components having more or less functionality than illustrated and described herein.
- [0139] Method examples (e.g., operations and functions) described herein can be machine- or computer-implemented at least in part (e.g., implemented as software code or instructions).
- Some examples can include a computer-readable medium or machine-readable medium encoded with instructions operable to configure an electronic device to perform methods as described in the above examples.
- An implementation of such methods can include software code, such as microcode, assembly language code, a higher-level language code, or the like (e.g., “source code”).
- software code can include computer-readable instructions for performing various methods (e.g., “object” or “executable code”).
- the software code may form portions of computer program products.
- Software implementations of the embodiments described herein may be provided via an article of manufacture with the code or instructions stored thereon, or via a method of operating a communication interface to send data via that communication interface (e.g., wirelessly, over the internet, via satellite communications, and the like).
- the software code may be tangibly stored on one or more volatile or non-volatile computer-readable storage media during execution or at other times.
- These computer-readable storage media may include any mechanism that stores information in a form accessible by a machine (e.g., a computing device, electronic system, and the like), such as, but not limited to, floppy disks, hard disks, removable magnetic disks, any form of magnetic disk storage media, CD-ROMs, magneto-optical disks, removable optical disks (e.g., compact disks and digital video disks), flash memory devices, magnetic cassettes, memory cards or sticks (e.g., secure digital cards), RAMs (e.g., CMOS RAM and the like), recordable/non-recordable media (e.g., read-only memories (ROMs)), EPROMs, EEPROMs, or any type of media suitable for storing electronic instructions, and the like.
- Such computer-readable storage medium is coupled to a computer system bus to be accessible by the processor and other parts of the OIS.
- the computer-readable storage medium may have encoded a data structure for treatment planning, wherein the treatment plan may be adaptive.
- the data structure for the computer-readable storage medium may be at least one of a Digital Imaging and Communications in Medicine (DICOM) format, an extended DICOM format, an XML format, and the like.
- DICOM is an international communications standard that defines the format used to transfer medical image-related data between various types of medical equipment.
- DICOM RT refers to the communication standards that are specific to radiation therapy.
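As an illustrative sketch only (not part of the claimed subject matter), the DICOM file layout referenced above is easy to recognize programmatically: per the DICOM file format specification (PS3.10), a conforming file begins with a 128-byte preamble followed by the four ASCII bytes "DICM". The file name used below is hypothetical.

```python
def is_dicom_file(path):
    """Return True if the file carries the DICOM magic number.

    Per DICOM PS3.10, a conforming file begins with a 128-byte
    preamble followed by the four ASCII bytes 'DICM'; the encoded
    data set (e.g., an RT Plan) follows after that prefix.
    """
    with open(path, "rb") as f:
        header = f.read(132)
    return len(header) == 132 and header[128:132] == b"DICM"

# Hypothetical stub: a valid preamble and magic number with no data
# set after it (a real DICOM RT file would continue with encoded
# data elements such as the treatment plan).
with open("stub.dcm", "wb") as f:
    f.write(b"\x00" * 128 + b"DICM")
print(is_dicom_file("stub.dcm"))  # True
```

A full reader would go on to parse the data elements after the magic number; the check above covers only the format-identification step.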
- the method of creating a component or module can be implemented in software, hardware, or a combination thereof.
- a communication interface includes any mechanism that interfaces to any of a hardwired, wireless, optical, and the like, medium to communicate to another device, such as a memory bus interface, a processor bus interface, an Internet connection, a disk controller, and the like.
- the communication interface can be configured by providing configuration parameters and/or sending signals to prepare the communication interface to provide a data signal describing the software content.
- the communication interface can be accessed via one or more commands or signals sent to the communication interface.
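As a minimal sketch only (the function names here are hypothetical, not part of the disclosure), the configure-then-signal pattern described above can be illustrated with a plain TCP socket: binding and listening are the "configuration parameters" step, and the payload sent on accept is the "data signal describing the software content."

```python
import socket
import threading

def start_server(payload):
    """Configure a listening socket (the configuration step) and
    return its port; a background thread sends the payload to the
    first client that connects (the data-signal step)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def handle():
        conn, _ = srv.accept()
        conn.sendall(payload)
        conn.close()
        srv.close()

    threading.Thread(target=handle, daemon=True).start()
    return port

def receive(port):
    """Access the interface via a command/signal: connect and read
    until the peer closes the connection."""
    with socket.create_connection(("127.0.0.1", port)) as cli:
        chunks = []
        while True:
            block = cli.recv(4096)
            if not block:
                break
            chunks.append(block)
    return b"".join(chunks)
```

For example, `receive(start_server(b"software content"))` returns the payload bytes; any real transport (memory bus, disk controller, wireless link) would substitute its own configuration and signaling calls for the socket ones shown here.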
- the present disclosure also relates to a system for performing the operations herein.
- This system may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer.
- the order of execution or performance of the operations in embodiments of the disclosure illustrated and described herein is not essential, unless otherwise specified. That is, the operations may be performed in any order, unless otherwise specified, and embodiments of the disclosure may include additional or fewer operations than those disclosed herein. For example, it is contemplated that executing or performing a particular operation before, contemporaneously with, or after another operation is within the scope of aspects of the disclosure.
Landscapes
- Health & Medical Sciences (AREA)
- Engineering & Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Physics & Mathematics (AREA)
- Biomedical Technology (AREA)
- Pathology (AREA)
- General Health & Medical Sciences (AREA)
- Public Health (AREA)
- Animal Behavior & Ethology (AREA)
- Veterinary Medicine (AREA)
- Medical Informatics (AREA)
- Artificial Intelligence (AREA)
- Theoretical Computer Science (AREA)
- Biophysics (AREA)
- Heart & Thoracic Surgery (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Mathematical Physics (AREA)
- General Physics & Mathematics (AREA)
- Physiology (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- Data Mining & Analysis (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- High Energy & Nuclear Physics (AREA)
- Multimedia (AREA)
- Dentistry (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Fuzzy Systems (AREA)
- Psychiatry (AREA)
- Signal Processing (AREA)
- Radiation-Therapy Devices (AREA)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/302,254 US11679276B2 (en) | 2021-04-28 | 2021-04-28 | Real-time anatomic position monitoring for radiotherapy treatment control |
US17/302,252 US20220347493A1 (en) | 2021-04-28 | 2021-04-28 | Real-time anatomic position monitoring in radiotherapy using machine learning regression |
PCT/US2022/071772 WO2022232749A1 (en) | 2021-04-28 | 2022-04-18 | Real-time anatomic position monitoring for radiotherapy treatment |
Publications (2)
Publication Number | Publication Date |
---|---|
EP4329875A1 true EP4329875A1 (en) | 2024-03-06 |
EP4329875A4 EP4329875A4 (en) | 2024-09-04 |
Family
ID=83847420
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP22796941.7A Pending EP4329875A4 (en) | 2021-04-28 | 2022-04-18 | Real-time anatomic position monitoring for radiotherapy treatment |
Country Status (2)
Country | Link |
---|---|
EP (1) | EP4329875A4 (en) |
WO (1) | WO2022232749A1 (en) |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3932303B2 (en) * | 2005-05-13 | 2007-06-20 | 独立行政法人放射線医学総合研究所 | Organ dynamics quantification method, apparatus, organ position prediction method, apparatus, radiation irradiation method, apparatus, and organ abnormality detection apparatus |
WO2016144914A1 (en) * | 2015-03-06 | 2016-09-15 | Duke University | Systems and methods for automated radiation treatment planning with decision support |
US20170337682A1 (en) * | 2016-05-18 | 2017-11-23 | Siemens Healthcare Gmbh | Method and System for Image Registration Using an Intelligent Artificial Agent |
WO2020077198A1 (en) * | 2018-10-12 | 2020-04-16 | Kineticor, Inc. | Image-based models for real-time biometrics and marker-less motion tracking in imaging applications |
US10835761B2 (en) * | 2018-10-25 | 2020-11-17 | Elekta, Inc. | Real-time patient motion monitoring using a magnetic resonance linear accelerator (MR-LINAC) |
US11103729B2 (en) * | 2019-08-13 | 2021-08-31 | Elekta ltd | Automatic gating with an MR linac |
- 2022
- 2022-04-18 WO PCT/US2022/071772 patent/WO2022232749A1/en active Application Filing
- 2022-04-18 EP EP22796941.7A patent/EP4329875A4/en active Pending
Also Published As
Publication number | Publication date |
---|---|
EP4329875A4 (en) | 2024-09-04 |
WO2022232749A1 (en) | 2022-11-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11491348B2 (en) | Real-time patient motion monitoring using a magnetic resonance linear accelerator (MR-LINAC) | |
US11547874B2 (en) | Machine learning approach to real-time patient motion monitoring | |
JP7307822B2 (en) | Prediction of control points for radiotherapy using projection images | |
US20230302297A1 (en) | Patient imaging for dynamic online adaptive radiotherapy | |
EP4259278A1 (en) | Automatic contour adaptation using neural networks | |
US11679276B2 (en) | Real-time anatomic position monitoring for radiotherapy treatment control | |
US20220347493A1 (en) | Real-time anatomic position monitoring in radiotherapy using machine learning regression | |
EP4101502A1 (en) | Feature-space clustering for physiological cycle classification | |
US11989851B2 (en) | Deformable image registration using deep learning | |
US20230126640A1 (en) | Real-time motion monitoring using deep learning | |
EP4279126A1 (en) | Temporal prediction in anatomic position monitoring using artificial intelligence modeling | |
US20230285776A1 (en) | Dynamic adaptation of radiotherapy treatment plans | |
US20240311956A1 (en) | Quality factor using reconstructed images | |
EP4329875A1 (en) | Real-time anatomic position monitoring for radiotherapy treatment | |
US20240242813A1 (en) | Image quality relative to machine learning data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
17P | Request for examination filed |
Effective date: 20231121 |
AK | Designated contracting states |
Kind code of ref document: A1
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
REG | Reference to a national code |
Ref country code: DE
Ref legal event code: R079
Free format text: PREVIOUS MAIN CLASS: A61N0005100000
Ipc: G06T0007246000
DAV | Request for validation of the european patent (deleted) |
DAX | Request for extension of the european patent (deleted) |
A4 | Supplementary search report drawn up and despatched |
Effective date: 20240805 |
RIC1 | Information provided on ipc code assigned before grant |
Ipc: A61B 5/113 20060101ALI20240730BHEP
Ipc: A61B 5/055 20060101ALI20240730BHEP
Ipc: G06N 20/00 20190101ALI20240730BHEP
Ipc: G06N 3/08 20230101ALI20240730BHEP
Ipc: A61B 34/10 20160101ALI20240730BHEP
Ipc: A61B 5/00 20060101ALI20240730BHEP
Ipc: A61N 5/10 20060101ALI20240730BHEP
Ipc: G06T 7/246 20170101AFI20240730BHEP