WO2022011617A1 - Method and system for synthesizing real-time images using optical body surface motion signals - Google Patents


Info

Publication number
WO2022011617A1
Authority
WO
WIPO (PCT)
Prior art keywords
body surface
image
real-time
optical body surface
Prior art date
Application number
PCT/CN2020/102208
Other languages
English (en)
French (fr)
Inventor
张艺宝
李晨光
黄宇亮
吴昊
刘宏嘉
Original Assignee
北京肿瘤医院(北京大学肿瘤医院)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京肿瘤医院(北京大学肿瘤医院) filed Critical 北京肿瘤医院(北京大学肿瘤医院)
Priority to PCT/CN2020/102208 priority Critical patent/WO2022011617A1/zh
Priority to CN202080001304.3A priority patent/CN112154483A/zh
Priority to US17/037,591 priority patent/US11748927B2/en
Publication of WO2022011617A1 publication Critical patent/WO2022011617A1/zh

Classifications

    • G06T 11/60 — 2D image generation: editing figures and text; combining figures or text
    • G06T 11/005 — 2D image generation, reconstruction from projections: specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
    • A61N 5/1037 — Radiation therapy, treatment planning systems: taking into account the movement of the target, e.g. 4D-image based planning
    • A61N 5/1039 — Radiation therapy, treatment planning systems: using functional images, e.g. PET or MRI
    • A61N 5/1049 — Radiation therapy, monitoring/verifying/controlling systems: verifying the position of the patient with respect to the radiation beam
    • A61N 5/1067 — Radiation therapy, monitoring/verifying/controlling systems: beam adjustment in real time, i.e. during treatment
    • G06N 3/045 — Neural networks: combinations of networks
    • G06N 3/08 — Neural networks: learning methods
    • G06T 7/30 — Image analysis: determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/38 — Image analysis, image registration: registration of image sequences
    • G16H 20/40 — ICT for therapies or health-improving plans: relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
    • G16H 30/20 — ICT for medical images: handling medical images, e.g. DICOM, HL7 or PACS
    • G16H 30/40 — ICT for medical images: processing medical images, e.g. editing
    • G16H 40/67 — ICT for the management or operation of medical equipment or devices: remote operation
    • G16H 50/20 — ICT for medical diagnosis, simulation or data mining: computer-aided diagnosis, e.g. based on medical expert systems
    • G16H 50/50 — ICT for medical diagnosis, simulation or data mining: simulation or modelling of medical disorders
    • G06T 2200/04 — Indexing scheme for image data processing: involving 3D image data
    • G06T 2207/10076 — Image acquisition modality: 4D tomography; time-sequential 3D tomography
    • G06T 2207/20081 — Special algorithmic details: training; learning
    • G06T 2207/20084 — Special algorithmic details: artificial neural networks [ANN]
    • G06T 2207/20221 — Image combination: image fusion; image merging
    • G06T 2210/41 — Indexing scheme for image generation or computer graphics: medical
    • G16H 50/30 — ICT for medical diagnosis, simulation or data mining: calculating health indices; individual health risk assessment

Definitions

  • the present invention relates to the technical field of radiotherapy equipment, and in particular, to a method and system for synthesizing real-time images using optical body surface motion signals.
  • Radiotherapy precisely delivers a lethal radiation dose to the tumor target area and uses advanced technologies such as intensity modulation to form a rapid dose fall-off at the outer edge, so as to kill the tumor while protecting the adjacent normal tissue. Off-target radiation dose is one of the main causes of tumor control failure and vital organ damage.
  • Image-guided radiotherapy (IGRT)
  • Respiratory motion is a major source of off-target dose in thoracoabdominal radiotherapy.
  • The amplitude and frequency of respiratory motion are determined by many factors, such as the patient's gender and age, mental state, related diseases, radiotherapy position-fixation devices, and respiratory-motion management measures. The resulting target deformation and motion trajectories are the main threats to accurate dose delivery in thoracoabdominal radiotherapy. In severe cases, insufficient irradiation of the target area may cause recurrence and metastasis, or normal organs erroneously falling within the high-dose region of the field may suffer serious, even life-threatening, injury.
  • Respiratory motion can change the location and shape of tumors in the chest and abdomen, leading to off-target radiation doses and, in turn, recurrence or damage to normal organs.
  • Planning 4D-CT and 4D-MRI, as well as traditional 4D cone beam CT acquired before treatment, cannot reflect the real-time dynamics of the target area during treatment; external indirect signals such as respiratory gating and optical body surface monitoring cannot directly display the internal anatomy during treatment; real-time fluoroscopic imaging increases radiation risk; and MR-accelerators, which are not yet widespread, have limitations in price and compatibility. Therefore, it is of great scientific significance and clinical value to develop a universal real-time imaging technology based on a standard platform.
  • The main purpose of the present invention is to overcome the above-mentioned defects of the prior art and to provide a method and system for synthesizing real-time images using optical body surface motion signals, so as to solve the problem, unsolved in the prior art, that breathing motion changes the position and shape of thoracoabdominal tumors, causing off-target radiation doses that lead to recurrence or normal organ damage.
  • One of the embodiments of the present invention provides a system for synthesizing real-time images using optical body surface motion signals, including: an acquisition unit for acquiring images of a patient before treatment and real-time optical body surface data during treatment; and a synthesis unit for synthesizing, according to a certain mapping relationship, the acquired pre-treatment images and the real-time optical body surface data into a real-time image synchronized with the changes of the optical body surface motion signal.
  • the acquired image of the patient before treatment is a directly acquired 4D image before treatment, or a 4D image reconstructed from a 3D image obtained before treatment of the patient.
  • The real-time optical body surface data are standardized.
  • A deep learning network model unit embodying the mapping relationship is also included, which takes as input the image of the patient at a certain phase i before treatment and the optical body surface data of phase j during treatment; the network model outputs an image of phase j, and the images of successive phases j are combined to obtain a real-time dynamic four-dimensional image synchronized with the change of the optical body surface motion signal.
  • One of the embodiments of the present invention provides a method for synthesizing a real-time image using an optical body surface motion signal: obtain an image of a patient before treatment and real-time optical body surface data during treatment; the pre-treatment image and the real-time optical body surface data are then synthesized, according to a certain mapping relationship, into a real-time image synchronized with the changes of the optical body surface motion signal.
  • the acquired image of the patient before treatment is a directly acquired 4D image before treatment, or a 4D image reconstructed from a 3D image obtained before treatment of the patient.
  • The real-time optical body surface data are standardized.
  • The mapping relationship is obtained through a deep learning network model: the image of a certain phase i before treatment and the optical body surface data of phase j during treatment are input to the deep learning network model, which outputs the image of phase j; the images of successive phases j are combined to obtain a real-time dynamic four-dimensional image synchronized with the change of the optical body surface motion signal.
  • One of the embodiments of the present invention provides a deep learning network model training system for synthesizing real-time images using optical body surface motion signals, comprising a deep learning network model unit for training a deep learning network model to obtain the mapping relationship between the patient's body surface and in-vivo structural data. The deep learning network model is trained on a four-dimensional image data set; its input is a four-dimensional image of a certain phase i and the optical body surface data or body surface contour data of another phase j, and its output is the image of phase j.
  • One of the embodiments of the present invention provides a deep learning network model training method for synthesizing real-time images using optical body surface motion signals. The deep learning network model is trained on a four-dimensional image data set; its input is the four-dimensional image of a certain phase i and the optical body surface data or body surface contour data of another phase j, and its output is the image of phase j.
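As a hedged sketch of how such phase-i/phase-j training pairs might be enumerated from a 4D dataset (the function name, array shapes, and the use of a standardized 2D height map per phase are assumptions for illustration, not from the patent):

```python
import itertools
import numpy as np

def build_training_pairs(four_d_image, surface_maps):
    """Enumerate (phase-i image, phase-j surface) -> phase-j image pairs.

    four_d_image : array of shape (T, D, H, W), one 3D volume per phase.
    surface_maps : array of shape (T, H, W), one standardized 2D height
                   map per phase (an assumed standardization, see text).
    """
    pairs = []
    T = four_d_image.shape[0]
    for i, j in itertools.product(range(T), range(T)):
        if i == j:
            continue  # the model must learn the cross-phase mapping
        x = (four_d_image[i], surface_maps[j])  # network input
        y = four_d_image[j]                     # network target
        pairs.append((x, y))
    return pairs
```

With T phases this yields T·(T−1) supervised pairs per 4D scan, which is one plausible way to make a single acquisition go further as training data.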
  • The present invention utilizes deep learning technology to mine and analyze the pre-treatment four-dimensional images of a large number of patients (in-vivo motion images such as 4D-CBCT, 4D-CT, and 4D-MR), correlate their characteristics with the optical body surface respiratory motion signals during treatment, and establish dynamic mapping relationships; real-time 4D cone beam CT is then synthesized from the pre-treatment cone beam CT and the synchronous optical body surface motion signals during treatment.
  • In this way, a dynamic mapping model between three-dimensional anatomical image information and optical body surface motion signal characteristics is established.
  • This enables real-time, non-invasive, dynamic visual monitoring of the internal structures of moving target areas in vivo, and provides a real-time monitoring solution for moving target areas with lower economic cost, a wider range of applicable patients, and better compatibility with existing sites and traditional quality control equipment, improving the accuracy and timeliness of radiotherapy for moving target areas and reducing the risks of tumor recurrence and metastasis and radiation damage to normal organs due to off-target dose.
  • FIG. 1 is a schematic diagram of a system for synthesizing real-time images using optical body surface motion signals according to some embodiments of the present invention
  • FIG. 2 is a schematic diagram of a method for synthesizing a real-time image using an optical body surface motion signal according to some embodiments of the present invention
  • FIG. 3 is a two-dimensional height map of an optical body surface signal according to some embodiments of the present invention.
  • FIG. 4 is a three-dimensional body surface Mask diagram of an optical body surface signal according to some embodiments of the present invention.
  • FIG. 5 is the body surface contour reconstructed by the optical body surface system.
  • FIG. 6 is the outer contour of the body surface delineated on the CT image.
  • FIG. 7 is the position information of the patient's heart relative to the body surface in this phase.
  • FIG. 8 is the position information of the patient's left lung relative to the body surface in this phase.
  • The terms "system" and "unit" are used to distinguish different components, elements, parts, or assemblies at different levels; other words may be substituted if they serve the same purpose.
  • "System" and "unit" may be implemented by software or hardware, and may be a physical or virtual entity having the corresponding function.
  • A system for synthesizing real-time images using optical body surface motion signals includes: an acquisition unit for acquiring images of a patient before treatment and real-time optical body surface data during treatment; and a synthesis unit for synthesizing, according to a certain mapping relationship, the acquired pre-treatment images and the real-time optical body surface data into a real-time image synchronized with the changes of the optical body surface motion signal.
  • Pre-treatment images of patients include, but are not limited to: four-dimensional images obtained directly by four-dimensional cone beam CT (4D-CBCT) or other means; pre-treatment four-dimensional images reconstructed from raw kilovolt (kV) two-dimensional projections of three-dimensional cone beam CT (3D-CBCT) or from two- and three-dimensional images obtained by other means; and phase-resolved four-dimensional image data such as four-dimensional computed tomography (4D-CT) or four-dimensional magnetic resonance imaging (4D-MRI).
  • the optical body surface data is patient body surface data collected by an optical body surface imaging system or the like.
  • One embodiment is to divide the body surface into grids (which may be triangular grids, quadrilateral grids, etc., as long as the body surface can be divided) and obtain the coordinate data of each point on the grid.
  • An optical body surface imaging system first projects LED light onto the patient's body surface, then captures the light reflected from the body surface with a camera, and generates dynamic 3D body surface information from the real-time reflected light.
  • The 3D body surface reconstruction is based on the principle of triangulation and can be used to calculate the patient's real-time six-degree-of-freedom body surface movement: translational movement (anterior-posterior, left-right, and vertical) and rotational movement (around the x-axis, y-axis, and z-axis).
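The six-degree-of-freedom calculation described above can be sketched with the standard SVD-based rigid alignment; this is a minimal illustration assuming matched point correspondences between two body-surface meshes (the function name and the correspondence assumption are mine, not the patent's):

```python
import numpy as np

def rigid_six_dof(ref_pts, cur_pts):
    """Estimate the rigid 6-DOF motion (rotation R, translation t) that
    maps a reference body-surface point cloud onto the current one,
    using the SVD (Kabsch) solution. Both inputs are (N, 3) arrays with
    matched point correspondences.
    """
    ref_mean = ref_pts.mean(axis=0)
    cur_mean = cur_pts.mean(axis=0)
    ref_c = ref_pts - ref_mean
    cur_c = cur_pts - cur_mean
    H = ref_c.T @ cur_c                        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T    # proper rotation
    t = cur_mean - R @ ref_mean                # translation (3 DOF)
    return R, t                                # R encodes the 3 rotational DOF
```

The three translation components and the three rotation angles extracted from R together give the six-dimensional body surface movement mentioned in the text.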
  • The present invention correlates the 4D images reconstructed from the patient's pre-treatment 4D-CBCT, 4D-CT, 4D-MR, or 3D images with the characteristics of the optical body surface 4D respiratory motion signal, establishes a dynamic mapping relationship, and then synthesizes real-time 4D cone beam CT from the pre-treatment cone beam CT and the synchronous optical body surface motion signals during treatment, achieving real-time, non-invasive, dynamic visual monitoring of the internal structures of moving target areas in vivo.
  • Based on a universal accelerator platform, it provides a real-time monitoring solution for moving target areas with lower economic cost, a wider range of applicable patients, and better compatibility with existing sites and traditional quality control equipment, improving the accuracy and timeliness of radiotherapy for moving target areas and reducing the risks of tumor recurrence and metastasis and radiation damage to normal organs due to off-target dose.
  • Embodiment 2:
  • the acquired image of the patient before treatment is a directly acquired four-dimensional image before treatment, or a reconstructed four-dimensional image obtained from a three-dimensional image of the patient before treatment.
  • Directly acquired pre-treatment four-dimensional images, including but not limited to four-dimensional cone beam CT (4D-CBCT), 4D-CT, and 4D-MR, can be used directly to construct the pre-treatment multimodal 4D image training set data.
  • Three-dimensional cone-beam CT (3D-CBCT) which is more common in clinical practice, also contains a large amount of useful information such as patient anatomy changes.
  • The raw kilovolt (kV) two-dimensional projections of three-dimensional cone beam CT (3D-CBCT), together with planning four-dimensional image data such as four-dimensional computed tomography (4D-CT) or four-dimensional magnetic resonance imaging (4D-MRI), can be processed as model training data to reconstruct the pre-treatment 4D images.
  • One of the exemplary embodiments of the present invention correlates the instantaneous kV two-dimensional projection raw data of 3D-CBCT with the motion signal of 4D-CT.
  • The patient's 4D-CT prior anatomical structure is used to compensate for the insufficient number of instantaneous kV two-dimensional projections, while retaining the anatomical information of the treatment day (or the day closest to it) reflected by the kV two-dimensional projections.
  • The specific method is: first, take a certain phase of the 4D-CT as the reference image I_0; the new image I is then expressed as I_0 warped by a deformation field, where: (i, j, k) is the voxel position; D is the deformation of the new image I relative to I_0; D_x, D_y, and D_z are the components of the deformation D in the x, y, and z directions, respectively; W_1, W_2, and W_3 are the weights of the principal deformation components; and DRR denotes the digitally reconstructed radiograph (projection) of I.
  • The weight of each deformation principal component can be obtained by solving the above formulation through gradient descent, yielding an estimate of the 4D image.
  • A B-spline basis can be further used to fine-tune the deformation so that I and the simultaneous-phase kV two-dimensional projection have better consistency.
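The formula referenced in this passage appears to have been lost in extraction. Based on the surrounding definitions (voxel position (i, j, k), deformation components D_x, D_y, D_z, principal-component weights W_1, W_2, W_3, and the DRR of I), a plausible reconstruction, consistent with principal-component-based 4D-CBCT motion models, is:

```latex
I(i,j,k) = I_0\big(i + D_x(i,j,k),\; j + D_y(i,j,k),\; k + D_z(i,j,k)\big),
\qquad
D = W_1 D_1 + W_2 D_2 + W_3 D_3,
```

where D_1, D_2, D_3 are the leading principal components of the inter-phase deformation fields, and the weights are found by minimizing the mismatch between the measured kV projection P and the digitally reconstructed radiograph of I,

```latex
\min_{W_1, W_2, W_3}\; \big\| P - \mathrm{DRR}(I) \big\|^2,
```

solved by gradient descent as the text describes. This is a hedged reconstruction, not the patent's verbatim equation.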
  • 4D-MRI, which has good soft-tissue resolution, can also be correlated with the instantaneous kV two-dimensional projections of 3D-CBCT to obtain 4D images, thereby obtaining a true four-dimensional anatomy of the patient close to the treatment process.
  • The real-time optical body surface data are standardized.
  • The CT, MR, or CBCT images of patients usually store the voxel distribution in a three-dimensional Cartesian coordinate system according to the DICOM standard, while the optical body surface imaging system triangulates the body surface images collected by the sensor, producing triangulated data whose element distribution varies greatly with the size of the patient. Although the deep learning neural network does not strictly limit the size, orientation, or resolution of the input image, it still requires a unified basic format.
  • The network trained by the present invention requires body surface data as input; to facilitate subsequent model training, both the optical body surface data and the body surface contour data extracted from the CBCT image data are converted into standardized data for association.
  • The body surface data are standardized and expressed either as the two-dimensional height distribution of each body surface point relative to the coronal plane through the isocenter, as shown in FIG. 3, or as the three-dimensional mask map of the body surface contour, as shown in FIG. 4; both are acceptable as input data for the deep learning neural network.
  • DICOM data are converted into standardized data by writing programs with medical image processing tools of similar function, such as the Python-based Pydicom, SimpleITK, and Scikit-Image packages: the Contour Data tag in the DICOM file is extracted to obtain the boundary line of the body surface. A polygon algorithm then obtains the points within the boundary line, and the number of in-body points is counted from the isocenter height along the direction perpendicular to the coronal plane to obtain the two-dimensional height distribution of the body surface; alternatively, a three-dimensional body surface mask map is generated directly.
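The contour-to-height-distribution step for a single axial slice can be sketched as follows, assuming contour vertices already mapped to pixel indices; `slice_mask_and_height` and its arguments are illustrative names, and `skimage.draw.polygon` stands in for the unspecified "polygon algorithm":

```python
import numpy as np
from skimage.draw import polygon

def slice_mask_and_height(contour_rc, shape, iso_row):
    """From one axial body-surface contour ((row, col) vertex pairs, as
    might be read from a DICOM Contour Data element and mapped to pixel
    indices), rasterize the interior, then count, per column, the
    in-body pixels above the isocenter row -- a simple stand-in for the
    per-point height relative to the coronal plane described in the text.
    """
    rows, cols = contour_rc[:, 0], contour_rc[:, 1]
    rr, cc = polygon(rows, cols, shape)   # pixels inside the polygon
    mask = np.zeros(shape, dtype=bool)
    mask[rr, cc] = True
    height = mask[:iso_row, :].sum(axis=0)  # counts perpendicular to the coronal plane
    return mask, height
```

Stacking the per-slice masks yields the three-dimensional body surface mask map; collecting the per-slice height rows yields the two-dimensional height distribution.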
  • The optical body surface signal is standardized by projecting the endpoints of the triangular elements onto the coronal plane through the isocenter, obtaining the body surface height at each projection location from the point coordinates, and then interpolating over the entire plane to obtain a two-dimensional height distribution map.
  • The body surface boundary can also be delineated on each slice to further generate a three-dimensional mask map of the body surface.
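The projection-and-interpolation step for the optical signal can be sketched as follows (illustrative names; linear interpolation via `scipy.interpolate.griddata` is one reasonable choice, not necessarily the patent's):

```python
import numpy as np
from scipy.interpolate import griddata

def height_map_from_vertices(verts, grid_xy):
    """Project triangle-mesh vertices (N, 3) = (x, y, z) onto the
    coronal plane (x, y) and interpolate the vertex height z over a
    regular grid, yielding the 2D height distribution used as network
    input. grid_xy is a tuple (X, Y) of meshgrid arrays covering the
    plane through the isocenter.
    """
    X, Y = grid_xy
    # scattered-data interpolation from mesh vertices to the full grid
    return griddata(verts[:, :2], verts[:, 2], (X, Y), method="linear")
```

Points outside the mesh footprint come back as NaN with linear interpolation, so in practice one would mask or in-fill them before feeding the map to the network.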
  • It also includes a deep learning network model unit embodying the mapping relationship, which takes as input the image of a certain phase i before the patient's treatment and the optical body surface data of phase j (j ≠ i) during treatment; the model outputs images of phase j, and the images of successive phases j are combined to obtain a real-time dynamic four-dimensional image synchronized with the change of the optical body surface motion signal.
  • The 4D-CBCT images scanned before treatment and the optical body surface data collected in real time during treatment are normalized as inputs to the deep learning network model, which produces a synthetic real-time dynamic 4D-CBCT synchronized with the motion signal of the optical body surface.
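The synthesis flow described above — a prior phase-i volume plus a stream of standardized surface frames, combined phase by phase into a dynamic sequence — can be sketched as follows; the model here is an arbitrary placeholder callable for illustration, not the patent's trained network:

```python
import numpy as np

def synthesize_realtime_4d(model, pretreat_image_i, surface_stream):
    """Drive the synthesis unit: for each real-time surface frame j,
    ask the (already trained) model for the phase-j volume, then stack
    the successive per-phase volumes into a dynamic 4D sequence.
    `model` is any callable (volume_i, surface_j) -> volume_j.
    """
    frames = [model(pretreat_image_i, s) for s in surface_stream]
    return np.stack(frames, axis=0)   # shape (T, D, H, W)

def dummy_model(volume_i, surface_j):
    # placeholder stand-in: offset the prior volume by the mean surface
    # height; a real model would be the trained deep network
    return volume_i + surface_j.mean()
```

In deployment `surface_stream` would be the live, standardized frames from the optical body surface system, so the stacked output stays synchronized with the motion signal.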
  • Alternatively, the kV two-dimensional projections combined with the 4D image reconstructed from 4D-CT or 4D-MRI are used as surrogate data for 4D-CBCT and input into the deep learning network model.
  • the deep learning network model is set locally on the treatment side of the system that uses optical body surface motion signals to synthesize real-time images, or on the server side, or on the cloud.
  • The deep learning network model is either a pre-trained model or a network model that continuously self-learns from the continuously updated 4D-CBCT images and optical body surface data.
  • The invention utilizes deep learning technology to mine and analyze the pre-treatment in-vivo motion images (4D-CBCT, 4D-CT, 4D-MR, etc.) of a large number of patients, correlate their features with the optical body surface 4D respiratory motion signals during treatment, and establish a dynamic mapping relationship; real-time 4D cone-beam CT images are then synthesized using the pre-treatment cone-beam CT and the synchronized optical body surface motion signals during treatment.
  • A method for synthesizing a real-time image using an optical body surface motion signal corresponds one-to-one with the system for synthesizing a real-time image using an optical body surface motion signal; for its embodiments, refer to Embodiments 1 to 2.
  • Embodiment 4:
  • A deep learning network model training system for synthesizing real-time images using optical body surface motion signals comprises a deep learning network model unit for training a deep learning network model to obtain the mapping relationship between the patient's body surface and in-vivo structural data. The deep learning network model is trained on a four-dimensional image dataset; its input is the four-dimensional image of a certain phase i and the optical body surface data or body surface contour data of another phase j, and its output is the image of phase j.
  • As shown in FIGS. 5-8, a planning 4D-CT image of a patient at a certain phase and the corresponding real-time body surface information obtained by the optical body surface system during treatment are illustrated: FIG. 5 is the body surface contour reconstructed by the optical body surface system; FIG. 6 is the body surface contour delineated on the CT image; FIGS. 7 and 8 respectively show the position information of the patient's heart and left lung relative to the body surface in this phase.
  • The present invention utilizes the prior anatomical structure of the patient's 4D-CT (and 4D-MRI, if available) and the treatment-day anatomical information reflected by the kV two-dimensional projections collected during the CBCT scan to construct a pre-treatment multimodal fused 4D image training set.
  • 4D-CT, 4D-MR and 4D-CBCT can be used directly to construct the pre-treatment multimodal 4D image training set.
  • 3D-CBCT, which is more common in the clinic, also contains a wealth of useful information such as changes in patient anatomy.
  • This embodiment correlates the raw instantaneous kV two-dimensional projection data of the CBCT with the motion signals of the 4D-CT.
  • The patient's 4D-CT prior anatomy compensates for the limited number of instantaneous kV two-dimensional projections while retaining the treatment-day anatomical information reflected by those projections.
  • A specific method for reconstructing a 3D image into a 4D image may adopt the method of Embodiment 2.
  • The optical body surface data are obtained by dividing the body surface into a mesh and taking the coordinates of each mesh point.
  • The outer body surface contour data are the outer contour of the skin on a three- or four-dimensional image.
  • Patients' CT, MR or CBCT images usually store voxel distributions on a three-dimensional Cartesian grid according to the DICOM standard, whereas the optical body surface system triangulates the surface images captured by its sensors, producing triangulated data whose element distribution varies considerably with patient body habitus.
  • Although the deep learning neural network does not strictly constrain the size, orientation or resolution of the input image, it still requires a unified basic format.
  • The network trained by the present invention takes body surface data as input; to facilitate subsequent model training, the optical body surface signal and the outer body surface contour data extracted from CBCT images are converted into standardized data for correlation.
  • The body surface data are standardized either as the two-dimensional height distribution of each surface point relative to the coronal plane through the isocenter, as shown in Fig. 3, or as the three-dimensional mask map of the body surface contour, as shown in Fig. 4; both are acceptable as input data for the deep learning neural network.
  • DICOM data are converted into standardized data by writing programs with medical image processing tools of similar functionality, such as the Python-based Pydicom, SimpleITK and Scikit-Image packages: the CONTOUR DATA tag is extracted from the DICOM file to obtain the body surface boundary; a polygon algorithm yields the points inside the boundary; counting the in-surface points from the isocenter height along the direction perpendicular to the coronal plane gives the two-dimensional height distribution of the body surface, or a three-dimensional body surface mask map can be generated directly.
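The counting step above — walking perpendicular to the coronal plane and tallying in-surface voxels to get a height per (slice, lateral) position — can be sketched as follows. This is a minimal illustration with an assumed axis layout (`mask[z][y][x]`, y being the anterior-posterior direction), not the patent's implementation.

```python
# Hypothetical sketch: derive a 2D body-surface height map from a 3D body mask.
# Assumed axis convention: mask[z][y][x], where y is the anterior-posterior
# direction and y == iso_y is the coronal plane through the isocenter.

def height_map_from_mask(mask, iso_y):
    """For each (z, x) column, count in-surface voxels from the isocenter
    plane outward in the anterior direction -> surface height in voxels."""
    nz, nx = len(mask), len(mask[0][0])
    hmap = [[0] * nx for _ in range(nz)]
    for z in range(nz):
        for x in range(nx):
            h = 0
            for y in range(iso_y, len(mask[0])):  # walk anteriorly from the isocenter plane
                if mask[z][y][x]:
                    h += 1
            hmap[z][x] = h
    return hmap

# Toy 3D mask: 2 slices of 4 (y) x 3 (x); the body occupies y=0..2 in slice 0
# and y=0..1 in slice 1.
mask = [
    [[1, 1, 1], [1, 1, 1], [1, 1, 1], [0, 0, 0]],
    [[1, 1, 1], [1, 1, 1], [0, 0, 0], [0, 0, 0]],
]
print(height_map_from_mask(mask, iso_y=0))  # [[3, 3, 3], [2, 2, 2]]
```

A production pipeline would of course read the mask from the DICOM structure set and convert voxel counts to millimeters via the slice spacing.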
  • The optical body surface signal is standardized by projecting the triangle vertices onto the coronal plane through the isocenter, obtaining the surface height at each projected location from the point coordinates, and interpolating over the whole plane to obtain a two-dimensional height map.
  • From the surface heights, the body surface boundary can be delineated on each slice to further generate a three-dimensional mask map of the body surface.
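The projection-and-interpolation step can be illustrated at a toy scale. The sketch below drops the height axis of each triangle vertex to project it onto the coronal plane and fills a regular grid by nearest-neighbor lookup; the grid size, coordinate convention, and nearest-neighbor choice are all illustrative assumptions (a real system would use a finer grid and, e.g., barycentric or spline interpolation).

```python
# Hypothetical sketch of the optical-surface standardization step: project the
# vertices of the triangulated surface onto the coronal plane through the
# isocenter, then fill a regular 2D grid by nearest-neighbor interpolation.

def surface_to_height_map(vertices, grid_z, grid_x):
    """vertices: list of (x, height, z) points, height measured above the
    coronal plane. Returns a grid_z x grid_x height map via nearest-neighbor
    lookup in the projected (z, x) plane."""
    hmap = []
    for z in range(grid_z):
        row = []
        for x in range(grid_x):
            nearest = min(vertices, key=lambda v: (v[2] - z) ** 2 + (v[0] - x) ** 2)
            row.append(nearest[1])  # take the height of the closest projected vertex
        hmap.append(row)
    return hmap

verts = [(0, 5.0, 0), (1, 6.0, 0), (0, 7.0, 1), (1, 8.0, 1)]  # (x, height, z)
print(surface_to_height_map(verts, grid_z=2, grid_x=2))
# [[5.0, 6.0], [7.0, 8.0]]
```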
  • Directly reconstructing the 4D optical surface of the skin yields richer real-time dynamic spatial data.
  • Systems such as Catalyst (C-RAD, Sweden), used in this embodiment, achieve sub-millimeter surface setup accuracy and long-term stability and support non-rigid registration in six degrees of freedom.
  • Combining large volumes of patient 4D body surface motion data with other multimodal 4D in vivo dynamic images through deep learning can provide richer and more accurate correlation features for a synchronized surface-to-internal dynamic mapping model.
  • Construction and application of the deep learning network model: the deep learning model for the 4D fused image dataset is modified by adding an input layer — the standardized body surface feature data — and retrained on the 4D fused image dataset to build an image generation model that dynamically maps the optical body surface motion signal to the in vivo anatomy contained in the fused images.
  • The input of the deep learning network model is the fused image of one phase (e.g., phase i) together with the 3D body surface mask map or 2D body surface height map of another phase j (j ≠ i); the output is the 3D-CBCT image of phase j (primarily to ensure the accurate position of the target volume). The loss function is the sum of squared deviations with an L1 regularization term, and the model parameters are iteratively optimized along the gradient of the loss function.
  • The output of the model is a synthesized real-time dynamic 4D-CBCT whose phases follow the changes of the optical body surface motion signal.
  • The deep learning network models used in this embodiment are three progressively more capable model families — a U-shaped convolutional neural network, a generative adversarial network, and a deep recurrent neural network — making full use of the strengths and characteristics of each.
  • The U-shaped convolutional neural network is a comparatively simple image generation network. It adopts a U-shaped architecture with skip connections between the corresponding downsampling and upsampling layers at the two ends, allowing the model to fully exploit the multi-scale features of the image.
  • Its output is an image of the same size as the input image.
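The same-size-output property of the U-shaped design can be illustrated at the shape level: features are downsampled, brought back up, and concatenated with the same-scale encoder features via the skip connection. The pooling/upsampling choices below are illustrative, not the trained network.

```python
# Shape-level sketch of the U-shaped idea: downsample, upsample back, then fuse
# encoder and decoder features of the same scale via a skip connection.

def downsample(img):  # 2x2 average pooling
    return [[(img[2 * i][2 * j] + img[2 * i][2 * j + 1] +
              img[2 * i + 1][2 * j] + img[2 * i + 1][2 * j + 1]) / 4.0
             for j in range(len(img[0]) // 2)]
            for i in range(len(img) // 2)]

def upsample(img):  # nearest-neighbor 2x upsampling
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

img = [[1.0, 2.0, 3.0, 4.0],
       [5.0, 6.0, 7.0, 8.0],
       [9.0, 10.0, 11.0, 12.0],
       [13.0, 14.0, 15.0, 16.0]]

decoded = upsample(downsample(img))
# skip connection: pair encoder features with decoder features pixel by pixel
fused = [[(img[i][j], decoded[i][j]) for j in range(4)] for i in range(4)]
print(len(fused), len(fused[0]))  # 4 4 -> same spatial size as the input
```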
  • The generative adversarial network (GAN) uses two convolutional neural networks as image generator and discriminator, respectively, and improves the generator's predictions through their adversarial interplay.
  • The trained U-shaped convolutional neural network serves as the generator of the GAN model, and a convolutional neural network is added as the discriminator to improve the model's predictions.
  • A deep recurrent neural network can further be used to mine the correlations between different phases and address the time-series nature of the signal.
  • The output at one moment serves as one of the inputs at the next moment (or several moments later) and influences its output, thereby extracting the auto-correlation of the time-series signal.
  • The body surface data can be fed in as a periodic signal, and the CBCT image output at each moment serves as one of the inputs at the next moment.
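The recurrent coupling described in these bullets — each output fed back as part of the next input alongside the periodic surface signal — reduces to a simple loop. The scalar blend rule below is a purely illustrative stand-in for the full network.

```python
# Toy sketch of the recurrence: the "image" produced at time t is fed back as
# part of the input at t+1, together with a periodic body-surface signal.
import math

def step(prev_image, surface_signal):
    # stand-in for the network: blend the previous output with the new signal
    return 0.5 * prev_image + 0.5 * surface_signal

image = 0.0
outputs = []
for t in range(8):
    surface = math.sin(2 * math.pi * t / 8)  # periodic breathing-like signal
    image = step(image, surface)             # output at t becomes input at t+1
    outputs.append(round(image, 3))
print(outputs)
```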
  • Embodiment 5:
  • A deep learning network model training method for synthesizing real-time images from optical body surface motion signals corresponds one-to-one with the deep learning network model training system for synthesizing real-time images from optical body surface motion signals. For details, see Embodiment 6.
  • The technical solution of the present invention yields a synthesized real-time dynamic 4D-CBCT phased to the optical body surface motion signal.
  • The result is validated by the following methods; it is accurate, real-time and reliable, and can be applied in conventional radiotherapy.
  • The applicant, Peking University Cancer Hospital, is a large modern Grade-A tertiary cancer hospital integrating clinical care, research, teaching and prevention; its outpatient visits reached 730,000 in 2019.
  • The technical solution of the present invention is preliminarily supported by pilot data and is feasible overall.
  • The present invention trains a deep learning network model by fusing multimodal medical big data, establishes an individualized dynamic mapping between body surface signal features and the position of the moving target volume in the body, and then uses the pre-treatment cone-beam CT and the intra-treatment optical body surface motion signal to synthesize a real-time dynamic 4D-CBCT synchronized with the optical signal, used to guide moving-target monitoring in precision radiotherapy and to provide evidence-based medical grounds and scientific methods for clinical decision-making.
  • The 4D-CBCT synthesized by the artificial intelligence technology of the present invention is closer to the patient's true anatomy and breathing motion during treatment.
  • The synthesized real-time 4D-CBCT can serve as a complementary technology to complex integrated devices such as MR-accelerators, achieving accurate, real-time, non-invasive, visualized dynamic tracking of moving targets during radiotherapy on existing conventional accelerator platforms with no additional imaging dose; it is low-cost, compatible and widely applicable, which is favorable for clinical adoption and industrial application.


Abstract

A method and system for synthesizing real-time images from optical body surface motion signals, comprising: an acquisition unit for acquiring pre-treatment images of a patient and real-time optical body surface data during treatment; and a synthesis unit for synthesizing the acquired pre-treatment images and real-time optical body surface data into real-time images synchronized with changes in the optical body surface motion signal according to an established mapping relationship. The scheme has the following beneficial effects: the synthesized real-time images enable accurate, real-time, non-invasive, visualized dynamic tracking of moving target volumes during radiotherapy on existing conventional accelerator platforms, with no additional imaging dose, at low cost and with good compatibility and general applicability.

Description

A method and system for synthesizing real-time images from optical body surface motion signals

Technical Field

The present invention relates to the technical field of radiotherapy equipment, and in particular to a method and system for synthesizing real-time images from optical body surface motion signals.

Background Art

The rapidly growing incidence of malignant tumors and the growing patient population place a heavy burden on China's socioeconomic development and healthcare system. As one of the principal modern anti-cancer modalities, radiotherapy delivers a lethal radiation dose precisely to the tumor target volume and uses advanced techniques such as intensity modulation to create a steep dose fall-off at its outer margin, killing the tumor while protecting adjacent normal tissue. Off-target radiotherapy dose is one of the main causes of tumor control failure and injury to critical organs. Although image-guided radiotherapy (IGRT) has greatly improved treatment accuracy by reducing patient setup errors, real-time monitoring of moving target volumes during radiotherapy remains a technical bottleneck for IGRT and a common challenge across the physics, engineering, medicine and information science disciplines.

Respiratory motion is the main risk of off-target dose in thoracic and abdominal radiotherapy. Its amplitude and frequency are jointly determined by many factors — the patient's sex and age, mental state, related diseases, immobilization devices, respiratory motion management measures — and its pattern shows complex individual differences and uncertainty. Its effects on tumor deformation and motion trajectory are the main threat to accurate dose delivery in thoracic and abdominal radiotherapy; in severe cases, under-irradiation of the target volume may lead to recurrence and metastasis, or normal organs straying into the high-dose region of the beam may suffer severe, even life-threatening injury. Because techniques such as breath-hold treatment and active breathing control (ABC) demand high patient compliance and tolerance, free breathing remains the preferred option for most radiotherapy patients. The current routine clinical approach to avoid missing a moving target is to assess the target's motion amplitude during treatment preparation and expand the clinical target volume (CTV) by a margin to form an internal target volume (ITV). However, the larger the motion amplitude, the more normal tissue is included in the exponentially growing ITV volume. Irradiating adjacent normal tissue as if it were the target causes radiation injury that harms the patient's quality of life and limits further escalation of the target prescription dose to improve outcomes.

Respiratory motion changes the position and shape of thoracic and abdominal tumors and can cause off-target dose leading to recurrence or normal organ injury. Planning four-dimensional CT and MRI, and the conventional four-dimensional cone-beam CT used for pre-treatment setup guidance, cannot reflect the real-time dynamics of the target during treatment; external indirect signals such as respiratory gating and optical surface imaging during treatment cannot directly show internal structures; real-time fluoroscopic imaging increases radiation risk; and MR-accelerators, not yet widely available, are limited by cost and compatibility. Therefore, developing a universal real-time imaging technique based on standard platforms has major scientific significance and clinical value.

To balance the conflict between off-target risk and normal tissue protection, reducing the ITV safety margin through image guidance, respiratory gating, target tracking and similar techniques is an urgent clinical need. Real-time, non-invasive, visualized monitoring of moving target volumes in the body has become a pressing technical problem.
Summary of the Invention

The main purpose of the present invention is to overcome the above-mentioned defects of the prior art and provide a method and system for synthesizing real-time images from optical body surface motion signals, so as to solve the problem, unsolved by the prior art, that respiratory motion changes the position and shape of thoracic and abdominal tumors, causing off-target radiotherapy dose that leads to recurrence or normal organ injury.

One embodiment of the present invention provides a system for synthesizing real-time images from optical body surface motion signals, comprising: an acquisition unit for acquiring pre-treatment images of a patient and real-time optical body surface data during treatment; and a synthesis unit for synthesizing the acquired pre-treatment images and real-time optical body surface data into real-time images synchronized with changes in the optical body surface motion signal according to an established mapping relationship.

In some embodiments, the acquired pre-treatment image is a directly acquired four-dimensional pre-treatment image, or a four-dimensional image reconstructed from three-dimensional pre-treatment images of the patient.

In some embodiments, the real-time optical body surface data are standardized before the acquired pre-treatment images and real-time optical body surface data are synthesized into real-time images synchronized with changes in the optical body surface motion signal according to the mapping relationship.

In some embodiments, the system further comprises a deep learning network model unit embodying the mapping relationship, which takes the pre-treatment image of a phase i and the optical body surface data of a phase j during treatment as input to the deep learning network model and outputs the image of phase j; combining the images of consecutive phases j yields a real-time dynamic four-dimensional image synchronized with changes in the optical body surface motion signal.
One embodiment of the present invention provides a method for synthesizing real-time images from optical body surface motion signals: acquiring pre-treatment images of a patient and real-time optical body surface data during treatment; and synthesizing the acquired pre-treatment images and real-time optical body surface data into real-time images synchronized with changes in the optical body surface motion signal according to an established mapping relationship.

In some embodiments, the acquired pre-treatment image is a directly acquired four-dimensional pre-treatment image, or a four-dimensional image reconstructed from three-dimensional pre-treatment images of the patient.

In some embodiments, the real-time optical body surface data are standardized before the acquired pre-treatment images and real-time optical body surface data are synthesized into real-time images synchronized with changes in the optical body surface motion signal according to the mapping relationship.

In some embodiments, the mapping relationship is obtained through a deep learning network model: the pre-treatment image of a phase i and the optical body surface data of a phase j during treatment are input into the deep learning network model, which outputs the image of phase j; combining the images of consecutive phases j yields a real-time dynamic four-dimensional image synchronized with changes in the optical body surface motion signal.
One embodiment of the present invention provides a deep learning network model training system for synthesizing real-time images from optical body surface motion signals, characterized by comprising a deep learning network model unit for training a deep learning network model to obtain the mapping relationship between the patient's body surface and in vivo structural data; the deep learning network model is trained on a four-dimensional image dataset, with the four-dimensional image of a phase i and the optical body surface data or outer body surface contour data of another phase j as input, and the image of phase j as output.

One embodiment of the present invention provides a deep learning network model training method for synthesizing real-time images from optical body surface motion signals, characterized by training a deep learning network model to obtain the mapping relationship between the patient's body surface and in vivo structural data; the deep learning network model is trained on a four-dimensional image dataset, with the four-dimensional image of a phase i and the optical body surface data or outer body surface contour data of another phase j as input, and the image of phase j as output.

The beneficial effects of the scheme of the present invention are as follows. The present invention uses deep learning to mine and analyze large numbers of patients' pre-treatment four-dimensional images — in vivo motion images such as 4D-CBCT, 4D-CT and 4D-MR — together with intra-treatment optical body surface respiratory motion signals, correlates their features and establishes a dynamic mapping relationship, and then synthesizes real-time four-dimensional cone-beam CT from pre-treatment cone-beam CT and synchronized intra-treatment optical body surface motion signals. With moving-target tracking as the main objective, a dynamic mapping model between 3D/4D image anatomy and optical body surface motion signal features is built on deep learning and multimodal medical big data, achieving real-time, non-invasive, dynamic visual monitoring of internal structures and moving target volumes in the body. It provides a real-time moving-target monitoring solution with lower cost, wider patient applicability and better compatibility with existing facilities and conventional quality-control equipment, improving the accuracy and timeliness of radiotherapy for moving targets and reducing the risk of tumor recurrence and metastasis and of normal organ radiation injury caused by off-target dose.
Brief Description of the Drawings

The present invention is further illustrated by way of exemplary embodiments, which are described in detail with reference to the accompanying drawings. These embodiments are not limiting; in these embodiments, the same reference numeral denotes the same structure, wherein:

Fig. 1 is a schematic diagram of a system for synthesizing real-time images from optical body surface motion signals according to some embodiments of the present invention;

Fig. 2 is a schematic diagram of a method for synthesizing real-time images from optical body surface motion signals according to some embodiments of the present invention;

Fig. 3 is a two-dimensional height map of an optical body surface signal according to some embodiments of the present invention;

Fig. 4 is a three-dimensional body surface mask map of an optical body surface signal according to some embodiments of the present invention;

Fig. 5 shows the body surface contour reconstructed by the optical body surface system;

Fig. 6 shows the outer body surface contour delineated on CT images;

Fig. 7 shows the position of the patient's heart relative to the body surface at this phase;

Fig. 8 shows the position of the patient's left lung relative to the body surface at this phase.
Detailed Description

To describe the technical solutions of the embodiments of the present invention more clearly, the drawings required in describing the embodiments are briefly introduced below. Obviously, the drawings described below are merely some examples or embodiments of the present invention; the technical features of the various embodiments may be combined with one another to form practical solutions that achieve the purpose of the invention, and those of ordinary skill in the art may, without creative effort, apply the present invention to other similar scenarios according to these drawings. Unless apparent from the context or otherwise stated, the same reference numeral in the figures denotes the same structure or operation.

It should be understood that "system" and "unit" as used herein are one way of distinguishing different components, elements, parts, portions or assemblies at different levels. However, these words may be replaced by other expressions if they achieve the same purpose. Moreover, a "system" or "unit" may be implemented in software or hardware and may be a physical or virtual designation of the part having that function.

Flowcharts are used in the present invention to illustrate the operations performed by systems according to embodiments of the invention. It should be understood that the preceding or following operations are not necessarily performed exactly in order; instead, the steps may be processed in reverse order or simultaneously. Other operations may also be added to these processes, or one or more steps may be removed from them. The technical solutions of the various embodiments may be combined to achieve the purpose of the invention.
Embodiment 1:

A system for synthesizing real-time images from optical body surface motion signals, as shown in Fig. 1, comprises: an acquisition unit for acquiring pre-treatment images of the patient and real-time optical body surface data during treatment; and a synthesis unit for synthesizing the acquired pre-treatment images and real-time optical body surface data into real-time images synchronized with changes in the optical body surface motion signal according to an established mapping relationship.

For the pre-treatment images, the closer to the treatment day the image is acquired, the better; images acquired on the treatment day itself are optimal.

Acquiring the pre-treatment images includes, without limitation, acquiring pre-treatment four-dimensional cone-beam CT (4D-CBCT) or four-dimensional images obtained by other means, or pre-treatment four-dimensional images reconstructed from the raw kilovoltage (kV) two-dimensional projections of three-dimensional cone-beam CT (3D-CBCT) — or two- or three-dimensional images obtained by other means — together with the per-phase data of four-dimensional images such as planning four-dimensional computed tomography (4D-CT) or four-dimensional magnetic resonance imaging (4D-MRI).

The optical body surface data are patient body surface data collected by, for example, an optical surface imaging system. In one embodiment, the body surface is divided into a mesh and the coordinates of each mesh point are obtained; the mesh may be triangular, quadrilateral, etc., as long as the body surface can be partitioned and coordinate data obtained.

In one embodiment, an optical surface imaging system first projects LED light onto the patient's body surface and then captures the reflected light with cameras, generating dynamic 3D body surface information from the real-time reflections. The 3D surface reconstruction is based on the principle of triangulation and can be used to compute the patient's real-time six-degree-of-freedom body surface motion — translations (anterior-posterior, left-right, superior-inferior) and rotations (about the x, y and z axes).

An illustrative embodiment is as follows:

(1) Acquire the patient's pre-treatment four-dimensional cone-beam CT (4D-CBCT), or process the raw kilovoltage (kV) two-dimensional projections of three-dimensional cone-beam CT (3D-CBCT) together with the per-phase data of planning four-dimensional computed tomography (4D-CT) or four-dimensional magnetic resonance imaging (4D-MRI), and reconstruct a pre-treatment 4D image.

(2) During treatment, acquire the optical body surface data at time t in real time and standardize it into a 3D body surface mask map, a 2D body surface height map, or another standard data form.

(3) From the target patient's pre-treatment 4D-CBCT or reconstructed 4D fused image, together with the standardized real-time optical body surface data during treatment, obtain, according to the mapping relationship, a synthesized real-time dynamic 4D-CBCT image whose phases follow the changes of the optical body surface motion signal.

The present invention correlates features of in vivo motion images — the patient's pre-treatment 4D-CBCT, 4D-CT, 4D-MR, or 4D images reconstructed from 3D images — with the 4D optical body surface respiratory motion signal, establishes a dynamic mapping relationship, and then synthesizes real-time four-dimensional cone-beam CT from pre-treatment cone-beam CT and synchronized intra-treatment optical body surface motion signals, achieving real-time, non-invasive, dynamic visual monitoring of internal structures and moving target volumes. Based on a general-purpose accelerator platform, it provides a real-time moving-target monitoring solution with lower cost, wider patient applicability and better compatibility with existing facilities and conventional quality-control equipment, improving the accuracy and timeliness of radiotherapy for moving targets and reducing the risk of tumor recurrence and metastasis and of normal organ radiation injury caused by off-target dose.
Embodiment 2:

The acquired pre-treatment image is a directly acquired four-dimensional pre-treatment image, or a four-dimensional image reconstructed from three-dimensional pre-treatment images of the patient.

The directly acquired pre-treatment four-dimensional images, including without limitation four-dimensional cone-beam CT (4D-CBCT), 4D-CT and 4D-MR, can be used directly to construct the pre-treatment multimodal 4D image training set.

Three-dimensional cone-beam CT (3D-CBCT), more common in the clinic, also contains a wealth of useful information such as changes in patient anatomy. The raw kilovoltage (kV) two-dimensional projections of 3D-CBCT can be processed, for model training, together with the per-phase data of four-dimensional images such as planning four-dimensional computed tomography (4D-CT) or four-dimensional magnetic resonance imaging (4D-MRI) to reconstruct pre-treatment four-dimensional images.
One illustrative embodiment of the present invention correlates the raw instantaneous kV two-dimensional projection data of 3D-CBCT with the motion signals of 4D-CT, using the patient's 4D-CT prior anatomy to compensate for the limited number of instantaneous kV projections while retaining the anatomical information of the treatment day (or the day closest to it) reflected by those projections. Specifically, taking one phase of the 4D-CT as the reference I_0, a new image can be expressed as

I(i,j,k) = F(I_0, D) = I_0(i + D_x(i,j,k), j + D_y(i,j,k), k + D_z(i,j,k))

where (i,j,k) is the voxel position, D is the deformation of the new image I relative to I_0, and D_x, D_y and D_z are the components of D in the x, y and z directions. The deformation between each 4D-CT phase and I_0 is computed, the mean deformation D̄ is obtained, and principal component analysis of these deformations yields their first three principal components D_1, D_2 and D_3; the deformation D of I relative to I_0 can then be approximated as

D = D̄ + w_1·D_1 + w_2·D_2 + w_3·D_3

where w_1, w_2 and w_3 are the weights of the principal components. Given the kV two-dimensional projection P of a phase, the deformation of the 4D image of that phase relative to I_0 should satisfy

DRR(F(I_0, D)) = P

where DRR denotes the digitally reconstructed radiograph of I. Solving this equation by gradient descent yields the principal-component weights and hence an estimate of the 4D image. The deformation can subsequently be fine-tuned with a B-spline basis so that I agrees better with the same-phase kV two-dimensional projection.

4D-MRI, which offers good soft-tissue contrast, can also be correlated with the instantaneous kV two-dimensional projections of 3D-CBCT to obtain 4D images, thereby approximating the patient's true four-dimensional anatomy during treatment.
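The mean-plus-principal-components deformation model above can be sketched numerically: stack the per-phase deformation fields (flattened), take their mean, and extract a principal component from the covariance. The toy deformation fields and the pure-Python power iteration below are illustrative assumptions standing in for a PCA library call, not the patent's implementation.

```python
# Hypothetical sketch of the deformation PCA: mean deformation plus principal
# components extracted by power iteration on the covariance matrix.

def mean_vec(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def power_iteration(cov, iters=200):
    v = [1.0] * len(cov)
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# toy deformation fields of 4 phases, each flattened to 3 numbers
D = [[1.0, 0.0, 0.0],
     [3.0, 0.0, 0.0],
     [1.0, 2.0, 0.0],
     [3.0, 2.0, 0.0]]
mu = mean_vec(D)                                    # mean deformation D-bar
centered = [[d[i] - mu[i] for i in range(3)] for d in D]
n = len(centered)
cov = [[sum(c[i] * c[j] for c in centered) / n for j in range(3)] for i in range(3)]
pc1 = power_iteration(cov)                          # first principal component
print([round(x, 3) for x in mu])    # [2.0, 1.0, 0.0]
print([round(abs(x), 3) for x in pc1])
```

A new-phase deformation is then approximated as `mu` plus a weighted sum of such components, with the weights fitted (e.g., by gradient descent) against the kV projection as described above.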
Before the acquired pre-treatment images and real-time optical body surface data are synthesized, according to the mapping relationship, into real-time images synchronized with changes in the optical body surface motion signal, the real-time optical body surface data are standardized.

Patients' CT, MR or CBCT images usually store voxel distributions on a three-dimensional Cartesian grid according to the DICOM standard, whereas the optical surface imaging system triangulates the body surface images captured by its sensors, producing triangulated data whose element distribution varies considerably with patient body habitus. Although a deep learning neural network does not strictly constrain the size, orientation or resolution of its input images, it still requires a unified basic format. The network trained by the present invention takes body surface data as input; to facilitate subsequent model training, both the optical body surface data and the outer body surface contour data extracted from CBCT images are converted into standardized data for correlation. The body surface data are standardized either as the two-dimensional height distribution of each surface point relative to the coronal plane through the isocenter, as shown in Fig. 3, or as the three-dimensional mask map of the body surface contour, as shown in Fig. 4; both are acceptable as input data for the deep learning neural network.

DICOM data are converted into standardized data by writing programs with medical image processing tools of similar functionality, such as the Python-based Pydicom, SimpleITK and Scikit-Image packages: the CONTOUR DATA tag is extracted from the DICOM file to obtain the body surface boundary; a polygon algorithm yields the points inside the boundary; counting the in-surface points from the isocenter height along the direction perpendicular to the coronal plane gives the two-dimensional height distribution of the body surface, or a three-dimensional body surface mask map can be generated directly.

The optical body surface signal is standardized by projecting the triangle vertices onto the coronal plane through the isocenter, obtaining the surface height at each projected location from the point coordinates, and interpolating over the whole plane to obtain a two-dimensional height map; from the surface heights, the body surface boundary can be delineated on each slice to further generate a three-dimensional body surface mask map.
The system further comprises a deep learning network model unit embodying the mapping relationship, which takes the pre-treatment image of a phase i and the optical body surface data of a phase j (j ≠ i) during treatment as input to the deep learning network model and outputs the image of phase j; combining the images of consecutive phases j yields a real-time dynamic four-dimensional image synchronized with changes in the optical body surface motion signal.

The patient's pre-treatment 4D-CBCT images and the standardized optical body surface data acquired in real time during treatment are fed into the deep learning network model to obtain a synthesized real-time dynamic 4D-CBCT synchronized with the optical body surface motion signal. For patients with only conventional 3D-CBCT, the 4D image reconstructed from its kV two-dimensional projections combined with 4D-CT or 4D-MRI is input into the model as a substitute for 4D-CBCT, likewise yielding a synthesized real-time dynamic 4D-CBCT synchronized with the optical body surface motion signal.

The deep learning network model may reside locally at the treatment end of the system for synthesizing real-time images from optical body surface motion signals, on a server, or in the cloud. It may be a fully trained model, or a model that continuously self-learns from continually updated 4D-CBCT images and optical body surface data.

The present invention uses deep learning to mine and analyze the feature correlation between large numbers of patients' pre-treatment in vivo motion images (4D-CBCT, 4D-CT, 4D-MR, etc.) and the intra-treatment 4D optical body surface respiratory motion signal, establishes a dynamic mapping relationship, and then synthesizes real-time four-dimensional cone-beam CT images from pre-treatment cone-beam CT and synchronized intra-treatment optical body surface motion signals.
Embodiment 3:

A method for synthesizing real-time images from optical body surface motion signals corresponds one-to-one with the system for synthesizing real-time images from optical body surface motion signals; for embodiments, refer to Embodiments 1 and 2.
Embodiment 4:

A deep learning network model training system for synthesizing real-time images from optical body surface motion signals comprises a deep learning network model unit for training a deep learning network model to obtain the mapping relationship between the patient's body surface and in vivo structural data. The deep learning network model is trained on a four-dimensional image dataset, with the four-dimensional image of a phase i and the optical body surface data or outer body surface contour data of another phase j as input, and the image of phase j as output. Figs. 5-8 show the planning four-dimensional CT image of a patient at one phase and the corresponding real-time body surface information obtained during treatment by the optical body surface system: Fig. 5 is the body surface contour reconstructed by the optical body surface system; Fig. 6 is the outer body surface contour delineated on the CT images; Figs. 7 and 8 show the positions of the patient's heart and left lung, respectively, relative to the body surface at that phase.
Pre-processing of the training set data for the deep learning network model:

(1) Mine and analyze large numbers of patients' pre-treatment in vivo motion images such as 4D-CBCT, 4D-CT and 4D-MR; or make full use of the wealth of useful data contained in conventional 3D-CBCT so as to broaden the applicability of the model. The present invention uses the patient's 4D-CT prior anatomy (4D-MRI may be included as well, if available) and the treatment-day anatomical information reflected by the kV two-dimensional projections acquired during the CBCT scan to construct a pre-treatment multimodal 4D fused image training set.

4D-CT, 4D-MR, 4D-CBCT and the like can be used directly to construct the pre-treatment multimodal 4D image training set. The clinically more common 3D-CBCT also contains a wealth of useful information such as changes in patient anatomy; to overcome the motion-averaging effect of the relatively long 3D-CBCT acquisition time, this embodiment correlates the raw instantaneous kV two-dimensional projection data of the CBCT with the motion signals of the 4D-CT, using the patient's 4D-CT prior anatomy to compensate for the limited number of instantaneous kV projections while retaining the treatment-day anatomical information reflected by those projections. The specific method of reconstructing a 4D image from 3D images may be that of Embodiment 2.

(2) Convert the optical body surface data of different patients, or the outer body surface contour data extracted from CBCT images, into standardized data as input to the deep learning model. The optical body surface data are obtained by dividing the body surface into a mesh and taking the coordinates of each mesh point; the outer body surface contour data are the outer contour of the skin on a three- or four-dimensional image.

Patients' CT, MR or CBCT images usually store voxel distributions on a three-dimensional Cartesian grid according to the DICOM standard, whereas the optical body surface system triangulates the surface images captured by its sensors, producing triangulated data whose element distribution varies considerably with patient body habitus. Although a deep learning neural network does not strictly constrain the size, orientation or resolution of its input images, it still requires a unified basic format. The network trained by the present invention takes body surface data as input; to facilitate subsequent model training, both the optical body surface signal and the outer body surface contour data extracted from CBCT images are converted into standardized data for correlation. The body surface data are standardized either as the two-dimensional height distribution of each surface point relative to the coronal plane through the isocenter, as shown in Fig. 3, or as the three-dimensional mask map of the body surface contour, as shown in Fig. 4; both are acceptable as input data for the deep learning neural network.

DICOM data are converted into standardized data by writing programs with medical image processing tools of similar functionality, such as the Python-based Pydicom, SimpleITK and Scikit-Image packages: the CONTOUR DATA tag is extracted from the DICOM file to obtain the body surface boundary; a polygon algorithm yields the points inside the boundary; counting the in-surface points from the isocenter height along the direction perpendicular to the coronal plane gives the two-dimensional height distribution of the body surface, or a three-dimensional body surface mask map can be generated directly. The optical body surface signal is standardized by projecting the triangle vertices onto the coronal plane through the isocenter, obtaining the surface height at each projected location from the point coordinates, and interpolating over the whole plane to obtain a two-dimensional height map; from the surface heights, the body surface boundary can be delineated on each slice to further generate a three-dimensional body surface mask map.

Directly reconstructing the 4D optical surface of the skin yields richer real-time dynamic spatial data; systems such as Catalyst (C-RAD, Sweden), used in this embodiment, achieve sub-millimeter surface setup accuracy and long-term stability and support non-rigid registration in six degrees of freedom. Although still limited by the intrinsic precision of indirect surface signals and by individual variability, combining large volumes of patient 4D surface motion data with other multimodal 4D in vivo dynamic images through deep learning can provide richer and more accurate correlation features for building a synchronized surface-to-internal dynamic mapping model.
Construction and application of the deep learning network model: the deep learning model for the 4D fused image dataset is modified by adding an input layer — the standardized body surface feature data — and retrained on the 4D fused image dataset to build an image generation model that dynamically maps the optical body surface motion signal to the in vivo anatomy contained in the fused images.

Model training stage:

An existing deep learning image generation network is modified by adding an input layer for the standardized body surface data and retrained on the 4D fused image dataset. The model input is the fused image of one phase (e.g., phase i) together with the 3D body surface mask map or 2D body surface height map of another phase j (j ≠ i); the output is the 3D-CBCT image of phase j (primarily to ensure the accurate position of the target volume). The loss function is the sum of squared deviations with an L1 regularization term, and the model parameters are iteratively optimized along the gradient of the loss function. The model output is a synthesized real-time dynamic 4D-CBCT whose phases follow the changes of the optical body surface motion signal.

This embodiment employs three progressively more capable model families — a U-shaped convolutional neural network, a generative adversarial network, and a deep recurrent neural network — making full use of the strengths and characteristics of each. The U-shaped convolutional neural network is a comparatively simple image generation network: it adopts a U-shaped architecture with skip connections between the corresponding downsampling and upsampling layers at the two ends, allowing the model to fully exploit the multi-scale features of the image; its output is an image of the same size as the input. The generative adversarial network (GAN) uses two convolutional neural networks as image generator and discriminator, respectively, and improves the generator's predictions through their adversarial interplay; the present invention uses the trained U-shaped convolutional neural network as the generator of the GAN model and adds a convolutional neural network as the discriminator to improve prediction. On this basis, a deep recurrent neural network can further mine the correlations between different phases to address the time-series nature of the signal: the output at one moment serves as one of the inputs at the next moment (or several moments later) and influences its output, thereby extracting the auto-correlation of the time series. Once a well-performing image generation network has been obtained by training the first two models, the body surface data can be fed in as a periodic signal, and the CBCT image output at each moment serves as one of the inputs at the next moment.
Embodiment 5:

A deep learning network model training method for synthesizing real-time images from optical body surface motion signals, as shown in Fig. 5, corresponds one-to-one with the deep learning network model training system for synthesizing real-time images from optical body surface motion signals. For details, see Embodiment 6.
The synthesized real-time dynamic 4D-CBCT, phased to the optical body surface motion signal and obtained by the technical solution of the present invention, is validated in the following ways; it is accurate, real-time and reliable and can be applied in conventional radiotherapy.

(1) Using the 4D XCAT digital phantom, various normal and abnormal breathing patterns, anatomies and motion changes of patients are simulated by setting different software parameters, generating virtual patient images corresponding to different imaging modalities (4D-CT, 4D-MRI and 4D-CBCT) that are applied to the model to preliminarily verify its accuracy.

(2) Optical body surface signals and 4D-CBCT data of a motion simulation phantom are acquired synchronously. The phantom's 4D-CT, 4D-MRI and similar data are fed to the model, and the synthesized 4D-CBCT output is compared with the known same-phase 4D-CBCT to assess the accuracy of the method, varying the phantom's respiratory phase settings. The phantom can also be driven with the real breathing patterns of different patients to better simulate complex clinical scenarios, and the accumulated data can supplement the model training set.

(3) A navigation fiducial (Beacon) is implanted in the motion insert of the phantom; using the Beacon radiofrequency position tracked by the Calypso real-time localization system, or the real-time fiducial position obtained by fluoroscopy (simulating an ordinary implanted fiducial without radiofrequency emission), as reference, the accuracy and sensitivity of the synthesized 4D-CBCT in monitoring the position of the moving target are evaluated.

The above validation results are fed back to further debug and iteratively optimize the model.

The applicant, Peking University Cancer Hospital, is a large modern Grade-A tertiary cancer hospital integrating clinical care, research, teaching and prevention, with 730,000 outpatient visits in 2019. While providing first-class cancer care, it has built an advanced multimodal medical database and an interdisciplinary research collaboration platform, relying on the Key Laboratory of Carcinogenesis and Translational Research of the Ministry of Education, providing this project with ample cases and data for model training and validation. Pilot data preliminarily support the technical solution of the present invention, which is feasible overall. Based on data from 500 thoracic radiotherapy patients treated with optical surface guidance, using the patients' planning four-dimensional CT and the real-time surface information obtained by the optical body surface system, a good mapping relationship between body surface motion and the motion of internal structures was verified, with an accuracy above 95%, achieving the expected effect of mapping internal anatomical changes from body surface motion.

The beneficial effects that embodiments of the present invention may bring include, but are not limited to: by fusing multimodal medical big data to train a deep learning network model, the invention establishes an individualized dynamic mapping between body surface signal features and the position of the moving target volume in the body, and then uses the pre-treatment cone-beam CT and the intra-treatment optical body surface motion signal to synthesize a real-time dynamic 4D-CBCT synchronized with the optical signal, used to guide moving-target monitoring in precision radiotherapy and to provide evidence-based medical grounds and scientific methods for clinical decision-making. Compared with conventional 4D imaging, the 4D-CBCT synthesized by the artificial intelligence technology of the present invention is closer to the patient's true anatomy and breathing motion during treatment. The synthesized real-time 4D-CBCT can serve as a complementary technology to complex integrated devices such as MR-accelerators, achieving accurate, real-time, non-invasive, visualized dynamic tracking of moving targets during radiotherapy on existing conventional accelerator platforms with no additional imaging dose; it is low-cost, compatible and widely applicable, and favorable for clinical adoption and industrial application.
It should be noted that different embodiments may produce different beneficial effects; in different embodiments, the possible beneficial effects may be any one or a combination of the above, or any other obtainable beneficial effect.

The basic concepts have been described above. Obviously, for those skilled in the art, the above detailed disclosure is merely an example and does not constitute a limitation on the present invention.

Furthermore, unless explicitly stated in the claims, the order of the processing elements and sequences, the use of alphanumeric labels, or the use of other designations in the present invention is not intended to limit the order of the flows and methods of the invention. Although the above disclosure discusses, by way of various examples, some embodiments of the invention currently considered useful, it should be understood that such details serve an illustrative purpose only; the appended claims are not limited to the disclosed embodiments but, on the contrary, are intended to cover all modifications and equivalent combinations that conform to the spirit and scope of the embodiments of the invention.

Likewise, it should be noted that, to simplify the presentation of the disclosure and aid understanding of one or more embodiments of the invention, the foregoing description sometimes groups multiple features into a single embodiment, drawing, or its description. This method of disclosure, however, does not imply that the subject matter of the invention requires more features than are recited in the claims.

Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the invention. Other variations may also fall within the scope of the invention. Accordingly, by way of example and not limitation, alternative configurations of the embodiments of the invention may be regarded as consistent with the teachings of the invention; the embodiments of the invention are therefore not limited to those explicitly presented and described herein.

Claims (10)

  1. A system for synthesizing real-time images from optical body surface motion signals, characterized by comprising: an acquisition unit for acquiring pre-treatment images of a patient and real-time optical body surface data during treatment; and a synthesis unit for synthesizing the acquired pre-treatment images and real-time optical body surface data into real-time images synchronized with changes in the optical body surface motion signal according to an established mapping relationship.
  2. The system of claim 1, characterized in that the acquired pre-treatment image is a directly acquired four-dimensional pre-treatment image, or a four-dimensional image reconstructed from three-dimensional pre-treatment images of the patient.
  3. The system of claim 1, characterized in that the real-time optical body surface data are standardized before the acquired pre-treatment images and real-time optical body surface data are synthesized into real-time images synchronized with changes in the optical body surface motion signal according to the mapping relationship.
  4. The system of claim 1, 2 or 3, characterized by further comprising a deep learning network model unit embodying the mapping relationship, which takes the pre-treatment image of a phase i and the optical body surface data of a phase j during treatment as input to the deep learning network model and outputs the image of phase j, the images of consecutive phases j being combined into a real-time dynamic four-dimensional image synchronized with changes in the optical body surface motion signal.
  5. A method for synthesizing real-time images from optical body surface motion signals, characterized by: acquiring pre-treatment images of a patient and real-time optical body surface data during treatment; and synthesizing the acquired pre-treatment images and real-time optical body surface data into real-time images synchronized with changes in the optical body surface motion signal according to an established mapping relationship.
  6. The method of claim 5, characterized in that the acquired pre-treatment image is a directly acquired four-dimensional pre-treatment image, or a four-dimensional image reconstructed from three-dimensional pre-treatment images of the patient.
  7. The method of claim 5, characterized in that the real-time optical body surface data are standardized before the acquired pre-treatment images and real-time optical body surface data are synthesized into real-time images synchronized with changes in the optical body surface motion signal according to the mapping relationship.
  8. The method of claim 5, 6 or 7, characterized in that the mapping relationship is obtained through a deep learning network model: the pre-treatment image of a phase i and the optical body surface data of a phase j during treatment are input into the deep learning network model, which outputs the image of phase j; the images of consecutive phases j are combined into a real-time dynamic four-dimensional image synchronized with changes in the optical body surface motion signal.
  9. A deep learning network model training system for synthesizing real-time images from optical body surface motion signals, characterized by comprising a deep learning network model unit for training a deep learning network model to obtain the mapping relationship between the patient's body surface and in vivo structural data; the deep learning network model is trained on a four-dimensional image dataset, with the four-dimensional image of a phase i and the optical body surface data or outer body surface contour data of another phase j as input, and the image of phase j as output.
  10. A deep learning network model training method for synthesizing real-time images from optical body surface motion signals, characterized by training a deep learning network model to obtain the mapping relationship between the patient's body surface and in vivo structural data; the deep learning network model is trained on a four-dimensional image dataset, with the four-dimensional image of a phase i and the optical body surface data or outer body surface contour data of another phase j as input, and the image of phase j as output.
PCT/CN2020/102208 2020-07-15 2020-07-15 一种利用光学体表运动信号合成实时图像的方法及系统 WO2022011617A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2020/102208 WO2022011617A1 (zh) 2020-07-15 2020-07-15 一种利用光学体表运动信号合成实时图像的方法及系统
CN202080001304.3A CN112154483A (zh) 2020-07-15 2020-07-15 一种利用光学体表运动信号合成实时图像的方法及系统
US17/037,591 US11748927B2 (en) 2020-07-15 2020-09-29 Method and system for synthesizing real-time image by using optical surface motion signals

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/102208 WO2022011617A1 (zh) 2020-07-15 2020-07-15 一种利用光学体表运动信号合成实时图像的方法及系统

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/037,591 Continuation US11748927B2 (en) 2020-07-15 2020-09-29 Method and system for synthesizing real-time image by using optical surface motion signals

Publications (1)

Publication Number Publication Date
WO2022011617A1 true WO2022011617A1 (zh) 2022-01-20

Family

ID=73887387

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/102208 WO2022011617A1 (zh) 2020-07-15 2020-07-15 一种利用光学体表运动信号合成实时图像的方法及系统

Country Status (3)

Country Link
US (1) US11748927B2 (zh)
CN (1) CN112154483A (zh)
WO (1) WO2022011617A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11403641B2 (en) * 2019-06-28 2022-08-02 Paypal, Inc. Transactional probability analysis on radial time representation
WO2022165812A1 (zh) * 2021-02-07 2022-08-11 北京肿瘤医院(北京大学肿瘤医院) 一种利用光学体表运动信号合成实时图像的系统
US20230377724A1 (en) * 2022-05-19 2023-11-23 Elekta Limited Temporal prediction in anatomic position monitoring using artificial intelligence modeling
CN115154930B (zh) * 2022-07-11 2024-08-06 苏州雷泰医疗科技有限公司 基于独立滑环的低剂量cbct摆位方法及设备
WO2024108409A1 (zh) * 2022-11-23 2024-05-30 北京肿瘤医院(北京大学肿瘤医院) 一种基于四维体表呼吸信号的非接触式四维成像方法和系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060074292A1 (en) * 2004-09-30 2006-04-06 Accuray, Inc. Dynamic tracking of moving targets
CN101623198A (zh) * 2008-07-08 2010-01-13 深圳市海博科技有限公司 动态肿瘤实时跟踪方法
CN101628154A (zh) * 2008-07-16 2010-01-20 深圳市海博科技有限公司 基于预测的图像引导跟踪方法
CN101972515A (zh) * 2010-11-02 2011-02-16 华中科技大学 图像和呼吸引导的辅助放疗床垫系统
CN110490851A (zh) * 2019-02-15 2019-11-22 腾讯科技(深圳)有限公司 基于人工智能的乳腺图像分割方法、装置及系统
CN110974372A (zh) * 2020-01-03 2020-04-10 上海睿触科技有限公司 一种手术过程中病人运动位置实时跟踪装置

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10806947B2 (en) * 2013-03-12 2020-10-20 General Electric Company Methods and systems to determine respiratory phase and motion state during guided radiation therapy
EP3630286A4 (en) * 2017-05-30 2021-03-03 RefleXion Medical, Inc. PROCESS FOR IMAGE-GUIDED RADIATION THERAPY IN REAL-TIME
US10803987B2 (en) * 2018-11-16 2020-10-13 Elekta, Inc. Real-time motion monitoring using deep neural network

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060074292A1 (en) * 2004-09-30 2006-04-06 Accuray, Inc. Dynamic tracking of moving targets
CN101623198A (zh) * 2008-07-08 2010-01-13 深圳市海博科技有限公司 动态肿瘤实时跟踪方法
CN101628154A (zh) * 2008-07-16 2010-01-20 深圳市海博科技有限公司 基于预测的图像引导跟踪方法
CN101972515A (zh) * 2010-11-02 2011-02-16 华中科技大学 图像和呼吸引导的辅助放疗床垫系统
CN110490851A (zh) * 2019-02-15 2019-11-22 腾讯科技(深圳)有限公司 基于人工智能的乳腺图像分割方法、装置及系统
CN110974372A (zh) * 2020-01-03 2020-04-10 上海睿触科技有限公司 一种手术过程中病人运动位置实时跟踪装置

Also Published As

Publication number Publication date
US11748927B2 (en) 2023-09-05
CN112154483A (zh) 2020-12-29
US20220020189A1 (en) 2022-01-20

Similar Documents

Publication Publication Date Title
WO2022011617A1 (zh) 一种利用光学体表运动信号合成实时图像的方法及系统
CN107530552B (zh) 用于自适应放射治疗的运动靶的三维定位
US11904182B2 (en) Research and development of augmented reality in radiotherapy
US12014519B2 (en) Partial deformation maps for reconstructing motion-affected treatment dose
Garau et al. A ROI-based global motion model established on 4DCT and 2D cine-MRI data for MRI-guidance in radiation therapy
TW201249405A (en) System for facilitating operation of treatment delivery system and method for controlling operation of treatment delivery system
de Muinck Keizer et al. Fiducial marker based intra-fraction motion assessment on cine-MR for MR-linac treatment of prostate cancer
WO2022165812A1 (zh) 一种利用光学体表运动信号合成实时图像的系统
Li Advances and potential of optical surface imaging in radiotherapy
Mishra et al. Adaptation and applications of a realistic digital phantom based on patient lung tumor trajectories
Alam et al. Medical image registration: Classification, applications and issues
Huang et al. Deep learning‐based synthetization of real‐time in‐treatment 4D images using surface motion and pretreatment images: A proof‐of‐concept study
US11679276B2 (en) Real-time anatomic position monitoring for radiotherapy treatment control
US20220347493A1 (en) Real-time anatomic position monitoring in radiotherapy using machine learning regression
US20230285776A1 (en) Dynamic adaptation of radiotherapy treatment plans
Nie et al. Feasibility of MR-guided radiotherapy using beam-eye-view 2D-cine with tumor-volume projection
Ranjbar et al. Development and prospective in‐patient proof‐of‐concept validation of a surface photogrammetry+ CT‐based volumetric motion model for lung radiotherapy
Kim et al. Development of an optical‐based image guidance system: Technique detecting external markers behind a full facemask
Hindley et al. A patient-specific deep learning framework for 3D motion estimation and volumetric imaging during lung cancer radiotherapy
CN113041515A (zh) 三维图像引导运动器官定位方法、系统及存储介质
CN112997216B (zh) 一种定位图像的转化系统
WO2022198554A1 (zh) 三维图像引导运动器官定位方法、系统及存储介质
Finnegan et al. Cardiac substructure delineation in radiation therapy–A state‐of‐the‐art review
Price et al. 4D cone beam CT phase sorting using high frequency optical surface measurement during image guided radiotherapy
KR20140054783A (ko) 움직임 보상된 pet영상과 ct 및 mr 영상과의 4차원 융합 디스플레이 기술

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20945039

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20945039

Country of ref document: EP

Kind code of ref document: A1