WO2022198553A1 - Three-dimensional image-guided positioning method and system, and storage medium - Google Patents

Three-dimensional image-guided positioning method and system, and storage medium

Info

Publication number
WO2022198553A1
WO2022198553A1 · PCT/CN2021/082938 · CN2021082938W
Authority
WO
WIPO (PCT)
Prior art keywords
patient
image
virtual
tissue
tumor target
Prior art date
Application number
PCT/CN2021/082938
Other languages
English (en)
Chinese (zh)
Inventor
申国盛
李强
刘新国
戴中颖
金晓东
贺鹏博
Original Assignee
中国科学院近代物理研究所
Priority date
Filing date
Publication date
Application filed by 中国科学院近代物理研究所
Priority to PCT/CN2021/082938 priority Critical patent/WO2022198553A1/fr
Publication of WO2022198553A1 publication Critical patent/WO2022198553A1/fr

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61N: ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N5/00: Radiation therapy
    • A61N5/10: X-ray therapy; Gamma-ray therapy; Particle-irradiation therapy
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/08: Learning methods

Definitions

  • the invention relates to a method, a system and a storage medium for three-dimensional image-guided positioning in radiotherapy based on artificial intelligence technology and a DR (digital radiography) system, and belongs to the field of image-guided positioning of radiotherapy patients.
  • DR image-based guidance systems require two DR imaging devices intersecting at a large angle (close to or equal to 90 degrees) or a rotating DR device mounted on a C-arm to generate two large-angle intersecting DR images;
  • these DR images are registered with the digitally reconstructed radiographs (DRRs) generated from the patient's treatment-planning CT to obtain the offset of the patient's setup position and guide patient positioning, but this provides no true three-dimensional (3D) positioning guidance.
  • CBCT and rail-mounted CT imaging systems add extra radiation dose to patients, increasing the risk of complications; moreover, these systems are expensive, the density resolution of the obtained CBCT images is low, and registration accuracy and speed are therefore limited.
  • the purpose of the present invention is to provide a method, a system and a storage medium for 3D image-guided positioning, based on artificial intelligence technology and a DR system, that can achieve accurate 3D image guidance and obtain patient positioning information.
  • the present invention adopts the following technical solutions:
  • the present invention provides a three-dimensional image-guided positioning method, comprising:
  • the patient's 3D-CT image set is automatically segmented into tissues, organs and tumor target areas by an automatic segmentation algorithm, and the contour data of the tissues, organs and tumor target areas of the patient's treatment plan are reconstructed through the tissue-organ model reconstruction algorithm;
  • based on the real-time DR image of the patient, an artificial intelligence network algorithm is used to generate the patient's virtual 3D-CT image set;
  • the patient's virtual 3D-CT image set is automatically segmented into virtual tissues and organs and tumor target areas by an automatic segmentation algorithm, and the patient's virtual tissues and organs and tumor target area contour data are reconstructed through the tissue-organ model reconstruction algorithm;
  • the real-time DR image of the patient is acquired by using a DR imaging device.
  • the DR imaging device includes a set of X-ray sources and a corresponding imaging flat panel;
  • the X-ray source is installed on the top of the treatment room and the imaging plate is installed on the floor of the treatment room, each moving along a small-angle track; or,
  • the X-ray source and the imaging plate are connected by a C-arm for small-angle movement.
  • the artificial intelligence network algorithm is obtained through training and verification, including:
  • part of the data in the established CT image data set is used as a training data set and the other part as a validation data set; a neural network model is constructed for training and validation, the weights and parameters of the artificial intelligence network are obtained through continuous iteration, and the trained artificial intelligence network model is thereby obtained.
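The iterative train-and-validate workflow in the bullet above can be sketched with a toy numpy stand-in. A linear model replaces the real DR-to-CT neural network, and the learning rate and epoch count are illustrative assumptions, not values from the patent:

```python
import numpy as np

def train_linear_proxy(x_train, y_train, epochs=200, lr=0.1):
    """Toy stand-in for the iterative training described above: fit weights
    by repeated gradient steps on the training set and return the final
    weights, as the trained 'network parameters' would be kept.

    The real model is a deep DR-to-CT network; this linear proxy only
    illustrates the train/iterate/keep-weights workflow."""
    w = np.zeros(x_train.shape[1])
    for _ in range(epochs):
        # Mean-squared-error gradient over the training set.
        grad = x_train.T @ (x_train @ w - y_train) / len(y_train)
        w -= lr * grad
    return w
```

In practice the held-out validation set would be evaluated after each iteration to decide when the weights are good enough to keep.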
  • the automatic segmentation algorithm adopts a deep learning-based convolutional neural network model, which can automatically segment tissues, organs and tumor target areas according to the input CT images.
  • the tissue and organ reconstruction algorithm can reconstruct 3D models of all or specified tissues and organs, and can render and display different tissues and organs in different colors and modes, making it convenient for users to observe and distinguish them.
  • the registration employs a tissue-organ registration algorithm for manual and/or automatic 3D model registration.
  • the present invention also provides a three-dimensional image-guided positioning system, the system comprising:
  • the organ reconstruction unit is configured to automatically segment the patient's 3D-CT image set into tissues, organs and tumor target areas by using an automatic segmentation algorithm, and to reconstruct the contour data of the tissues, organs and tumor target areas of the patient's treatment plan through the tissue-organ model reconstruction algorithm;
  • the virtual image generation unit, based on the real-time DR image of the patient, uses the artificial intelligence network algorithm to generate the virtual 3D-CT image set of the patient;
  • the virtual organ reconstruction unit uses an automatic segmentation algorithm to automatically segment the patient's virtual 3D-CT image set into virtual tissue organs and tumor target areas, and reconstructs the patient's virtual tissue organs and tumor target area contour data through the tissue-organ model reconstruction algorithm;
  • the positioning judgment unit registers the contour data of the tissues, organs and tumor target areas of the patient's treatment plan with the contour data of the virtual tissues, organs and tumor target areas, outputs the patient's positioning offset parameters, and judges whether the offset parameters meet the radiotherapy conditions: if not, the patient is guided to reposition; if so, positioning is complete.
  • the present invention further provides a processing device, which includes at least a processor and a memory; a computer program is stored in the memory, and when running the computer program the processor implements the three-dimensional image-guided positioning method of the first aspect of the present invention.
  • the present invention further provides a computer storage medium having computer-readable instructions stored thereon; the computer-readable instructions can be executed by a processor to implement the three-dimensional image-guided positioning method described in the first aspect of the present invention.
  • the present invention generates a virtual 3D-CT image of the patient's setup from a small number of DR images, performs 3D reconstruction and registration of the patient's treatment-plan 3D-CT and the virtual 3D-CT, obtains patient positioning information, and realizes accurate 3D image-guided radiotherapy, overcoming the defects and deficiencies of conventional DR-image and CBCT image guidance;
  • the present invention uses artificial intelligence technology to convert real-time 2D-DR images into virtual 3D-CT images, and performs 3D reconstruction and registration of virtual 3D-CT images and treatment plan 3D-CT images to realize 3D guidance in the true sense;
  • the present invention requires only a single DR imaging device, so the cost is low; compared with a CBCT system, it achieves 3D positioning guidance at a lower device cost while also delivering a lower additional radiation dose to the patient during imaging.
  • the present invention is suitable for patient positioning guidance of any radiotherapy system.
  • FIG. 1 is a flowchart of a method for 3D image-guided positioning provided by an embodiment of the present invention
  • FIG. 2 is a schematic diagram of the coordinates of a DR device according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of an artificial intelligence network algorithm according to an embodiment of the present invention.
  • spatially relative terms, such as "inner", "outer", "below", "above", etc., may be used herein to describe the relationship of one element or feature to another element or feature as shown in the figures.
  • This spatially relative term is intended to include different orientations of the device in use or operation other than the orientation depicted in the figures.
  • computer technology, especially artificial intelligence technology, has shown excellent performance in computer vision, medical image segmentation and multi-modal image generation, and more and more multi-modal image generation and automatic segmentation technologies are being realized; it is therefore both feasible and necessary to develop a method based on artificial intelligence technology that achieves 3D high-precision image guidance and verification of patient positioning while reducing the price of image-guided equipment.
  • the 3D image-guided positioning method based on artificial intelligence technology and DR system includes the following contents:
  • the DR imaging device in this embodiment includes a set of X-ray emission sources 1 and an imaging flat panel 2 corresponding thereto, which are used to obtain real-time DR images of patients.
  • the system can install the X-ray source 1 on the top of the treatment room and the imaging plate 2 on the floor of the treatment room, each moving along a small-angle track, with the movement controlled by the corresponding control system to ensure consistency of movement direction and position.
  • the X-ray source 1 and the imaging plate 2 can also be connected together using a C-shaped arm according to needs, so as to perform small-angle movements as a whole.
  • the DR imaging device of this embodiment can take the center point of the treatment room as the origin and perform small-angle rotation imaging to generate DR images at different angles; in the treatment-room coordinate system XYZ, the coordinate origin is the isocenter of the treatment-room beam, the X axis is parallel to the floor of the treatment room and points in the zero-degree direction of the treatment couch, the Y axis is parallel to the floor and points in the 90° direction of the couch, and the Z axis is perpendicular to the floor and points toward the ceiling of the treatment room.
  • the small angle in this embodiment is defined between -15 degrees and +15 degrees.
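As a sketch of the small-angle imaging geometry described above, the following computes the position of a ceiling-mounted X-ray source rotated about the isocenter; the mounting height and the choice of the X-Z rotation plane are illustrative assumptions, not values from the patent:

```python
import math

def source_position(angle_deg, source_height=2.5):
    """Position (x, y, z) of the ceiling-mounted X-ray source after
    rotating by `angle_deg` about the treatment-room isocenter (origin).

    Coordinates follow the text: Z is vertical, pointing at the ceiling.
    The rotation is assumed to occur in the X-Z plane, and
    `source_height` (metres) is an illustrative mounting distance."""
    if not -15.0 <= angle_deg <= 15.0:
        raise ValueError("small-angle imaging is limited to [-15, +15] degrees")
    a = math.radians(angle_deg)
    # Rotate the straight-overhead position (0, 0, h) about the Y axis.
    return (source_height * math.sin(a), 0.0, source_height * math.cos(a))
```

The imaging panel on the floor would follow the mirrored trajectory so that the source-panel axis always passes through the isocenter.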
  • S2: Build a data set of patient-radiotherapy DR images and corresponding 3D-CT images.
  • the 3D-CT image data set is used for training and verification of an artificial intelligence network algorithm model.
  • a DR imaging device is used to capture a DR image of a patient
  • a CT system is used to capture a 3D-CT image of the same part of the same patient, so that the DR image of the patient and the 3D-CT image are in one-to-one correspondence to establish a data set.
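The one-to-one pairing of DR images and 3D-CT volumes described above might be organized as follows; the patient-ID naming scheme and the 80/20 split fraction are illustrative assumptions, not from the patent:

```python
import random

def build_paired_dataset(patient_ids, train_fraction=0.8, seed=0):
    """Pair each patient's DR images with the matching 3D-CT volume and
    split the pairs into training and validation sets (steps S2/S4).

    `patient_ids` stands in for however the clinic indexes studies; each
    pair keeps the DR image and the CT of the same part of the same
    patient together, as the text requires."""
    pairs = [{"dr": f"{pid}_dr", "ct": f"{pid}_ct"} for pid in patient_ids]
    rng = random.Random(seed)        # fixed seed for a reproducible split
    rng.shuffle(pairs)
    cut = int(len(pairs) * train_fraction)
    return pairs[:cut], pairs[cut:]
```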
  • step S4: Use the DR image set from step S2 and the corresponding 3D-CT images to train and validate the artificial intelligence network algorithm model from step S3, obtaining the weights and parameters of the artificial intelligence network model; the parameters include the weight and parameters of each neuron of the network model;
  • N DR images and the corresponding M-layer 3D-CT images are input.
  • the value range of N is greater than or equal to 1, and the shooting angle of each DR image is different.
  • the more DR images are taken, the greater the additional radiation dose delivered to the patient and the greater the economic cost, so the value of N should not exceed 8.
  • the number of layers M is determined with reference to the number of CT layers in the treatment plan; generally, M is close to or equal to the number of treatment-plan CT layers, so that the generated virtual CT can be registered with the treatment-plan CT.
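A minimal sketch of the input-shape constraints stated above (N in [1, 8]; M close to the treatment-plan CT layer count). The 10% tolerance on M is an illustrative assumption, since the text only says "close to or equal to":

```python
def validate_inputs(n_dr, m_ct, plan_ct_layers):
    """Check the input-shape constraints described in the text: at least
    one DR view, no more than eight (to limit extra dose and cost), and
    a virtual-CT layer count close to the treatment-plan CT so the two
    volumes can be registered. The 10% closeness tolerance is assumed."""
    if not 1 <= n_dr <= 8:
        raise ValueError("N must be in [1, 8]")
    if abs(m_ct - plan_ct_layers) > 0.1 * plan_ct_layers:
        raise ValueError("M should be close to the plan-CT layer count")
    return True
```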
  • S5: Obtain the real-time DR images of the patient and, using the artificial intelligence network algorithm with the trained network weights and parameters, generate the current patient's virtual 3D-CT image from the real-time DR images; tissues and organs are then segmented from the generated patient 3D-CT;
  • the real-time DR image refers to a DR image taken of the patient before or during the current treatment fraction; this DR image is used to guide and verify the patient's current treatment setup.
  • S6: Build a deep-learning-based automatic segmentation algorithm for tissues and organs; after training and validation with CT images and the corresponding tissues and organs manually delineated by doctors, the algorithm can automatically and accurately segment the tissues and organs (such as skin, bone, etc.) and tumor target areas on input CT images;
  • an automatic segmentation algorithm for tissues and organs based on deep learning uses a deep learning convolutional neural network model to automatically segment tissues and organs according to the input CT images.
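The patent's segmenter is a trained convolutional network; as a hedged stand-in, the per-voxel labelling it produces can be illustrated with simple Hounsfield-unit thresholds (the threshold values are textbook approximations, not from the patent):

```python
import numpy as np

def threshold_segment(ct_hu):
    """Rule-based stand-in for the deep-learning segmentation: label
    voxels by approximate Hounsfield-unit ranges. The patent's actual
    segmenter is a trained CNN; these thresholds only illustrate the
    per-voxel labelling it produces."""
    labels = np.zeros(np.shape(ct_hu), dtype=np.uint8)  # 0 = air/background
    ct_hu = np.asarray(ct_hu)
    labels[ct_hu > -500] = 1                            # 1 = soft tissue
    labels[ct_hu > 300] = 2                             # 2 = bone
    return labels
```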
  • the training and verification data of the algorithm comes from experienced physicians.
  • the quality of the manually delineated tissues and organs and of the corresponding CT training and validation data sets must be guaranteed.
  • the algorithm can output contour sets of all or specified tissues, organs and tumor target areas for the subsequent step of 3D reconstruction and registration.
  • the 3D tissue and organ reconstruction algorithm can reconstruct all or specified tissues and organs, and render and display different tissues and organs in different colors and modes, making it convenient for users to observe and distinguish them.
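One simple way to recover a renderable surface from a segmented organ mask, as a stand-in for the patent's (unspecified) reconstruction algorithm, is to keep the boundary voxels of the binary mask:

```python
import numpy as np

def surface_voxels(mask):
    """Crude surface extraction for a binary organ mask: a voxel is on
    the surface if it is set but has at least one unset 6-neighbour.
    A stand-in sketch; a production reconstruction would typically build
    a triangle mesh (e.g. marching cubes) for rendering."""
    mask = np.asarray(mask, dtype=bool)
    padded = np.pad(mask, 1)                 # pad with False outside
    core = padded[1:-1, 1:-1, 1:-1]
    # True only where all six axis-aligned neighbours are inside the mask.
    interior = (
        padded[:-2, 1:-1, 1:-1] & padded[2:, 1:-1, 1:-1]
        & padded[1:-1, :-2, 1:-1] & padded[1:-1, 2:, 1:-1]
        & padded[1:-1, 1:-1, :-2] & padded[1:-1, 1:-1, 2:]
    )
    return core & ~interior
```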
  • S8: Use the 3D tissue and organ reconstruction algorithm to perform 3D reconstruction from the patient's virtual CT data and/or tissue and organ contour sets, obtaining models of the patient's virtual 3D tissues and organs, such as skin and bone;
  • the 3D tissue-organ registration algorithm can perform manual and/or automatic 3D model registration based on the reconstructed 3D tissue-organ models, and accurately output the offset parameters between the two models.
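A minimal sketch of the registration step: a translation-only offset between the planned and virtual organ models, computed from point-cloud centroids. The patent does not specify the algorithm; a clinical implementation would also solve for rotation (e.g. with ICP):

```python
import numpy as np

def translation_offset(plan_points, virtual_points):
    """Translation-only registration sketch: the offset between the
    centroids of the planned and virtual organ point clouds. This is the
    'positioning offset parameter' in its simplest form; rotation and
    robust matching are deliberately omitted."""
    plan = np.asarray(plan_points, dtype=float)
    virt = np.asarray(virtual_points, dtype=float)
    return virt.mean(axis=0) - plan.mean(axis=0)
```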
  • S10: Use the 3D model registration algorithm to perform automatic and/or manual registration calculations, taking as input the patient's virtual 3D tissues and organs and the partial or full 3D tissues, organs and contour sets of the patient's plan and real-time treatment.
  • the positioning accuracy of the patient can be verified with a single calculation after the patient completes setup or before treatment, or calculated at intervals of 3-10 minutes during treatment, outputting the patient's current positioning offset data;
  • step S11: Determine whether the offset data output in step S10 meet the set radiotherapy requirements: if they do not meet the patient's treatment requirements, guide the patient to re-setup according to the setup offset data and, after re-setup, return to step S5 to continue the setup verification process; if the setup offset data meet the treatment requirements, end the setup verification and begin treatment.
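Step S11's accept/re-setup decision can be sketched as a simple per-axis tolerance check; the 2 mm default tolerance is illustrative only, since the text says the real threshold comes from clinical laws, regulations and standards:

```python
def setup_decision(offset_mm, tolerance_mm=(2.0, 2.0, 2.0)):
    """Step S11 as a predicate: compare each axis of the positioning
    offset with a clinical tolerance. The 2 mm per-axis default is an
    assumed placeholder, not a value from the patent."""
    ok = all(abs(o) <= t for o, t in zip(offset_mm, tolerance_mm))
    return "start treatment" if ok else "re-position patient"
```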
  • the standard is determined jointly by doctors and engineering technicians according to radiotherapy laws, regulations and industry standards.
  • the present invention uses artificial intelligence technology to perform 3D reconstruction from simple 2D DR images, obtains a real-time virtual 3D-CT image of the patient, registers the reconstructed virtual 3D-CT with the 3D-CT image of the patient's treatment plan in three dimensions, accurately obtains the patient's precise 3D positioning offset parameters, and guides and verifies the patient's setup, ensuring the effect of radiation therapy.
  • this embodiment describes in detail the specific application process of the method for 3D image-guided positioning based on artificial intelligence technology and DR system, and the specific process is as follows:
  • a DR imaging system that can move through a small angle [-15°, +15°] is installed in the treatment room, and the equipment moves around the treatment center.
  • the network can reconstruct the virtual 3D-CT image of the patient from N (1 to 8) DR images.
  • the labeled DR images and corresponding 3D-CT images are used for training and verification, and the weight parameters of the artificial intelligence neural network model are obtained.
  • the network can automatically and accurately segment CT images, and obtain the tissues, organs and tumor target areas in the CT images.
  • the weight parameters of the network model are obtained.
  • Embodiment 1 provides a 3D image-guided positioning method.
  • this embodiment provides a 3D image-guided positioning system.
  • the guidance system provided in this embodiment may implement the 3D image-guided positioning method of Embodiment 1, and the guidance system may be implemented by software, hardware, or a combination of software and hardware.
  • the guidance system may include integrated or separate functional modules or functional units to perform the corresponding steps of each method of Embodiment 1. Since the guidance system of this embodiment is basically similar to the method embodiment, its description is relatively brief; for relevant parts, refer to the corresponding description of Embodiment 1. The embodiment of the guidance system described here is merely illustrative.
  • This embodiment provides a three-dimensional image-guided positioning system, which includes:
  • the organ reconstruction unit is configured to automatically segment the patient's 3D-CT image set into tissues, organs and tumor target areas by using an automatic segmentation algorithm, and to reconstruct the contour data of the tissues, organs and tumor target areas of the patient's treatment plan through the tissue-organ model reconstruction algorithm;
  • the virtual image generation unit, based on the real-time DR image of the patient, uses the artificial intelligence network algorithm to generate the virtual 3D-CT image set of the patient;
  • the virtual organ reconstruction unit uses an automatic segmentation algorithm to automatically segment the patient's virtual 3D-CT image set into virtual tissue organs and tumor target areas, and reconstructs the patient's virtual tissue organs and tumor target area contour data through the tissue-organ model reconstruction algorithm;
  • the positioning judgment unit registers the contour data of the tissues, organs and tumor target areas of the patient's treatment plan with the contour data of the virtual tissues, organs and tumor target areas, outputs the patient's positioning offset parameters, and judges whether the offset parameters meet the radiotherapy conditions: if not, the patient is guided to reposition; if so, positioning is complete.
  • this embodiment provides a processing device corresponding to the 3D image-guided positioning method provided in Embodiment 1; the processing device may be an electronic device used as a client, such as a mobile phone, a notebook computer, a tablet computer, or a desktop computer, to perform the method of Embodiment 1.
  • the processing device includes a processor, a memory, a communication interface and a bus, and the processor, the memory and the communication interface are connected through the bus to complete mutual communication.
  • the bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, and so on.
  • the memory may be a high-speed random access memory (RAM) and may also include non-volatile memory, such as at least one disk memory.
  • the processor may be any of various types of general-purpose processors, such as a central processing unit (CPU) or a digital signal processor (DSP), which is not limited herein.
  • the 3D image-guided positioning method of Embodiment 1 may be embodied as a computer program product, which may include a computer-readable storage medium on which computer-readable program instructions for executing the method described in Embodiment 1 are loaded.
  • a computer-readable storage medium may be a tangible device that retains and stores instructions for use by the instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any combination of the above.


Abstract

Three-dimensional (3D) image-guided positioning method and system, and storage medium. The method comprises: automatically segmenting a patient's 3D-CT image set by means of an automatic segmentation algorithm to obtain tissues, organs and tumor target areas, and reconstructing, by means of a tissue-organ model reconstruction algorithm, contour data of the tissues, organs and tumor target areas in the patient's treatment plan; generating a virtual 3D-CT image set of the patient by means of an artificial intelligence network algorithm on the basis of real-time DR images of the patient; automatically segmenting the patient's virtual 3D-CT image set by means of the automatic segmentation algorithm to obtain virtual tissues, organs and tumor target areas, and reconstructing contour data of the patient's virtual tissues, organs and tumor target areas; and registering the contour data of the tissues, organs and tumor target areas in the patient's treatment plan with the contour data of the virtual tissues, organs and tumor target areas, outputting patient positioning offset parameters, and determining whether a radiotherapy condition is satisfied: if not, guiding the patient to reposition; if so, completing the positioning.
PCT/CN2021/082938 2021-03-25 2021-03-25 Three-dimensional image-guided positioning method and system, and storage medium WO2022198553A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/082938 WO2022198553A1 (fr) 2021-03-25 2021-03-25 Three-dimensional image-guided positioning method and system, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/082938 WO2022198553A1 (fr) 2021-03-25 2021-03-25 Three-dimensional image-guided positioning method and system, and storage medium

Publications (1)

Publication Number Publication Date
WO2022198553A1 true WO2022198553A1 (fr) 2022-09-29

Family

ID=83395150

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/082938 WO2022198553A1 (fr) 2021-03-25 2021-03-25 Three-dimensional image-guided positioning method and system, and storage medium

Country Status (1)

Country Link
WO (1) WO2022198553A1 (fr)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090110145A1 (en) * 2007-10-25 2009-04-30 Tomotherapy Incorporated Method for adapting fractionation of a radiation therapy dose
CN102306239A (zh) * 2011-07-22 2012-01-04 李宝生 基于锥形束ct图像ct值校正技术的放疗剂量评估和优化方法
CN106097347A (zh) * 2016-06-14 2016-11-09 福州大学 一种多模态医学图像配准与可视化方法
CN106408509A (zh) * 2016-04-29 2017-02-15 上海联影医疗科技有限公司 一种配准方法及其装置
CN111968110A (zh) * 2020-09-02 2020-11-20 广州海兆印丰信息科技有限公司 Ct成像方法、装置、存储介质及计算机设备
CN112348857A (zh) * 2020-11-06 2021-02-09 苏州雷泰医疗科技有限公司 一种基于深度学习的放射治疗摆位偏移计算方法及系统


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116370848A (zh) * 2023-06-07 2023-07-04 浙江省肿瘤医院 放射治疗的摆位方法及系统
CN116370848B (zh) * 2023-06-07 2023-09-01 浙江省肿瘤医院 放射治疗的摆位方法及系统
CN116740768A (zh) * 2023-08-11 2023-09-12 南京诺源医疗器械有限公司 基于鼻颅镜的导航可视化方法、系统、设备及存储介质
CN116740768B (zh) * 2023-08-11 2023-10-20 南京诺源医疗器械有限公司 基于鼻颅镜的导航可视化方法、系统、设备及存储介质


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21932181

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21932181

Country of ref document: EP

Kind code of ref document: A1