CN112997216B - Conversion system of positioning image

Conversion system of positioning image

Info

Publication number
CN112997216B
CN112997216B (application number CN202180000471.0A)
Authority
CN
China
Prior art keywords
dimensional
image
projection
epid
model
Prior art date
Legal status
Active
Application number
CN202180000471.0A
Other languages
Chinese (zh)
Other versions
CN112997216A (en)
Inventor
张艺宝
黄宇亮
马文君
李晨光
王少彬
吴昊
刘宏嘉
Current Assignee
Peking University
Beijing Cancer Hospital
Original Assignee
Peking University
Beijing Cancer Hospital
Priority date
Filing date
Publication date
Application filed by Peking University and Beijing Cancer Hospital
Publication of CN112997216A
Application granted granted Critical
Publication of CN112997216B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T 7/344 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B 6/02 Devices for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B 6/03 Computerised tomographs
    • A61B 6/032 Transmission computed tomography [CT]
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B 6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B 6/5211 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61N ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N 5/00 Radiation therapy
    • A61N 5/10 X-ray therapy; Gamma-ray therapy; Particle-irradiation therapy
    • A61N 5/103 Treatment planning systems
    • A61N 5/1037 Treatment planning systems taking into account the movement of the target, e.g. 4D-image based planning
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61N ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N 5/00 Radiation therapy
    • A61N 5/10 X-ray therapy; Gamma-ray therapy; Particle-irradiation therapy
    • A61N 5/1048 Monitoring, verifying, controlling systems and methods
    • A61N 5/1064 Monitoring, verifying, controlling systems and methods for adjusting radiation treatment in response to monitoring
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Abstract

The invention discloses a conversion system for positioning images, comprising an image conversion unit that takes two-dimensional projections, derived from the positioning CT and associated with the corresponding respiratory phases, as reference standards, compares them against the respiratory motion signal acquired during treatment, and computes a real-time reference standard matched to the respiratory motion. The conversion system is closer to the patient's reference anatomy, can be applied to dynamic tracking across multiple respiratory phase states, and offers advantages such as efficiency, non-invasiveness, safety, reliability, and flexibility.

Description

Conversion system of positioning image
Technical Field
The invention relates to the technical field of medical equipment, and in particular to a conversion system for positioning images.
Background
Respiratory motion is one of the most common causes of the tumor moving off target in thoracic radiotherapy patients, and can readily lead to loss of tumor control and/or damage to critical normal organs. Because the amplitude and frequency of respiratory motion depend on many factors (patient sex and age, mental state, comorbidities, the immobilization device used for radiotherapy, respiratory motion management measures, and so on), the motion pattern exhibits complex individual differences and uncertainties. Prior patents fail to solve dynamic tracking of a moving tumor target region, mainly because, although 4DCBCT records how the same patient's anatomy changes over time, it currently exists only as a kilovoltage imaging modality: it cannot be used directly for megavoltage exit-beam monitoring, and the extra imaging dose to the patient is large, so it is not in common clinical use.
Although the interval between the MVCBCT acquired before the first treatment and the positioning CT is the shortest, and the MVCBCT is usually confirmed at the machine by the attending physician, studies show that large anatomical differences from the positioning CT may still exist. For example, when a long time elapses between the positioning CT scan and the first treatment, the shape, position, and size of the patient's target volume may already have changed significantly; this is common in radiotherapy centers with heavy treatment loads. Therefore, even if the exit-beam monitoring reference is established from the first MVCBCT, extra effort is required to verify its reliability during actual treatment, and once a large deviation exists it is difficult to remedy. The positioning CT is expected to provide an anatomically more reliable reference than the pre-first-treatment MVCBCT employed in the prior art.
Obtaining a reliable and safe reference standard is therefore a technical problem in urgent need of a solution.
Disclosure of Invention
The object of the invention is to solve the problem that the reference standard in the prior art is unreliable and unsafe, and to provide a conversion system for positioning images that is closer to the patient's reference anatomy, can be applied to dynamic tracking across multiple respiratory phase states, and retains the advantages of the prior art such as efficiency, non-invasiveness, safety, reliability, and flexibility.
To solve the above technical problem, the invention adopts the following technical solution: a conversion system for positioning images, comprising:
an image conversion unit, configured to take a two-dimensional projection, derived from the positioning CT and associated with the corresponding respiratory phase, as a reference standard, compare it with the respiratory motion information acquired during treatment, and obtain a real-time reference standard matched to the respiratory motion.
Preferably, the positioning CT is a three-dimensional CT or a four-dimensional CT.
Preferably, a comparison unit compares the treatment exit beam with the real-time reference standard, so that the state of the patient's treatment exit beam can be monitored.
Preferably, the image conversion unit generates two-dimensional projection maps at a specified angle from the different respiratory phases of the four-dimensional positioning CT; each two-dimensional projection map, associated with its corresponding respiratory phase, is used as input to the synthesis network model, whose output is the predicted EPID exit-beam projection, which serves as the reference standard.
Preferably, the image conversion unit generates corresponding three-dimensional predicted megavoltage images from the different respiratory phases of the four-dimensional positioning CT, and then generates, by projection, two-dimensional real-time reference standards for different respiratory phases at different angles.
Preferably, the synthesis network model comprises a U-net model, a CycleGAN model, an attention-based Transformer model, or another generative convolutional neural network.
Preferably, the system further comprises a model training unit, which establishes a two-dimensional projection database or a three-dimensional image database of the positioning CT at different respiratory phases and different projection angles; the two-dimensional projections or three-dimensional images are input into the synthesis network model, whose output is a virtual megavoltage image, and the synthesis network model is trained by comparing the virtual megavoltage image with the registered EPID projection or the simulated EPID image until the difference between them reaches a predetermined value, yielding a trained synthesis network model.
Preferably, a two-dimensional simulated EPID image, or a three-dimensional simulated megavoltage CT image reconstructed from it, is obtained through the Monte Carlo model and used as a training set or a validation set.
Preferably, establishing the two-dimensional projection database of the positioning CT at different respiratory phases and different projection angles comprises: registering the two-dimensional projections of the positioning CT at different respiratory phases and different projection angles with the megavoltage cone-beam CT digitally reconstructed projections or the phase-resolved EPID projections, to obtain the registered EPID projections and the registered two-dimensional projections of the positioning CT.
Preferably, the two-dimensional projections of the registered positioning CT are CBCT two-dimensional projection image sequences of different respiratory phases at the same projection angle, denoted Icbct1, Icbct2, …, Icbctn, where n is the number of respiratory phase bins, and each patient generates multiple two-dimensional image samples.
Preferably, establishing the three-dimensional image database of the positioning CT at different respiratory phases comprises: registering, by projection, the three-dimensional images of the positioning CT at different respiratory phases with the megavoltage cone-beam CT three-dimensional images or the phase-resolved EPID projections, to obtain a registered three-dimensional image sample database of the positioning CT.
Preferably, the sample data comprise a training set and a test set, wherein the training set is used to train a regression network or a generative network, and the test set is used for the final evaluation of the synthesis network's performance.
Preferably, the sample data further comprise a validation set for evaluating model performance and tuning hyperparameters so that the model's prediction on the training set reaches the intended effect.
Advantageous effects:
The two-dimensional projection reference standards for the different respiratory phases are obtained from the positioning CT, are closer to the patient's anatomy, can be applied to dynamic tracking of a patient reference anatomy spanning multiple respiratory phase states, and offer advantages such as efficiency, non-invasiveness, safety, reliability, and flexibility. The invention constructs a model that can dynamically track the patient's target volume and detect abnormalities in time, realizing accurate respiratory motion management and non-invasive dynamic monitoring of in vivo dose during radiotherapy.
Drawings
FIG. 1 is a schematic diagram of a conversion system for positioning images according to the present invention;
FIG. 2 is a schematic diagram of a conversion system for positioning images according to the present invention;
FIG. 3 is a schematic diagram of the U-net regression-network virtual projection synthesis model of the present invention, in which the output layer obtains the two-dimensional projection map of the corresponding input respiratory phase through a 1 × 1 convolution;
FIG. 4 is a schematic diagram of the CycleGAN virtual projection synthesis model of the present invention;
FIG. 5 is a schematic flow chart of a conversion system for positioning images according to the present invention.
Detailed Description
The following detailed description of the preferred embodiments of the invention, taken in conjunction with the accompanying drawings, will make the advantages and features of the invention easier for those skilled in the art to understand, and thus more clearly defines the scope of protection of the invention.
To illustrate the technical solutions of the embodiments more clearly, the drawings used in describing the embodiments are briefly introduced below. The drawings in the following description are only examples or embodiments of the invention; the technical features of the various embodiments can be combined with one another to form practical solutions achieving the purpose of the invention, and a person skilled in the art can apply the invention to other similar situations according to the drawings without creative effort. Unless otherwise apparent from the context or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
It should be understood that "system" and "unit" as used herein are a means of distinguishing different components, elements, parts, portions, or assemblies at different levels; other words may be substituted if they accomplish the same purpose. Also, "system" and "unit" may be implemented in software or hardware, and may name a physical or virtual portion having the corresponding function.
Flow charts are used in the present invention to illustrate the operations performed by a system according to embodiments of the invention. It should be understood that these operations are not necessarily performed in the exact order shown; the steps may instead be processed in reverse order or simultaneously, other operations may be added to the flows, and one or more steps may be removed from them. The technical solutions in the embodiments can be combined with one another to achieve the purpose of the invention.
Embodiment one: as shown in FIG. 1, the technical solution adopted by the invention is a conversion system for positioning images, comprising: an image conversion unit, configured to obtain two-dimensional projections associated with the corresponding respiratory phases from the positioning CT, take them as reference standards, compare them with the respiratory motion information acquired during treatment, and obtain a real-time reference standard matched to the respiratory motion.
Ways of acquiring the respiratory motion signal include, but are not limited to: an optical body-surface system monitoring body-surface motion information; a respiratory gating system using infrared light to track the motion waveform of a reflective marker on the body surface; implanted beacons whose signals are acquired to obtain the real-time position of the target volume or organ; and tracking of anatomical landmarks (such as the diaphragm).
The respiratory motion information acquired during treatment is compared with the respiratory motion information acquired at positioning CT to find the closest respiratory phase; the two-dimensional projection associated with that respiratory motion is then taken as the real-time reference standard.
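By way of illustration only (this sketch is not part of the patent; the bin count, bin edges, and the lookup table reference_projections are assumed names and values), the phase-matching lookup just described might be realized as follows:

```python
import numpy as np

def assign_phase(amplitude, inhaling, bin_edges):
    """Map a respiratory sample to one of n phase bins.

    Bins 0..n/2-1 cover inhalation, n/2..n-1 exhalation, mirroring a
    common amplitude-plus-direction phase sorting scheme (an assumption).
    """
    half = len(bin_edges) - 1                      # bins per breathing direction
    idx = np.clip(np.searchsorted(bin_edges, amplitude) - 1, 0, half - 1)
    return int(idx) if inhaling else int(2 * half - 1 - idx)

# Hypothetical pre-computed lookup: phase index -> 2D reference projection
n_phases = 10
bin_edges = np.linspace(0.0, 1.0, n_phases // 2 + 1)   # normalized amplitude
reference_projections = {k: np.zeros((512, 512)) for k in range(n_phases)}

# During treatment: one surrogate sample -> real-time reference standard
phase = assign_phase(amplitude=0.37, inhaling=True, bin_edges=bin_edges)
realtime_reference = reference_projections[phase]
```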
This real-time reference standard matched to the respiratory motion is closer to the patient's anatomy, can be applied to dynamic tracking of a patient reference anatomy spanning multiple respiratory phase states, and offers advantages such as efficiency, non-invasiveness, safety, reliability, and flexibility.
Embodiment two: as shown in FIG. 2, building on embodiment one,
further, a comparison unit compares the treatment exit beam with the real-time reference standard, so that the state of the patient's treatment exit beam can be monitored.
The treatment exit beam carries both time-resolved and anatomical spatial information. Comparing it with the real-time reference standard makes it possible to monitor whether the patient's treatment exit beam is properly configured, to raise an alarm and suspend treatment in time when an abnormal situation occurs, and to adjust the exit beam in real time if necessary. This improves patient treatment safety and prognosis and reduces the risk of major radiotherapy accidents. Further, the positioning CT is a three-dimensional CT or a four-dimensional CT; both can fulfil the purpose of the invention and generate two-dimensional projection maps at a specified angle.
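As a minimal sketch of the exit-beam comparison described above (the RMS metric and the 0.05 alarm threshold are assumptions, not values prescribed by the patent):

```python
import numpy as np

def exit_beam_deviation(measured, reference):
    """Root-mean-square difference between the measured EPID frame
    and the real-time reference standard, both scaled to [0, 1]."""
    m = measured / measured.max()
    r = reference / reference.max()
    return float(np.sqrt(np.mean((m - r) ** 2)))

def monitor_frame(measured, reference, threshold=0.05):
    """Return True if treatment should be suspended for this frame."""
    return exit_beam_deviation(measured, reference) > threshold
```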
Further, the image conversion unit generates two-dimensional projection maps at a specified angle from the different respiratory phases of the four-dimensional positioning CT; each map, associated with its corresponding respiratory phase, is used as input to the synthesis network model, whose output is the predicted EPID exit-beam projection, which serves as the reference standard.
EPID is the abbreviation for electronic portal imaging device.
The specified angle may be any one or more angles within 360 degrees, or the full 360 degrees, and can be adjusted according to the angles required for the patient's treatment.
When the system is applied, a radiotherapy plan is designed for the target patient; the field MLC positions are matched onto the EPID exit-beam projections output by the synthesis network model, and the real-time reference standard matched to the patient's dynamic respiratory phase information is obtained from the respiratory phase information acquired during treatment. The method is therefore suitable for patients whose beam delivery is strongly affected by motion, in particular thoraco-abdominal tumor patients with large respiratory motion.
Further, the image conversion unit generates corresponding three-dimensional predicted megavoltage images from the different respiratory phases of the four-dimensional positioning CT, and then generates, by projection, two-dimensional real-time reference standards for different respiratory phases at different angles.
Further, the synthesis network model includes, but is not limited to, a U-net model, a CycleGAN model, an attention-based Transformer model, or another generative convolutional neural network. The U-net model is a U-net deep learning network; the CycleGAN model is a CycleGAN cycle-consistent generative adversarial network.
For a patient strongly affected by respiration and having four-dimensional CT, the U-net deep learning network and the CycleGAN cycle-consistent generative adversarial network can be used to synthesize projection images at different projection angles and in different respiratory states from the CT images. Because the parameter scale a network model needs for three-dimensional images far exceeds that for two-dimensional images, the four-dimensional CT can be regarded as several different three-dimensional CTs of the same patient; the invention aims to synthesize two-dimensional megavoltage projections from the three-dimensional CT through digital reconstruction, specifically: the three-dimensional CT of each respiratory phase generates a two-dimensional projection map at a specified angle, the map is associated with the corresponding respiratory phase and projection angle, and the generated two-dimensional image is used as input to the synthesis network.
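The digital reconstruction step can be sketched under a simplifying parallel-beam assumption; the patent leaves the projection algorithm open (pseudo-simulation, splatting, and the like are mentioned later), so the toy line-integral DRR below, including its HU-to-attenuation scaling, is purely illustrative:

```python
import numpy as np
from scipy.ndimage import rotate

def parallel_beam_drr(ct_volume, angle_deg, mu_water=0.02):
    """Toy digitally reconstructed radiograph (DRR).

    ct_volume: 3D array of Hounsfield units, axes (z, y, x).
    angle_deg: projection angle around the z (cranio-caudal) axis.
    Returns a 2D attenuation line-integral image (z, detector-u).
    """
    # HU -> linear attenuation coefficient (rough water scaling, assumed)
    mu = np.clip(mu_water * (1.0 + ct_volume / 1000.0), 0.0, None)
    # Rotate in the axial (y, x) plane, then integrate along one axis
    rotated = rotate(mu, angle_deg, axes=(1, 2), reshape=False, order=1)
    return rotated.sum(axis=1)    # line integral along the beam direction

# Example: one projection per respiratory phase at a fixed angle
phases = [np.random.rand(32, 64, 64) * 1000 for _ in range(4)]  # stand-in 4DCT
projections = [parallel_beam_drr(p, angle_deg=45.0) for p in phases]
```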
Application stage, U-net deep learning network input: two-dimensional projections or three-dimensional images of the patient's four-dimensional positioning CT at different respiratory phases and different projection angles.
Application stage, U-net deep learning network output: the multileaf collimator (MLC) positions extracted from the radiotherapy plan are used to obtain the real-time matched field as the exit-beam reference standard, and the actual treatment exit beam is compared with it to determine whether an abnormality has occurred.
Further, as shown in FIGS. 2 and 5, the system also includes a model training unit, which establishes a two-dimensional projection database or a three-dimensional image database of the positioning CT at different respiratory phases and different projection angles, inputs the two-dimensional projections or three-dimensional images into the synthesis network model, takes virtual megavoltage images as output, and trains the synthesis network model by comparing the virtual megavoltage image with the registered EPID projection or the simulated EPID image until the difference between them reaches a predetermined value, yielding a trained synthesis network model.
As shown in FIG. 3, the U-net deep learning network is optimized until the difference between the virtual megavoltage image and the registered EPID projection reaches a predetermined value: the image synthesis performance of the U-net is improved, and the difference continuously reduced, by optimizing the network structure, optimizing the training parameters, adjusting the loss function, and enlarging the training set through data augmentation. The predetermined value is a value or range acceptable to the user. The U-net model makes good use of both global and local image features, which helps reduce common scatter artifacts and local artifacts such as noise.
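For concreteness, a compact 2D U-net of the kind FIG. 3 suggests could be sketched as follows (PyTorch; the depth, channel counts, L1 training loss, and learning rate are assumptions, while the 1 × 1 convolutional output layer mirrors the description of FIG. 3):

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class UNet2D(nn.Module):
    """Minimal U-net: kV projection in, virtual MV (EPID-like) image out."""
    def __init__(self, ch=(1, 32, 64, 128)):
        super().__init__()
        self.enc1, self.enc2 = conv_block(ch[0], ch[1]), conv_block(ch[1], ch[2])
        self.bottom = conv_block(ch[2], ch[3])
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(ch[3], ch[2], 2, stride=2)
        self.dec2 = conv_block(ch[3], ch[2])
        self.up1 = nn.ConvTranspose2d(ch[2], ch[1], 2, stride=2)
        self.dec1 = conv_block(ch[2], ch[1])
        self.head = nn.Conv2d(ch[1], 1, kernel_size=1)  # 1x1 conv output layer

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottom(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# One training step against a registered EPID projection (L1 loss assumed)
net, loss_fn = UNet2D(), nn.L1Loss()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
kv_proj, epid_ref = torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128)
loss = loss_fn(net(kv_proj), epid_ref)
opt.zero_grad(); loss.backward(); opt.step()
```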
Quality evaluation criteria for the U-net model: compute the correlation (for example mutual information, mean squared error, and the like) between the virtual image output by the U-net and the EPID projection (obtained after registration with the input projection); or compute the correlation between the U-net output and a simulated EPID projection generated by Monte Carlo simulation, which has the advantage of higher accuracy. If used to tune the hyperparameters of the U-net model, the EPID projection serves as validation-set data; if used to verify the final performance of the model, the EPID projection serves as test-set data.
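The two correlation measures named here could be computed as in the following sketch; the 64-bin joint histogram used for the mutual information estimate is an assumed choice:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return float(np.mean((a - b) ** 2))

def mutual_information(a, b, bins=64):
    """Histogram estimate of the mutual information between two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0                       # avoid log(0) on empty histogram cells
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))
```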
The CycleGAN synthesis network model comprises a generator and a discriminator: the generator processes the input data to make it mimic real data, the discriminator is responsible for distinguishing the generator's simulated data from real data, and through continued adversarial play the simulation ability of the one and the discrimination ability of the other are each strengthened, until the adversarial process reaches a dynamic equilibrium.
The constructed generative adversarial network (GAN) thus consists of two parts, a generator and a discriminator: the former processes input data to mimic real data, and the latter distinguishes the generator's simulated data from real data; after continued play, the abilities of both are strengthened until dynamic equilibrium is reached. CycleGAN aims to learn the mapping from image domain X to image domain Y. Besides the generator G1 constructing X to Y and the discriminator D1 distinguishing G1(X) from Y, it also constructs an inverse adversarial network, namely a generator G2 from Y to X and a discriminator D2 distinguishing G2(Y) from X, forming a ring network (the overall model framework is shown in FIG. 4). In addition to the forward generation errors, the total loss function needs a cycle-consistency error term:
L(G1,G2,D1,D2)=LGAN(G1,D1,X,Y)+LGAN(G2,D2,Y,X)+Lcycle(G1,G2)
The loss terms in the formula are defined as follows; through multiple iterations, the final generator G1 can complete the task of generating CBCT projection maps that simulate different sites.

LGAN(G1, D1, X, Y) = E(y~Y)[log D1(y)] + E(x~X)[log(1 - D1(G1(x)))]

LGAN(G2, D2, Y, X) = E(x~X)[log D2(x)] + E(y~Y)[log(1 - D2(G2(y)))]

Lcycle(G1, G2) = E(x~X)[||G2(G1(x)) - x||1] + E(y~Y)[||G1(G2(y)) - y||1]
The design of network G1, which synthesizes CBCT projections from CT projections, can directly reference the single-site U-net model (as shown in FIG. 3); the network G2, which synthesizes CT projections from CBCT projections, can be designed by inverting the input and output of G1, i.e. sharing the two generators' network parameters, or by retraining a set of parameters with the inverted structure. The layout of the discriminator networks D1 and D2 can be constructed with reference to FIG. 4.
The CycleGAN is essentially two mirror-symmetric GANs forming a ring network. The two GANs share the two generators, and each has its own discriminator, so there are two discriminators and two generators in total.
Principle of the one-directional GAN: the network contains two different sub-networks, a generator G and a discriminator D, and there are two data domains, X (input data) and Y (the target data corresponding to X). G is responsible for disguising data from the X domain as target data and hiding it among the real target data, while D is responsible for separating the forged data from the real target data. As the two play against each other, G's forging ability and D's discrimination ability keep improving; the adversarial process reaches a dynamic equilibrium when D can no longer tell whether the target data is real or generated by G.
Function of the generator:
converting the input image from the source domain into the target domain to obtain the target image. Implementation: a U-net network is used.
Function of the discriminator:
an image is input, and the discriminator attempts to predict whether it is an original image or an output image of the generator. Implementation: a convolutional neural network can be used to extract features from the input image, and the source of the input image is then identified by adding a fully connected layer producing a 1 × 1 output, which decides whether the extracted features belong to a particular class.
The ultimate purpose of both the U-net and the CycleGAN modeling methods is the same: after a two-dimensional projection is input into the model, the EPID exit-beam projection is predicted, so the model can be used to monitor the state of the patient's exit beam during treatment and raise a timely alarm to suspend treatment when an abnormal situation occurs. The difference is that CycleGAN is more robust than U-net and is therefore better suited to the harder task of prediction without distinguishing the anatomical site.
Further, as shown in FIG. 5, a two-dimensional simulated EPID image, or a three-dimensional simulated megavoltage CT image reconstructed from it, is obtained through the Monte Carlo model and is used either to extend the training set or, as a validation set, to validate the model.
As a training set: based on the physical characteristics of the accelerator beam, the collimation system, and its MV detector panel (i.e. the EPID), a corresponding Monte Carlo model can be established, and the dose that the megavoltage therapeutic beam deposits on the EPID panel after passing through the human body can be simulated and calculated. The computational accuracy of a Monte Carlo simulation depends on the parameter settings of the model, in particular on how well the accelerator hardware parameters are fitted. In application, for the design of the accelerator in use, a treatment-head phase-space file is generated with the GATE software and used as the radiation source irradiating the EPID panel; after normalization, the simulation result can be compared with the EPID signal actually detected in the training data. By correcting for collimator backscatter on the accelerator and optimizing other hardware parameters, the Monte Carlo simulation can be brought into agreement with actual measurements. On this basis, the Monte Carlo method can further simulate the patient's in vivo exit dose and the EPID detection signal. In the model training phase, besides the registered EPID images, the EPID response simulated by the Monte Carlo model can also serve as training data.
The concrete implementation of the Monte Carlo model can be divided into three parts: (1) establish a preliminary accelerator geometry and material model in the Monte Carlo program according to the actual hardware parameters of the accelerator; (2) simulate, and correct for, the collimator system's backscatter of the exit beam at each photon energy: add a simulation phantom and its matching inserts to the Monte Carlo program, simulate the beam passing through the collimator, and compare and correct it against measurement results, after which this simulated source can also be used for MVCBCT or MVDR imaging; (3) generate the phase-space files, tune the Monte Carlo program, and finalize the Monte Carlo program used for the complete accelerator and the correctly positioned MVCBCT, MVDR, and EPID devices.
As a validation set: the voxel phantom, and the EGS4 phantom converted from a simulated phantom image, can be used in the Monte Carlo simulation program to calculate the "patient" in vivo dose. First a phase-space file is generated with the accelerator model described above; then the interaction of the rays with the phantom and the energy deposition inside it are simulated, and the dose deposited on the EPID panel by rays that have traversed the phantom can be used to simulate an EPID image.
When the accuracy of the model is subsequently verified, the exit-beam image predicted by the artificial intelligence model can be compared with the exit-beam image simulated by the Monte Carlo model, for example via the mean difference or the root mean square of the difference, to verify the accuracy of the model output.
As verification, a simulation phantom is inserted in the Monte Carlo program, beams of each energy are simulated, and the phase-space files within the CBCT, MVDR, and EPID geometric spaces are recorded for detailed simulation of MVCBCT and MVDR images and for calculation of the EPID treatment exit-beam distribution.
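A clinical Monte Carlo model of this kind is built in dedicated codes such as GATE, but the underlying idea, sampling photon histories against the attenuation of the traversed voxels, can be shown with a toy estimate of the primary transmission reaching one detector pixel; the geometry and attenuation values below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def primary_transmission(mu_column, voxel_cm, n_photons=100_000):
    """Toy Monte Carlo: fraction of photons traversing a voxel column
    without interacting, for linear attenuation mu (1/cm) per voxel.

    Each photon's sampled optical depth tau ~ Exp(1); it escapes the
    column if tau exceeds the column's total optical depth.
    """
    tau_total = float(np.sum(mu_column)) * voxel_cm
    tau_sampled = rng.exponential(1.0, size=n_photons)
    return float(np.mean(tau_sampled > tau_total))

mu = np.full(30, 0.05)                          # water-like column (assumed)
print(primary_transmission(mu, voxel_cm=1.0))   # analytic: exp(-1.5) ~ 0.223
```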
Further, establishing the two-dimensional projection database of the positioning CT at different respiratory phases and different projection angles includes: registering the two-dimensional projections of the positioning CT at different respiratory phases and different projection angles with the megavoltage cone-beam CT digitally reconstructed projections or the phase-resolved EPID projections, to obtain the registered EPID projections and the registered two-dimensional projections of the positioning CT.
Two-dimensional projections (MVDR projections) are two-dimensional projections generated from the 3D megavoltage exit-beam volumes of the different phases; for example, the CT projection image Ict at a given angle is generated by the digitally reconstructed projection technique.
Since the positioning CT usually has a larger scan coverage than the megavoltage cone-beam CT or the MVDR, the two are inconsistent in size, so rigid and non-rigid registration must each be performed to bring the positioning CT into spatial match with the two-dimensional projections of the megavoltage cone-beam CT or with the MVDR.
Therefore, before the two-dimensional projections or three-dimensional images are input into the synthesis network model, the projections of the different respiratory phases must be registered with the digitally reconstructed projections or the phase-resolved EPID projections, yielding the registered EPID projections and the registered two-dimensional projections.
According to the spatial transformation involved, image registration falls into two main categories: rigid and non-rigid. Rigid registration covers translation and rotation transformations; non-rigid registration covers affine, projective, and elastic transformations, among others.
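As one possible realization of the rigid step (the patent does not prescribe a toolkit; SimpleITK and the parameter values below are assumptions), a mutual-information-driven 2D rigid registration might look like:

```python
import SimpleITK as sitk

def rigid_register_2d(fixed, moving):
    """Rigid (rotation + translation) 2D registration driven by
    Mattes mutual information; returns the resampled moving image."""
    fixed = sitk.Cast(fixed, sitk.sitkFloat32)
    moving = sitk.Cast(moving, sitk.sitkFloat32)
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInterpolator(sitk.sitkLinear)
    init = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler2DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg.SetInitialTransform(init, inPlace=False)
    transform = reg.Execute(fixed, moving)
    return sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```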
Further, establishing the three-dimensional image database of the positioning CT at different respiratory phases includes: registering, by projection, the three-dimensional images of the positioning CT at different respiratory phases with the megavoltage cone-beam CT three-dimensional images or the phase-resolved EPID projections, to obtain a registered three-dimensional image sample database of the positioning CT.
Database establishment and data cleaning: the data-set construction methods fall into two types: 1. based on patient data from a single-center or multi-center clinical database, a general prediction model is obtained according to the scheme provided by this project and can be applied to the treatment of different individual patients; 2. for the information of the specific individual patient to be treated, the provided scheme is used to obtain a prediction model suited to individualized treatment.
For each patient in the database, the CT projection image Ict at a given angle is generated by the digitally reconstructed projection technique; implementable algorithms include, but are not limited to, the pseudo-simulation algorithm, the splatting algorithm, and the like.
Since this patent performs modality conversion via two-dimensional projections, each two-dimensional projection in the database corresponds to its own projection angle. The projection angles generated from the three-dimensional images acquired for one patient cover 0-360°.
The registered CBCT projection image sequences of the different respiratory phases at the same projection angle, Icbct1, Icbct2, …, Icbctn (n is the number of respiratory phase bins), together constitute one sample. The breathing curve provided by respiratory gating can be used as the basis for selecting the respiratory phase corresponding to each CBCT projection.
Each patient ultimately generates multiple two-dimensional image samples, or three-dimensional image samples, at different projection angles.
Further, the method also includes sample image preprocessing: normalizing the CT projection pixel values, and cropping, filtering, and resampling the sample images so that images of different modalities have consistent resolution.
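A minimal version of this preprocessing chain could be the following sketch, where the output size, the [0, 1] normalization, and the Gaussian smoothing kernel are assumed choices:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def preprocess(projection, out_shape=(256, 256), sigma=1.0):
    """Normalize, denoise, and resample one projection image."""
    img = projection.astype(np.float32)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)  # to [0, 1]
    img = gaussian_filter(img, sigma=sigma)                    # light filtering
    factors = (out_shape[0] / img.shape[0], out_shape[1] / img.shape[1])
    return zoom(img, factors, order=1)                         # resample
```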
Further, the sample data are divided into a training set, a test set, and a validation set: the training set is used to train the regression network or the generative network; the test set is used for the final evaluation of the synthesis network's performance; and the validation set is used to evaluate model performance and tune the hyperparameters so that the model's prediction on the training set reaches the intended effect.
The trained result is statistically analyzed on the test-set data; evaluation indices include the structural similarity index and the global and local mean squared errors, among others, testing the adaptability of the model in practical application and comprehensively evaluating its accuracy and reliability.
Unlike the training set and the test set, the validation set is not mandatory. If no hyperparameters need to be tuned, performance can be evaluated directly on the test set without a validation set. The performance measured on the validation set is not the final performance of the model; it mainly guides hyperparameter tuning, and the model's final performance is judged on the test set.
When the amount of data is not very large (tens of thousands of samples or fewer), the training, validation, and test sets may be divided 6:2:2; if the data are plentiful, the ratio can be adjusted to 98:1:1; all sample data of the same anatomical site can be divided by random sampling into training, validation, and test sets at a ratio of 8:1:1. These three division schemes are not limiting but merely exemplary, and data-set divisions with other ratios can also achieve the purpose of the invention. When the available data are scarce, methods such as hold-out or K-fold cross-validation can be used.
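The 8:1:1 random split described above could be implemented as in this sketch (the random seed and list-based representation are assumptions):

```python
import numpy as np

def split_samples(samples, ratios=(0.8, 0.1, 0.1), seed=42):
    """Randomly divide samples into training/validation/test sets."""
    assert abs(sum(ratios) - 1.0) < 1e-9
    idx = np.random.default_rng(seed).permutation(len(samples))
    n_train = int(ratios[0] * len(samples))
    n_val = int(ratios[1] * len(samples))
    train = [samples[i] for i in idx[:n_train]]
    val = [samples[i] for i in idx[n_train:n_train + n_val]]
    test = [samples[i] for i in idx[n_train + n_val:]]
    return train, val, test
```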
Example three: a method for transforming a positioning image, comprising:
and obtaining two-dimensional projections associated with the corresponding breathing time phases according to the positioning CT, and comparing the two-dimensional projections serving as reference bases with the breathing motion information acquired in the treatment process to obtain real-time reference bases matched with the breathing motion.
Further, the localized CT is a three-dimensional CT, or a four-dimensional CT.
Further, the treatment exit beam is compared to the real-time reference fiducial so that the status of the patient treatment exit beam can be monitored.
Furthermore, a two-dimensional projection diagram with a specified angle is generated according to different breathing phases of the four-dimensional positioning CT, the two-dimensional projection diagram is associated with the corresponding breathing phase and serves as an input of the synthetic network model, the two-dimensional projection diagram is output as a predicted EPID emergent beam projection, and the predicted EPID emergent beam projection serves as a reference standard.
Furthermore, corresponding three-dimensional prediction megavolt images are generated according to different breathing time phases of the four-dimensional positioning CT, and then two-dimensional real-time reference benchmarks with different angles and different breathing time phases are generated through projection.
Further, the synthetic network model comprises a U-net model, or a CycleGAN model, or a Transformer model based on an attention mechanism, or other generative convolutional neural networks.
Further, establishing a two-dimensional projection database or a three-dimensional image database for positioning the CT at different breathing time phases and different projection angles, inputting the two-dimensional projection or the three-dimensional image into a synthetic network model, outputting the two-dimensional projection or the three-dimensional image as a virtual megavolt image, and training the synthetic network model; and comparing the virtual megavolt image with the registered EPID projection or the simulated EPID image until the difference between the virtual megavolt image and the registered EPID projection or the simulated EPID image reaches a preset value, and obtaining a trained synthetic network model.
Further, a two-dimensional simulated EPID image or a three-dimensional simulated megavolt CT image reconstructed from the two-dimensional simulated EPID image is obtained by the monte carlo model as a training set or a verification set.
Further, the establishing a two-dimensional projection database for positioning the CT at different breathing phases and different projection angles includes: and positioning the two-dimensional projection of the CT at different respiratory phases and different projection angles, and registering the two-dimensional projection with the megavolt cone-beam CT digital reconstruction projection or the time-sharing phase EPID projection to obtain the registered EPID projection and the two-dimensional projection of the positioning CT after registration.
Further, the two-dimensional projections of the post-registration localization CT are CBCT two-dimensional projection image sequences of different respiratory phases at the same projection angle, which are Icbct1, Icbct2, …, Icbctn, n representing the number of divided respiratory phases, and each patient generates a plurality of two-dimensional image samples.
Further, the establishing of the three-dimensional image database for positioning CT at different respiratory phases includes: and positioning the CT in a three-dimensional image database at different respiratory phases, and performing projection registration with the megavoltage cone-beam CT three-dimensional image or the time-sharing EPID to obtain a three-dimensional image sample database of the positioned CT after registration. The database includes a training set, a test set, and/or a validation set.
Further, the sample data comprises a training set and a testing set, wherein the training set is used for training a regression network or a generation network; the test set is used to ultimately evaluate the effectiveness of the composite network.
Further, the sample data also comprises a verification set used for evaluating the effect of the model and adjusting the hyper-parameters so that the predicted effect of the model on the training set reaches the preset effect.
The method corresponds to the second embodiment one to one, and the detailed explanation is shown in the second embodiment.
The solution provided by this patent: exploiting the latent mutual mapping between images of different modalities, deep learning can be used to synthesize virtual megavoltage images from the patient's kilovoltage positioning CT images, generating a brand-new treatment exit-beam reference standard for dynamic in vivo exit-beam monitoring. The scheme provides a reference standard closer to the patient's reference anatomy, can be applied to dynamic tracking spanning multiple respiratory phase states, and retains advantages of the original technology such as efficiency, non-invasiveness, and flexibility. In addition, this patent provides a scheme for predicting the theoretical exit-beam distribution on the EPID with Monte Carlo simulation, so that in the model construction stage a more reliable prediction model for exit-beam monitoring can be obtained by targeting this theoretical reference value.
The applicable scenarios of the patented system are mostly thoraco-abdominal tumors, since for most thoraco-abdominal tumor patients the four-dimensional positioning CT is strongly affected by respiratory motion. The system can also be applied to the acquisition and monitoring of positioning CTs of other tumors strongly affected by respiratory motion.
It is to be noted that different embodiments may produce different advantages, and in different embodiments, any one or combination of the above advantages may be produced, or any other advantages may be obtained.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be considered as illustrative and not restrictive.
Additionally, the order in which the elements and sequences of the process are described, the use of letters or other designations herein is not intended to limit the order of the processes and methods of the invention unless otherwise indicated by the claims. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it should be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments of the invention.
Similarly, it should be noted that in the preceding description of embodiments of the invention, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding the understanding of one or more of the embodiments. This method of disclosure, however, is not intended to suggest that the claimed subject matter requires more features than are expressly recited in the claims.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of embodiments of the present invention. Other variations are possible within the scope of the invention. Thus, by way of example, and not limitation, alternative configurations of embodiments of the present invention can be viewed as being consistent with the teachings of the present invention. Accordingly, the embodiments of the invention are not limited to the embodiments explicitly described and depicted.

Claims (9)

1. A conversion system for positioning images, comprising:
an image conversion unit, configured to take a two-dimensional projection, derived from the positioning CT and associated with the corresponding respiratory phase, as a reference standard, compare it with the respiratory motion information acquired during treatment, and obtain a real-time reference standard matched to the respiratory motion;
wherein the image conversion unit generates two-dimensional projection maps at a specified angle from the different respiratory phases of the four-dimensional positioning CT, each map, associated with its corresponding respiratory phase, being used as input to a synthesis network model whose output is the predicted EPID exit-beam projection, which serves as the reference standard; or generates corresponding three-dimensional predicted megavoltage images from the different respiratory phases of the four-dimensional positioning CT and then generates, by projection, two-dimensional real-time reference standards for different respiratory phases at different angles;
the system further comprising a model training unit, which establishes a two-dimensional projection database or a three-dimensional image database of the positioning CT at different respiratory phases and different projection angles, inputs the two-dimensional projections or three-dimensional images into the synthesis network model, takes virtual megavoltage images as output, and trains the synthesis network model by comparing the virtual megavoltage image with the registered EPID projection or the simulated EPID image until the difference between them reaches a predetermined value, yielding a trained synthesis network model.
2. The conversion system for positioning images of claim 1, further comprising a comparison unit for comparing the treatment exit beam with the real-time reference standard, so that the state of the patient's treatment exit beam can be monitored.
3. The conversion system for positioning images of claim 1, wherein the synthesis network model comprises a U-net model, a CycleGAN model, an attention-based Transformer model, or another generative convolutional neural network.
4. The conversion system for positioning images of claim 1, wherein the two-dimensional simulated EPID image, or the three-dimensional simulated megavoltage CT image reconstructed from it, is obtained through a Monte Carlo model and used as a training set or a validation set.
5. The conversion system for positioning images of claim 1, wherein establishing the two-dimensional projection database of the positioning CT at different respiratory phases and different projection angles comprises: registering the two-dimensional projections of the positioning CT at different respiratory phases and different projection angles with the megavoltage cone-beam CT digitally reconstructed projections or the phase-resolved EPID projections, to obtain the registered EPID projections and the registered two-dimensional projections of the positioning CT.
6. The conversion system for positioning images of claim 5, wherein the two-dimensional projections of the registered positioning CT are CBCT two-dimensional projection image sequences of different respiratory phases at the same projection angle, denoted Icbct1, Icbct2, …, Icbctn, where n is the number of respiratory phase bins, and multiple two-dimensional image samples are generated for each patient.
7. The conversion system for positioning images of claim 1, wherein establishing the three-dimensional image database of the positioning CT at different respiratory phases comprises: registering, by projection, the three-dimensional images of the positioning CT at different respiratory phases with the megavoltage cone-beam CT three-dimensional images or the phase-resolved EPID projections, to obtain a registered three-dimensional image sample database of the positioning CT.
8. The conversion system for positioning images of any one of claims 5-7, wherein the sample data comprise a training set and a test set, the training set being used to train a regression network or a generative network, and the test set being used for the final evaluation of the synthesis network's performance.
9. The conversion system for positioning images of claim 8, wherein the sample data further comprise a validation set for evaluating model performance and tuning hyperparameters so that the model's prediction on the training set reaches a predetermined effect.
CN202180000471.0A 2021-02-10 2021-02-10 Conversion system of positioning image Active CN112997216B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/076607 WO2022170607A1 (en) 2021-02-10 2021-02-10 Positioning image conversion system

Publications (2)

Publication Number Publication Date
CN112997216A CN112997216A (en) 2021-06-18
CN112997216B true CN112997216B (en) 2022-05-20

Family

ID=76337090

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180000471.0A Active CN112997216B (en) 2021-02-10 2021-02-10 Conversion system of positioning image

Country Status (2)

Country Link
CN (1) CN112997216B (en)
WO (1) WO2022170607A1 (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1672651A (en) * 2004-02-17 2005-09-28 西门子共同研究公司 System and method for patient positioning for radiotherapy in the presence of respiratory motion
CN101267768A (en) * 2005-07-22 2008-09-17 断层放疗公司 System and method of detecting a breathing phase of a patient receiving radiation therapy
CN102763138A (en) * 2009-11-18 2012-10-31 皇家飞利浦电子股份有限公司 Motion correction in radiation therapy
CN104225809A (en) * 2014-10-15 2014-12-24 大连现代医疗设备科技有限公司 Implementation method and equipment for 4D radiotherapy plan with respiratory compensation
CN107610195A (en) * 2017-07-28 2018-01-19 上海联影医疗科技有限公司 The system and method for image conversion
CN110582328A (en) * 2019-07-22 2019-12-17 北京市肿瘤防治研究所 Radiotherapy emergent beam monitoring method and system
CN111448590A (en) * 2017-09-28 2020-07-24 皇家飞利浦有限公司 Scatter correction based on deep learning
CN112204620A (en) * 2018-04-26 2021-01-08 医科达有限公司 Image enhancement using generative countermeasure networks

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180018757A1 (en) * 2016-07-13 2018-01-18 Kenji Suzuki Transforming projection data in tomography by means of machine learning


Also Published As

Publication number Publication date
WO2022170607A1 (en) 2022-08-18
CN112997216A (en) 2021-06-18

Similar Documents

Publication Publication Date Title
CN111408072B (en) Device for evaluating radiation dose delivery and system for verifying radiation dose delivery
CN107072624A (en) system and method for automatic treatment plan
EP3468668B1 (en) Soft tissue tracking using physiologic volume rendering
US11748927B2 (en) Method and system for synthesizing real-time image by using optical surface motion signals
Geneser et al. Quantifying variability in radiation dose due to respiratory-induced tumor motion
Huang et al. Deep learning‐based synthetization of real‐time in‐treatment 4D images using surface motion and pretreatment images: A proof‐of‐concept study
Virgolin et al. On the feasibility of automatically selecting similar patients in highly individualized radiotherapy dose reconstruction for historic data of pediatric cancer survivors
Zhou et al. Feasibility study of deep learning‐based markerless real‐time lung tumor tracking with orthogonal X‐ray projection images
EP3984462A1 (en) Heatmap and atlas
Von Siebenthal Analysis and modelling of respiratory liver motion using 4DMRI
CN112997216B (en) Conversion system of positioning image
Ranjbar et al. Development and prospective in‐patient proof‐of‐concept validation of a surface photogrammetry+ CT‐based volumetric motion model for lung radiotherapy
Hayashi et al. Real‐time CT image generation based on voxel‐by‐voxel modeling of internal deformation by utilizing the displacement of fiducial markers
Dick et al. A fiducial-less tracking method for radiation therapy of liver tumors by diaphragm disparity analysis part 1: simulation study using machine learning through artificial neural network
Guidi et al. Real-time lung tumour motion modeling for adaptive radiation therapy using lego mindstorms
Samadi Miandoab et al. A simulation study on patient setup errors in external beam radiotherapy using an anthropomorphic 4D phantom
Müller et al. A phantom study to create synthetic CT from orthogonal twodimensional cine MRI and evaluate the effect of irregular breathing
Díez et al. Analysis and evaluation of periodic physiological organ motion in radiotherapy treatments
Ranjbar Simulating the breathing of lung cancer patients to estimate tumor motion and deformation at the time of radiation treatment
DUMLU Dosimetric impact of geometric distortions in an MRI-only proton therapy workflow for extracranial sites
Hargrave The development of a clinical decision making framework for image guided radiotherapy
Dick Fiducial-Less Real-Time Tracking of the Radiation Therapy of Liver Tumors Using Artificial Neural Networks
Siebenthal Analysis and modelling of respiratory liver motion using 4DMRI
Lee et al. A Comprehensive Analysis of Deformable Image Registration Methods for CT Imaging
Salari et al. Artificial Intelligence-based Motion Tracking in Cancer Radiotherapy: A Review

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant