CN118079256A - Automatic tracking method for tumor target area of magnetic resonance guided radiation therapy - Google Patents


Info

Publication number
CN118079256A
CN118079256A (application CN202410511929.4A)
Authority
CN
China
Prior art keywords
dvf
frame
information
reference frame
image
Prior art date
Legal status
Granted
Application number
CN202410511929.4A
Other languages
Chinese (zh)
Other versions
CN118079256B (en)
Inventor
王伊玲 (Wang Yiling)
范羽 (Fan Yu)
赵越 (Zhao Yue)
刘丹妮 (Liu Danni)
刘敏 (Liu Min)
Current Assignee
Sichuan Cancer Hospital
Original Assignee
Sichuan Cancer Hospital
Priority date
Filing date
Publication date
Application filed by Sichuan Cancer Hospital filed Critical Sichuan Cancer Hospital
Priority to CN202410511929.4A (granted as CN118079256B)
Publication of CN118079256A
Application granted
Publication of CN118079256B
Legal status: Active


Landscapes

  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

The application discloses an automatic tracking method for a tumor target area in magnetic resonance guided radiation therapy, belonging to the technical field of automatic tumor identification, comprising the following steps: Step 1: collect in real time the cine MRI images generated by a patient in a radiotherapy system, and preprocess them to obtain a standard image file; Step 2: extract a reference frame and a motion frame from the standard image file and input them in a two-channel format into an automatic extraction model for the deformation vector field (DVF), obtaining the change information of the motion frame relative to the reference frame; Step 3: perform contour capture on this change information to obtain the tumor contour in the motion frame, thereby realizing dynamic tracking of the tumor lesion. The technical scheme provided by the application can capture the regions of the motion frame that have changed relative to the reference frame and, through contour capture, realize dynamic tracking of the tumor lesion.

Description

Automatic tracking method for tumor target area of magnetic resonance guided radiation therapy
Technical Field
The application relates to the technical field of tumor automatic identification, in particular to an automatic tracking method for a tumor target area of magnetic resonance guided radiotherapy.
Background
Malignant tumors of the chest and abdomen, such as lung cancer and liver cancer, have high incidence and mortality rates and seriously threaten people's lives and health. However, because of organ motion, accurate radiotherapy of chest and abdominal tumors is challenging. Magnetic resonance guided radiotherapy (Magnetic Resonance Image-Guided Radiotherapy, MRgRT) offers no ionizing radiation, high-contrast soft-tissue imaging and integration with radiotherapy equipment, and is widely applied in the field of radiotherapy.
However, during actual radiotherapy, tumors such as lung and liver cancers are not stationary; they move continuously with the patient's respiration. In practice, the radiotherapy target region must therefore be tracked manually to maintain accuracy, but manual tracking lags too far behind the motion, making it difficult to adapt the radiotherapy plan accurately to the moving lesion.
Disclosure of Invention
The summary of the application is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. The summary of the application is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
As a first aspect of the present application, in order to solve the above technical problem of insufficient radiotherapy target accuracy, the present application provides an automatic tracking method for a tumor target area in magnetic resonance guided radiotherapy, comprising the following steps:
Step 1: collecting a cine MRI image generated by a patient in a radiotherapy system in real time, and preprocessing the cine MRI image to obtain a standard image file;
Step 2: extracting a reference frame and a motion frame from a standard image file, and manually sketching the outline of a target region of an MRI image of the reference frame;
Step 3: inputting the reference frame and the motion frame into a DVF automatic extraction model in a two-channel format to obtain the DVF information of the motion frame, wherein the DVF information is the deformation vector field describing the positional change of each pixel of the motion frame relative to the reference frame;
Step 4: applying the automatically extracted DVF information to the target contour of the reference frame through a convolution operation to obtain the target contour of the motion frame, realizing dynamic tracking of the tumor lesion.
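The four steps above can be sketched end to end as a minimal numpy skeleton. The function names `extract_dvf` and `warp_contour` are illustrative stand-ins, not the patent's implementation; the zero displacement field is a placeholder for the trained model's output:

```python
import numpy as np

def extract_dvf(reference: np.ndarray, motion: np.ndarray) -> np.ndarray:
    """Stand-in for the trained DVF auto-extraction model.

    Returns a (2, H, W) displacement field; the zero field below is a
    placeholder so the skeleton runs end to end."""
    return np.zeros((2,) + reference.shape, dtype=np.float32)

def warp_contour(contour_mask: np.ndarray, dvf: np.ndarray) -> np.ndarray:
    """Apply the DVF to the reference-frame target contour
    (nearest-neighbour sampling keeps the mask binary)."""
    h, w = contour_mask.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    src_y = np.clip(np.rint(ys + dvf[0]), 0, h - 1).astype(int)
    src_x = np.clip(np.rint(xs + dvf[1]), 0, w - 1).astype(int)
    return contour_mask[src_y, src_x]

# Steps 1-4 as a loop over incoming frames:
reference = np.random.rand(64, 64).astype(np.float32)   # first frame
ref_contour = np.zeros((64, 64), dtype=np.uint8)        # manual delineation
ref_contour[20:40, 25:45] = 1
for _ in range(3):                                      # motion frames
    motion = np.random.rand(64, 64).astype(np.float32)
    dvf = extract_dvf(reference, motion)                # step 3
    tracked = warp_contour(ref_contour, dvf)            # step 4
```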
In the technical scheme provided by the application, the cine MRI images generated during radiotherapy are divided into a reference frame and motion frames, which are input into a DVF automatic extraction model based on a Transformer architecture. The positional change information of the motion frame relative to the reference frame at each pixel is captured and then applied to the target contour of the reference frame to obtain the target contour of the motion frame, realizing dynamic tracking of the tumor lesion and increasing the precision of the radiotherapy target region.
The automatic extraction model is trained by unsupervised deep learning, so manually delineated target contours are not required as a training label set; compared with conventional supervised deep-learning schemes this gives higher training efficiency, and avoiding the introduction of manual delineation errors makes the model more robust.
When the DVF automatic extraction model extracts corresponding feature points or regions of change, the images input to the model must have a standard format with consistent definition and resolution. The application therefore provides the following technical scheme:
Further, step 1 includes the following steps:
Step 11: collecting the cine MRI image in real time, and converting the cine MRI image into mha format;
Step 12: and adjusting the resolution of the cine MRI image to a preset resolution by linear interpolation to obtain a standard image file.
In the technical scheme provided by the application, the cine MRI image is first converted into mha format rather than first undergoing resolution adjustment, which avoids secondary loss of image information during transcoding: if the resolution were adjusted first and the result then transcoded into mha format, the information loss and noise amplification easily caused by interpolation during resolution adjustment would be further amplified during transcoding, affecting later processing.
When the collected cine MRI images are processed, their field of view is large, the detail features are numerous and the range of their distribution is very wide, so it is difficult to accurately capture the detail features of the target region; strengthening detail capture, on the other hand, increases the model's computation and its computational load. For this problem the application provides the following technical scheme:
Further, the standard image file is obtained in step 12 as follows:
Filling each minimum pixel grid point by using a linear interpolation method:

p(r) = p(r0) + ((r − r0)/(r1 − r0)) · (p(r1) − p(r0))

where p(r) is the pixel value of the image to be interpolated at spatial grid point r, and p(r0), p(r1) are the pixel values of the known image at spatial grid points r0, r1;

taking the center of the interpolated image as a reference point, a pixel units are retained along each of the +x, −x, +y, −y directions, cropping the image to 2a × 2a pixel points; a is a preset cropping value.
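A minimal numpy sketch of the separable linear-interpolation fill described above; the `linear_fill` helper and the integer upsampling `factor` are assumptions for illustration:

```python
import numpy as np

def linear_fill(img: np.ndarray, factor: int) -> np.ndarray:
    """Fill intermediate grid points by separable linear interpolation:
    p(r) = p(r0) + (r - r0)/(r1 - r0) * (p(r1) - p(r0))."""
    h, w = img.shape
    # target grid coordinates expressed in source-pixel units
    new_h, new_w = (h - 1) * factor + 1, (w - 1) * factor + 1
    rows = np.linspace(0, h - 1, new_h)
    cols = np.linspace(0, w - 1, new_w)
    # interpolate along rows, then along columns
    tmp = np.empty((new_h, w))
    for j in range(w):
        tmp[:, j] = np.interp(rows, np.arange(h), img[:, j])
    out = np.empty((new_h, new_w))
    for i in range(new_h):
        out[i, :] = np.interp(cols, np.arange(w), tmp[i, :])
    return out

img = np.array([[0.0, 2.0], [4.0, 6.0]])
up = linear_fill(img, 2)   # 3x3 grid; the centre value is the mean, 3.0
```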
In the scheme provided by the application, the interpolation operation increases the detail features of the whole picture, and the subsequent cropping of the picture edges makes the scanning range smaller while more target-region-related features are retained, so model precision can be increased while the computational load is reduced.
Further, step 2 includes the following steps:
Step 21: taking a first frame in a standard image file updated in real time of a patient as a reference frame and taking a subsequent picture as a motion frame;
step 22: the outline of the target region of the reference frame cine MRI image is manually delineated.
Further, the contour is captured by applying the DVF information obtained in step 3 to the reference frame through the convolution operation of step 4.
When extracting the change information of each pixel of the motion frame relative to the reference frame, the model needs good perception at both long and short range, so that the pixel correspondence between the motion frame and the reference frame is not lost when the motion frame is displaced considerably relative to the reference frame. The application therefore provides the following technical scheme:
Further, the DVF automatic extraction model comprises a 4-layer encoder and a 4-layer decoder; the encoder and decoder layers correspond one-to-one and are connected through Transformer skip connections;
the motion frame and the reference frame are input into the first encoder layer, and each decoder layer performs a double-layer convolution operation;
the last decoder layer outputs the DVF information of the motion frame.
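As a rough illustration of what one decoder layer's double-layer convolution computes, and why it sharpens local perception, the following numpy sketch stacks two 3×3 convolutions. The all-ones kernels and the ReLU activation are assumptions for the sketch; the Transformer skip connections are not modelled here:

```python
import numpy as np

def conv3x3(x: np.ndarray, k: np.ndarray) -> np.ndarray:
    """'Same' 3x3 convolution with zero padding (single channel)."""
    xp = np.pad(x, 1)
    h, w = x.shape
    out = np.zeros_like(x, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * xp[dy:dy + h, dx:dx + w]
    return out

def double_conv(x: np.ndarray, k1: np.ndarray, k2: np.ndarray) -> np.ndarray:
    """Two stacked 3x3 convolutions with ReLU, as run by each decoder
    layer; the receptive field grows from 3x3 to 5x5, strengthening
    local feature perception."""
    return np.maximum(conv3x3(np.maximum(conv3x3(x, k1), 0), k2), 0)

x = np.zeros((7, 7)); x[3, 3] = 1.0          # unit impulse
y = double_conv(x, np.ones((3, 3)), np.ones((3, 3)))
# the nonzero response spreads over a 5x5 neighbourhood around (3, 3)
```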
In the scheme provided by the application, each decoder layer is connected to its encoder layer through a Transformer skip connection, which effectively captures long-range spatial correlation features, while the decoder adopts a double-layer convolution operation, which strengthens perception of local features. The model therefore performs well at both long-range and local perception, accurately resolving the change information of each motion-frame pixel relative to the reference frame and the reference frame's target contour.
The DVF automatic extraction model of the application is an unsupervised Transformer deep learning network model that must be trained. In a conventional scheme trained with manual labelling, too much manually labelled data is required (a target contour would have to be manually delineated on every frame of every cine MRI image in the training set), so training efficiency is low, and manual delineation bias is easily introduced, affecting the model's tracking precision for the target region. The application therefore provides the following technical scheme:
Further, the DVF automatic extraction model is trained in the following manner:
S1: acquiring original cine MRI images of at least 80 patients generated in a radiotherapy system, and converting the cine MRI images into standard image files;
S2: an unsupervised learning model loss function is constructed:

L_total = L_img + λ · L_DVF

where L_img is the image loss function, L_DVF is the DVF loss function, L_total is the total loss function, and λ is a hyperparameter to be optimized;
L_img represents the similarity between the reference image and the motion image after deconvolution with the DVF information (the DVF information of step 4 is inverted to obtain DVF⁻¹, which is applied to the motion-frame image by convolution);
L_img = L_MSE + L_NCC

L_MSE = (1/B) Σ_i (p_i − p̂_i)²,  L_NCC = −cov(p, p̂) / (σ(p) · σ(p̂))

where L_MSE is the mean-square-error part of the loss function and L_NCC is the local normalized cross-correlation part (taken with a negative sign so that higher correlation lowers the loss); B is the number of pixels per frame in the standard image file; p_i is the pixel value of the i-th pixel of the motion frame after deconvolution with the automatically extracted DVF information, and p̂_i is the pixel value of the i-th pixel of the reference frame; cov is the covariance operation and σ the standard-deviation operation;
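A hedged numpy sketch of L_img as defined above; for simplicity it computes a single global NCC rather than the patent's local windowed NCC, and the small epsilon guarding the denominator is an added assumption:

```python
import numpy as np

def image_loss(warped: np.ndarray, reference: np.ndarray) -> float:
    """L_img = L_MSE + L_NCC (global NCC variant; the negative sign
    makes better alignment lower the loss)."""
    p = warped.ravel().astype(float)
    q = reference.ravel().astype(float)
    l_mse = np.mean((p - q) ** 2)
    # normalized cross-correlation: cov(p, q) / (sigma_p * sigma_q)
    ncc = np.cov(p, q, bias=True)[0, 1] / (p.std() * q.std() + 1e-8)
    return l_mse - ncc

a = np.arange(16.0).reshape(4, 4)
perfect = image_loss(a, a)   # zero MSE, maximal correlation -> about -1
```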
L_DVF = L1 + L2 + L3, where L1 is the first-order spatial derivative loss of the DVF information, L2 the second-order spatial derivative loss, and L3 the loss for third-order and higher spatial derivatives:

L1 = Σ_{x,y} ( |∂φ/∂x|² + |∂φ/∂y|² )
L2 = Σ_{x,y} ( |∂²φ/∂x²|² + 2|∂²φ/∂x∂y|² + |∂²φ/∂y²|² )
L3 = Σ_{k≥3} μ_k Σ_{x,y} Σ_{j=0}^{k} |∂^k φ / ∂x^j ∂y^{k−j}|²

where x, y are the coordinate directions of the cine MRI image, φ(x, y) is the deformation vector field (DVF) information predicted by the model at spatial coordinate point (x, y), μ_k (k ≥ 3) is the weight of each higher-order spatial derivative loss, and ∂ denotes partial differentiation with respect to the x and y coordinates; L_DVF represents the degree of spatially continuous variation of the DVF information output by the DVF automatic extraction model;
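The derivative losses can be approximated with finite differences; the sketch below is an illustration under stated assumptions (`np.diff` as the spatial derivative, a default weight μ₃ = 0.1) rather than the patent's exact implementation:

```python
import numpy as np

def dvf_smoothness(dvf, mu=None):
    """L_DVF = L1 + L2 + L3: sums of squared k-th order finite
    differences of a displacement field of shape (2, H, W); `mu` maps
    each order k >= 3 to its weight mu_k (assumed default: {3: 0.1})."""
    mu = mu or {3: 0.1}

    def deriv_sq(order):
        total = 0.0
        for axis in (1, 2):  # the y and x image axes
            total += float(np.sum(np.diff(dvf, n=order, axis=axis) ** 2))
        return total

    # L1 + L2 + weighted higher-order terms (L3)
    return deriv_sq(1) + deriv_sq(2) + sum(w * deriv_sq(k)
                                           for k, w in mu.items())

flat = dvf_smoothness(np.zeros((2, 8, 8)))   # constant field -> 0 loss
```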
S3: training an unsupervised learning model in a test set to determine a super parameter lambda, Lambda is used to balance the weights of L img and L DVF,/>For balancing the higher spatial derivative weights inside the L DVF.
According to the technical scheme provided by the application, the model does not need to be updated and trained with manually labelled data, which effectively reduces the cost of model training and allows as many data sets as possible to be used for unsupervised training, improving model accuracy. Avoiding the introduction of manual delineation errors also makes the model more robust. Because Transformer skip connections are adopted, the scheme compensates for the inability of conventional DVF automatic extraction models to perceive long-range spatial correlation features, further improving DVF extraction precision. The loss function is also further optimized: conventional DVF automatic extraction models usually consider only low-order DVF spatial derivative losses, i.e. the model output typically has only first- or second-order spatial continuity. In vivo DVF changes caused by respiratory motion, however, are continuous and spatially smooth to high order. This patent therefore introduces a high-order DVF spatial derivative loss function, further improving the continuity and smoothness of the model's output DVF so that the result better matches the real physical change.
During training the model is iterated, optimized and validated, so the DVF model finally used is one that has passed validation. In practice, however, a validated model does not necessarily match the actual situation: in the continuous iteration the model is optimized against the validation rule, so the validated model may still deviate from reality. The application therefore provides the following evaluation method.
Further, the trained automatic DVF extraction model is evaluated in the following manner:
re-collecting the cine MRI images generated by a plurality of patients in a radiotherapy system, and converting the cine MRI images into standard image files to be used as independent test sets; manually delineating the outline of the target area of each frame of the cine MRI image of the independent test set;
defining the target contour of the reference frame as fixed GTV, and defining the target contour of the motion frame as moving GTV;
inputting the independent test set into a trained DVF automatic extraction model to obtain the change information of each pixel point of the motion frame of each patient in the independent test set relative to the reference frame, namely the DVF information of each pixel point;
The DVF information is applied to the motion-frame target contour (moving GTV) through a deconvolution operation to obtain the output GTV, an intermediate variable for model evaluation;
the output GTV is compared with the fixed GTV in terms of the Dice similarity coefficient (DSC) and the Hausdorff distance (HD) to evaluate the accuracy of the DVF automatic extraction model.
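A small numpy sketch of the two evaluation metrics; the brute-force Hausdorff computation is only suitable for small masks, and the example masks are invented:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance (HD) between the mask point sets,
    in pixel units (brute force: fine for small masks)."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# invented output GTV vs fixed GTV, offset by two rows
out_gtv = np.zeros((32, 32), dtype=bool); out_gtv[8:20, 8:20] = True
fix_gtv = np.zeros((32, 32), dtype=bool); fix_gtv[10:22, 8:20] = True
```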
According to the technical scheme provided by the application, the validated DVF automatic extraction model is further evaluated using the Dice similarity coefficient and the Hausdorff distance, which accurately measure the precision of the DVF automatic extraction model.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application, are incorporated in and constitute a part of this specification. The drawings and their description are illustrative of the application and are not to be construed as unduly limiting the application.
In addition, the same or similar reference numerals denote the same or similar elements throughout the drawings. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
In the drawings:
Fig. 1 is a flow chart of a method of automatic tracking of a magnetic resonance guided radiation therapy tumor target volume.
Detailed Description
Embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the application have been illustrated in the accompanying drawings, it is to be understood that the application may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete. It should be understood that the drawings and embodiments of the application are for illustration purposes only and are not intended to limit the scope of the present application.
It should be noted that, for convenience of description, only the portions related to the present application are shown in the drawings. Embodiments of the application and features of the embodiments may be combined with each other without conflict.
The application will be described in detail below with reference to the drawings in connection with embodiments.
Referring to fig. 1, example 1: an automatic tracking method for a tumor target area of magnetic resonance guided radiation therapy comprises the following steps:
step 1: and collecting the cine MRI image of the patient generated in the radiotherapy system in real time, and preprocessing the cine MRI image to obtain a standard image file.
Step 1 comprises the following steps:
step 11: and collecting the cine MRI images in real time, and converting the cine MRI images into mha format.
The patient undergoes cine MRI scanning in real time in the radiotherapy system, so cine MRI images are generated in real time; these images are stored in binary form and therefore need to be converted into mha format in this scheme.
Step 12: and adjusting the resolution of the cine MRI image to a preset resolution by linear interpolation to obtain a standard image file.
Linear interpolation is performed on the cine MRI image, which is then cropped to obtain a standard image file of preset resolution. In this embodiment the resolution is therefore adjusted not by scaling the image, but by interpolation followed by cropping.
Specifically, each minimum pixel grid point is filled by linear interpolation; the calculation formula of the linear interpolation is:

p(r) = p(r0) + ((r − r0)/(r1 − r0)) · (p(r1) − p(r0))

where p(r) is the pixel value of the image to be interpolated at spatial grid point r, and p(r0), p(r1) are the pixel values of the known image at spatial grid points r0, r1. The interpolation operation helps capture the detail features of the image without excessively increasing the computational burden of training the deep learning model.
Taking the center of the interpolated image as a reference point (for example, if the interpolated image is 501 × 501 pixels, the center pixel coordinate is (251, 251)), 128 pixel units are retained along each of the +x, −x, +y and −y directions, cropping the image to 256 × 256 pixels. This cropping operation fully accounts for the distribution range of actual tumors in cine MRI scans and further improves the training efficiency of the deep learning model.
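The center-crop step with the embodiment's numbers (a 501 × 501 interpolated image, a = 128) can be sketched as:

```python
import numpy as np

def center_crop(img: np.ndarray, a: int) -> np.ndarray:
    """Keep a pixels on each side of the centre along +x/-x/+y/-y,
    i.e. crop to a 2a x 2a window around the image centre."""
    cy, cx = img.shape[0] // 2, img.shape[1] // 2
    return img[cy - a:cy + a, cx - a:cx + a]

img = np.random.rand(501, 501)
cropped = center_crop(img, 128)   # 256 x 256, as in the embodiment
```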
Step 2: and extracting the reference frame and the motion frame from the standard image file, and manually sketching the outline of the target region of the MRI image of the reference frame.
Specifically, step 2 includes the following steps:
Step 21: taking the first frame in the real-time updated standard image file of the patient as a reference frame and taking the subsequent pictures as motion frames.
When a patient is subjected to radiotherapy, a large number of cine MRI images are generated, and therefore, a first frame image in the cine MRI images generated along with time is used as a reference frame, and subsequently generated images are used as motion frames.
Step 22: the outline of the target region of the reference frame cine MRI image is manually delineated.
After the reference frame is obtained, it contains only the image information of the patient's tumor region, without a specific target region; the target contour therefore needs to be manually delineated.
Step 3: the reference frame and the motion frame are input into the DVF automatic extraction model in a two-channel format to obtain the DVF information of the motion frame, i.e. the positional change information of each pixel of the motion frame relative to the reference frame.
Specifically, the DVF automatic extraction model comprises a 4-layer encoder and a 4-layer decoder; the encoder and decoder layers correspond one-to-one and are connected through Transformer skip connections. The motion frame and the reference frame are input into the first encoder layer, each decoder layer performs a double-layer convolution operation, and the last decoder layer outputs the DVF information of the motion frame.
The DVF automatic extraction model is an unsupervised deep learning network. It is trained as follows:
S1: raw cine MRI images of at least 80 patients generated in a radiotherapy system are acquired and converted to standard image files.
The cine MRI images are stored in binary format and therefore need to be converted into standard image files, i.e. mha format, followed by linear interpolation and cropping.
S2: an unsupervised learning model loss function is constructed:

L_total = L_img + λ · L_DVF

where L_img is the image loss function, L_DVF is the DVF loss function, L_total is the total loss function, and λ is a hyperparameter to be optimized;
L_img represents the similarity between the reference image and the moving image after deconvolution with the DVF information (the DVF information is inverted to obtain DVF⁻¹, and a convolution operation is performed with DVF⁻¹);
L_img = L_MSE + L_NCC

L_MSE = (1/B) Σ_i (p_i − p̂_i)²,  L_NCC = −cov(p, p̂) / (σ(p) · σ(p̂))

where L_MSE is the mean-square-error part of the loss function and L_NCC is the local normalized cross-correlation part (taken with a negative sign so that higher correlation lowers the loss); B is the number of pixels per frame in the standard image file; p_i is the pixel value of the i-th pixel of the motion frame after deconvolution with the automatically extracted DVF information, and p̂_i is the pixel value of the i-th pixel of the reference frame; cov is the covariance operation and σ the standard-deviation operation.
L_DVF = L1 + L2 + L3, where L1 is the first-order spatial derivative loss of the DVF information (diffusion regularization term), L2 the second-order spatial derivative loss (bending-energy regularization term), and L3 the loss for third-order and higher spatial derivatives (higher-order bending-energy regularization term):

L1 = Σ_{x,y} ( |∂φ/∂x|² + |∂φ/∂y|² )
L2 = Σ_{x,y} ( |∂²φ/∂x²|² + 2|∂²φ/∂x∂y|² + |∂²φ/∂y²|² )
L3 = Σ_{k≥3} μ_k Σ_{x,y} Σ_{j=0}^{k} |∂^k φ / ∂x^j ∂y^{k−j}|²

where x, y are the coordinate directions of the cine MRI image, φ(x, y) is the deformation vector field (DVF) information predicted by the model at spatial coordinate point (x, y), μ_k (k ≥ 3) is the weight of each higher-order spatial derivative loss, and ∂ denotes partial differentiation with respect to the x and y coordinates. L_DVF represents the degree of spatially continuous variation of the DVF information output by the DVF automatic extraction model; the lower its value, the better the DVF information conforms to the physically real pattern of change (since real motion inside the human body is continuous).
S3: the unsupervised learning model is trained on the test set to determine the optimal values of the hyperparameters λ and μ_k.

A higher weight on L_img drives the unsupervised learning model toward network parameters that yield higher image similarity; a higher weight on L_DVF drives it toward more spatially continuous DVF information. In this patent λ is tried at 1, 0.5, 0.1, 0.05 and 0.01, and for k = 3, 4 the weights μ_k are tried at 0.05, 0.1, 0.5 and 1; the optimal λ and μ_k values are selected from the output of the model evaluation module.
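The hyperparameter grid search over λ and μ_k can be sketched as follows; the `evaluate` function is a dummy stand-in for the model evaluation module's score (in the patent, training a model with each weight pair and scoring it on the independent test set):

```python
from itertools import product

lambdas = [1, 0.5, 0.1, 0.05, 0.01]
mu_values = [0.05, 0.1, 0.5, 1]

def evaluate(lam, mu_k):
    """Placeholder for the model evaluation module: train with these
    weights, then score (e.g. mean DSC) on the independent test set.
    The dummy score below just makes the sketch runnable."""
    return -abs(lam - 0.1) - abs(mu_k - 0.1)

# pick the (lambda, mu_k) pair with the best evaluation score
best = max(product(lambdas, mu_values), key=lambda c: evaluate(*c))
```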
In S1-S3, an unsupervised training mode of the DVF automatic extraction model and a specific design mode of a loss function are constructed, so that under the condition that the loss function is known, a sufficient amount of sample data is provided, and training of the DVF automatic extraction model can be completed. After the training of the DVF automatic extraction model is completed, the DVF automatic extraction model also needs to be evaluated. The evaluation was performed as follows:
SO1: the method comprises the steps of collecting the cine MRI images generated by a plurality of patients in a radiotherapy system again, converting the cine MRI images into standard image files, and then using the standard image files as an independent test set, and manually sketching the outline of a target area of each frame of cine MRI image of the independent test set.
It should be noted that because the DVF automatic extraction model that has been trained is evaluated here, the cine MRI images collected cannot be the cine MRI images that were previously used for training.
SO2: defining the target contour of the reference frame as fixed GTV, and defining the target contour of the motion frame as moving GTV;
inputting the independent test set into a trained DVF automatic extraction model to obtain a motion frame DVF of each patient in the independent test set;
A deconvolution operation is performed (the DVF information is inverted to obtain DVF⁻¹, and the convolution operation is carried out with DVF⁻¹), acting on the motion-frame target contour (moving GTV) to obtain the output GTV, an intermediate variable of model evaluation;
the output GTV is compared with the fixed GTV in terms of the Dice similarity coefficient (DSC) and the Hausdorff distance (HD) to evaluate the accuracy of the DVF automatic extraction model.
The output GTV may be understood as an intermediate variable for model evaluation: it represents the state of the moving-image GTV after the DVF has acted on it, i.e. mapped back to the reference image. Comparing the output GTV with the fixed GTV reflects how accurately the extracted DVF describes the spatial position change from the motion image to the reference image.
In particular, the model performance evaluation module of this patent involves a deconvolution operation: the DVF matrix is inverted and then applied to the target contour of the moving image to obtain the output GTV, i.e. DVF⁻¹ ∗ moving GTV = output GTV, which is then compared with the fixed GTV via DSC and HD to obtain the model performance assessment.
Step 4: the automatically extracted DVF information is applied to the target contour of the reference frame through a convolution operation to obtain the target contour of the motion frame, realizing dynamic tracking of the tumor lesion. Specifically, the convolution is performed as follows: in step 4, the inverse field DVF⁻¹, obtained by inverting the DVF information, is applied to the moving-frame image by convolution.
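As a sketch of step 4's warping and of DVF inversion: negating a small displacement field is a first-order approximation assumed here (exact inversion needs fixed-point iteration for large displacements), and nearest-neighbour resampling keeps the mask binary:

```python
import numpy as np

def warp(mask: np.ndarray, dvf: np.ndarray) -> np.ndarray:
    """Nearest-neighbour resampling of a binary mask through a DVF
    given as (dy, dx) displacement maps of shape (2, H, W)."""
    h, w = mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sy = np.clip(np.rint(ys + dvf[0]).astype(int), 0, h - 1)
    sx = np.clip(np.rint(xs + dvf[1]).astype(int), 0, w - 1)
    return mask[sy, sx]

# a constant 3-pixel shift and its first-order inverse (negation)
dvf = np.zeros((2, 32, 32)); dvf[0] += 3.0
mask = np.zeros((32, 32), dtype=np.uint8); mask[10:16, 10:16] = 1
round_trip = warp(warp(mask, dvf), -dvf)   # warp forward, then back
```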
The above description covers only preferred embodiments of the present application and the principles of the technology employed. Those skilled in the art will appreciate that the scope of the application is not limited to the specific combination of the above technical features, and also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the application, for example solutions in which the above features are replaced by technical features with similar functions disclosed in the embodiments of the present application.

Claims (8)

1. An automatic tracking method for a tumor target area of magnetic resonance guided radiation therapy is characterized by comprising the following steps:
Step 1: collecting a cine MRI image generated by a patient in a radiotherapy system in real time, and preprocessing the cine MRI image to obtain a standard image file;
Step 2: extracting a reference frame and a motion frame from a standard image file, and manually sketching the outline of a target region of an MRI image of the reference frame;
Step 3: inputting the reference frame and the motion frame into a DVF automatic extraction model in a two-channel format to obtain the DVF information of the motion frame, wherein the DVF information is the positional change information of each pixel of the motion frame relative to the reference frame;
Step 4: applying the automatically extracted DVF information to the target-region contour of the reference frame through a convolution operation to obtain the target-region contour of the motion frame, thereby realizing dynamic tracking of the tumor focus.
2. The method for automatically tracking a tumor target volume for magnetic resonance guided radiation therapy according to claim 1, wherein: step 1 comprises the following steps:
step 11: collecting the cine MRI image in real time, and converting the cine MRI image into mha format;
Step 12: and adjusting the resolution of the cine MRI image to a preset resolution by linear interpolation to obtain a standard image file.
3. The method for automatically tracking a tumor target volume for magnetic resonance guided radiation therapy according to claim 2, wherein: the standard image file is acquired in step 12 as follows:
Filling each minimum pixel grid point by using a linear interpolation method:

p(r) = p(r_0) + (r − r_0)/(r_1 − r_0) · (p(r_1) − p(r_0)),

where p(r) is the pixel value of the image to be interpolated at spatial grid point r, and p(r_0), p(r_1) are the pixel values of the known image at spatial grid points r_0, r_1;

taking the center of the interpolated image as the reference point, a pixel units are reserved along each of the +x, −x, +y and −y directions, so that the image is cropped to a × a pixel points; a is a preset cropping value.
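The preprocessing of claim 3 can be sketched as follows (a minimal illustration, not the patent's implementation: `target_res` and `crop_a` stand in for the preset resolution and the cropping value a, and per-axis `np.interp` resampling is one simple realization of the linear interpolation):

```python
import numpy as np

def standardize_frame(img, target_res, crop_a):
    """Resample a cine-MRI frame to a preset resolution by linear
    interpolation, then center-crop it to crop_a x crop_a pixels."""
    h, w = img.shape
    # Linear interpolation onto the target grid, one axis at a time.
    ys = np.linspace(0, h - 1, target_res)
    xs = np.linspace(0, w - 1, target_res)
    rows = np.array([np.interp(ys, np.arange(h), img[:, j]) for j in range(w)]).T
    resized = np.array([np.interp(xs, np.arange(w), rows[i, :])
                        for i in range(target_res)])
    # Center crop: keep crop_a x crop_a pixels around the image center.
    c = target_res // 2
    half = crop_a // 2
    return resized[c - half:c + half, c - half:c + half]
```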
4. The method for automatically tracking a tumor target volume for magnetic resonance guided radiation therapy according to claim 1, wherein: step 2 comprises the following steps:
Step 21: taking a first frame in a standard image file updated in real time of a patient as a reference frame and taking a subsequent picture as a motion frame;
step 22: the outline of the target region of the reference frame cine MRI image is manually delineated.
5. The method for automatically tracking a tumor target volume for magnetic resonance guided radiation therapy according to claim 1, wherein: the DVF information convolution obtained in the step 4 acts on the outline of the target region of the reference frame outlined in the step 2.
6. The method for automatically tracking a tumor target volume for magnetic resonance guided radiation therapy according to claim 1, wherein: the DVF automatic extraction model comprises a 4-layer encoder and a 4-layer decoder, wherein the 4-layer encoder and the 4-layer decoder correspond to each other and are connected through a transducer layer jump;
The motion frame and the reference frame are input into a first layer of decoder, and each layer of decoder executes double-layer convolution operation;
The last layer decoder outputs DVF information of the motion frame with respect to the reference frame.
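As a rough illustration of the data flow in claim 6 (the frame size, channel counts, and halving/doubling schedule are assumptions, not taken from the patent), the two input frames are stacked as a two-channel image and the spatial resolution is halved at each of the 4 encoder levels, then doubled back by the mirrored decoder:

```python
import numpy as np

# Two-channel input: reference frame and motion frame stacked along the
# channel axis, as fed to the DVF extraction network (256x256 is assumed).
ref = np.zeros((256, 256), dtype=np.float32)   # reference frame
mov = np.zeros((256, 256), dtype=np.float32)   # motion frame
x = np.stack([ref, mov])                       # shape (2, 256, 256)

# Each encoder level halves the spatial size; each decoder level doubles it,
# with skip connections linking encoder level k to decoder level 4-k.
encoder_sizes = [256 // 2 ** k for k in range(1, 5)]               # [128, 64, 32, 16]
decoder_sizes = [encoder_sizes[-1] * 2 ** k for k in range(1, 5)]  # [32, 64, 128, 256]

# The last decoder layer emits the DVF: a (dy, dx) displacement per pixel.
dvf = np.zeros((2, 256, 256), dtype=np.float32)
```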
7. The method of automated tracking of a magnetic resonance guided radiation therapy tumor target volume of claim 6, wherein: the DVF automatic extraction model is trained in the following way:
S1: acquiring original cine MRI images of at least 80 patients generated in a radiotherapy system, and converting the cine MRI images into standard image files;
s2: an unsupervised learning model loss function is constructed,
L total=Limg + λ LDVF, wherein L img is an image loss function, L DVF is a DVF loss function, L total is a total loss function, and lambda is a super parameter to be optimized;
L_img measures the similarity between the reference image and the moving image warped by the inverted DVF information: the DVF information is first inverted to obtain DVF⁻¹, and DVF⁻¹ is applied by convolution to the moving image;

L_img = L_MSE + L_NCC

L_MSE = (1/B) Σ_{i=1..B} (p_i − p̂_i)²

L_NCC = − cov(p, p̂) / (σ(p) · σ(p̂))

where L_MSE is the mean-square-error part of the loss and L_NCC is the local normalized cross-correlation part; B is the number of pixel points per frame in the standard image file; p_i is the pixel value of the i-th pixel of the motion frame after warping by the automatically extracted DVF information, and p̂_i is the pixel value of the i-th pixel of the reference frame; cov is the covariance operation and σ the standard-deviation operation;
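The image term L_img = L_MSE + L_NCC can be sketched as follows (a minimal illustration, not the patent's implementation: the NCC part is computed globally rather than over local windows, and negating NCC so that better alignment lowers the loss is an assumption):

```python
import numpy as np

def image_loss(warped, ref):
    """Unsupervised image term L_img = L_MSE + L_NCC.

    warped : motion frame after warping by the extracted DVF.
    ref    : reference frame.
    """
    p, q = warped.ravel(), ref.ravel()
    # Mean-square-error part, averaged over the B pixels of the frame.
    l_mse = np.mean((p - q) ** 2)
    # Negated normalized cross-correlation: -1 for perfectly aligned images.
    l_ncc = -np.cov(p, q, bias=True)[0, 1] / (np.std(p) * np.std(q) + 1e-8)
    return l_mse + l_ncc
```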
L_DVF = L_1 + L_2 + L_3, where L_1 is the loss on the first-order spatial derivatives of the DVF information, L_2 is the loss on the second-order spatial derivatives, and L_3 is the loss on the third- and higher-order spatial derivatives:

L_1 = Σ_{x,y} ( |∂φ(x,y)/∂x|² + |∂φ(x,y)/∂y|² )

L_2 = Σ_{x,y} ( |∂²φ(x,y)/∂x²|² + |∂²φ(x,y)/∂y²|² )

L_3 = Σ_{k≥3} λ_k Σ_{x,y} ( |∂^k φ(x,y)/∂x^k|² + |∂^k φ(x,y)/∂y^k|² )

where x, y are the coordinate directions of the cine MRI image; φ(x,y) is the deformation vector field (DVF) information predicted by the model at spatial coordinate point (x, y); λ_k is the weight of each higher-order spatial-derivative loss term, with k ≥ 3 denoting the order of the spatial derivative; L_DVF characterizes the degree of spatial continuity of the DVF information output by the DVF automatic extraction model; and ∂ is the derivative operator, denoting partial differentiation with respect to the x and y coordinates;
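The smoothness term L_DVF can be approximated with finite differences (a sketch under assumptions: `np.diff` stands in for the spatial derivatives, the sums are taken as means, and the higher-order weights λ_k are passed in as `lambdas`):

```python
import numpy as np

def dvf_smoothness(dvf, lambdas=(1.0,)):
    """Smoothness term L_DVF = L1 + L2 + L3 via finite differences.

    dvf     : (2, H, W) predicted deformation vector field.
    lambdas : weights for the third- and higher-order terms (k >= 3).
    """
    def deriv_energy(field, order):
        # Squared finite differences of the given order along x and y.
        dx = np.diff(field, n=order, axis=-1)
        dy = np.diff(field, n=order, axis=-2)
        return np.mean(dx ** 2) + np.mean(dy ** 2)

    l1 = deriv_energy(dvf, 1)                     # first-order term
    l2 = deriv_energy(dvf, 2)                     # second-order term
    l3 = sum(w * deriv_energy(dvf, k + 3)         # third order and above
             for k, w in enumerate(lambdas))
    return l1 + l2 + l3
```

A spatially constant field has zero loss; a linear ramp is penalized only by the first-order term, since its higher-order differences vanish.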
s3: and training an unsupervised learning model in the test set, and determining the optimal value of the super parameter lambda.
8. The method for automatically tracking a tumor target volume for magnetic resonance guided radiation therapy according to claim 7, wherein: the trained DVF automatic extraction model is evaluated in the following manner:
re-collecting a plurality of independent test sets; manually delineating the outline of the target area of each frame of the cine MRI image of the independent test set;
defining the target contour of the reference frame as fixed GTV, and defining the target contour of the motion frame as moving GTV;
inputting the independent test set into a trained DVF automatic extraction model to obtain the change information of each pixel point of the motion frame of each patient in the independent test set relative to the reference frame, namely the DVF information of each pixel point;
applying the DVF information to the target-region contour of the motion frame (the moving GTV) through an inverse-warping (deconvolution) operation to obtain the output GTV, an intermediate variable used for model evaluation;
the output GTV is compared with the fixed GTV in terms of the Dice similarity coefficient (DSC) and the Hausdorff distance (HD) to evaluate the accuracy of the DVF automatic extraction model.
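The two evaluation metrics can be sketched as follows (a minimal illustration; `scipy.spatial.distance.directed_hausdorff` computes the directed distances whose maximum is the symmetric HD):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dsc(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the foreground point sets."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
```

DSC = 1 and HD = 0 indicate a perfect match between the output GTV and the fixed GTV; a rigid shift of the contour lowers DSC and raises HD proportionally.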
CN202410511929.4A 2024-04-26 Automatic tracking method for tumor target area of magnetic resonance guided radiation therapy Active CN118079256B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410511929.4A CN118079256B (en) 2024-04-26 Automatic tracking method for tumor target area of magnetic resonance guided radiation therapy


Publications (2)

Publication Number Publication Date
CN118079256A true CN118079256A (en) 2024-05-28
CN118079256B CN118079256B (en) 2024-07-12


Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120035462A1 (en) * 2010-08-06 2012-02-09 Maurer Jr Calvin R Systems and Methods for Real-Time Tumor Tracking During Radiation Treatment Using Ultrasound Imaging
CN110570489A (en) * 2019-09-05 2019-12-13 重庆医科大学附属第一医院 motion compensation high-quality 4D-CBCT image reconstruction method based on bilateral filtering design
EP3628372A1 (en) * 2018-09-28 2020-04-01 Varian Medical Systems International AG Methods and systems for adaptive radiotherapy treatment planning using deep learning engines
CN112330822A (en) * 2020-11-04 2021-02-05 复旦大学附属中山医院 Real-time three-dimensional heart image automatic target area tracking and identifying system
US20210046331A1 (en) * 2019-08-13 2021-02-18 Elekta Ltd. Automatic gating with an mr linac
WO2021184118A1 (en) * 2020-03-17 2021-09-23 Vazquez Romaguera Liset Methods and systems for reconstructing a 3d anatomical structure undergoing non-rigid motion
CN114049948A (en) * 2021-12-21 2022-02-15 山东第一医科大学附属肿瘤医院(山东省肿瘤防治研究院、山东省肿瘤医院) Automatic quality control method, system and platform for radiotherapy process
US20220092771A1 (en) * 2020-09-18 2022-03-24 Siemens Healthcare Gmbh Technique for quantifying a cardiac function from CMR images
CN117100393A (en) * 2023-08-23 2023-11-24 新加坡六莲科技有限公司 Method, system and device for video-assisted surgical target positioning
WO2023230310A1 (en) * 2022-05-26 2023-11-30 Mary Hitchcock Memorial Hospital, For Itself And On Behalf Of Dartmouth-Hitchcock Clinic System and method for real-time image registration during radiotherapy using deep learning


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
"《中国生物医学工程学报》2017年第36卷作者索引", 中国生物医学工程学报, no. 06, 20 December 2017 (2017-12-20) *
LIANG, XIAO: "Evaluation of Liver Respiratory Biomechanics using 4D-MRI", 《DUKE UNIVERSITY》, 31 December 2014 (2014-12-31) *
SHAO, HUA-CHIEH ET AL: "3D cine-magnetic resonance imaging using spatial and temporal implicit neural representation learning (STINR-MR).", 《ARXIV》, 18 August 2023 (2023-08-18) *
STEMKENS, B. ET AL: "Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy", 《PHYSICS IN MEDICINE AND BIOLOGY》, 21 July 2016 (2016-07-21) *
刘昊;王冠华;章强;李雨泽;陈慧军: "3D脑肿瘤分割的Dice损失函数的优化", 《中国医疗设备》, 10 June 2019 (2019-06-10) *

Similar Documents

Publication Publication Date Title
CN110930416B (en) MRI image prostate segmentation method based on U-shaped network
CN109272443B (en) PET and CT image registration method based on full convolution neural network
CN113012172B (en) AS-UNet-based medical image segmentation method and system
CN106204550B (en) A kind of method for registering and system of non-rigid multi modal medical image
CN111798462A (en) Automatic delineation method for nasopharyngeal carcinoma radiotherapy target area based on CT image
CN112785632B (en) Cross-modal automatic registration method for DR and DRR images in image-guided radiotherapy based on EPID
CN107341776A (en) Single frames super resolution ratio reconstruction method based on sparse coding and combinatorial mapping
CN109767459A (en) Novel ocular base map method for registering
CN111325750A (en) Medical image segmentation method based on multi-scale fusion U-shaped chain neural network
CN115409739B (en) Method and system for automatically sketching organs at risk
CN111383759A (en) Automatic pneumonia diagnosis system
CN115830016A (en) Medical image registration model training method and equipment
CN116612174A (en) Three-dimensional reconstruction method and system for soft tissue and computer storage medium
CN116342516A (en) Model integration-based method and system for assessing bone age of X-ray images of hand bones of children
CN111046893A (en) Image similarity determining method and device, and image processing method and device
CN116843679B (en) PET image partial volume correction method based on depth image prior frame
CN117876690A (en) Ultrasonic image multi-tissue segmentation method and system based on heterogeneous UNet
CN118079256B (en) Automatic tracking method for tumor target area of magnetic resonance guided radiation therapy
CN118079256A (en) Automatic tracking method for tumor target area of magnetic resonance guided radiation therapy
CN116152235A (en) Cross-modal synthesis method for medical image from CT (computed tomography) to PET (positron emission tomography) of lung cancer
CN114581340A (en) Image correction method and device
CN117974832B (en) Multi-modal liver medical image expansion algorithm based on generation countermeasure network
CN116402812B (en) Medical image data processing method and system
CN112053330B (en) Diaphragm prediction system and method based on PCA and TSSM models
CN117974693B (en) Image segmentation method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant