CN116580814A - Deep learning-based radiotherapy plan automatic generation system and method - Google Patents


Info

Publication number
CN116580814A
CN116580814A (application number CN202310290064.9A)
Authority
CN
China
Prior art keywords
deep learning
module
dose distribution
unet
patient dose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310290064.9A
Other languages
Chinese (zh)
Inventor
王明清
杨瑞杰
庄洪卿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University Third Hospital (Peking University Third Clinical Medical College)
Original Assignee
Peking University Third Hospital (Peking University Third Clinical Medical College)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University Third Hospital (Peking University Third Clinical Medical College)
Priority to CN202310290064.9A
Publication of CN116580814A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00: ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/40: ICT specially adapted for therapies or health-improving plans relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Abstract

The application relates to a deep learning-based system and method for automatically generating radiotherapy plans. The system comprises a training unit and a post-processing unit. The training unit comprises an input module, an image preprocessing module, a 3D-Unet deep learning model construction module, a training network and an output module. The post-processing unit comprises an input end, a confidence map superposition output module, an output end, an inverse optimization module and a conversion module. The input end is used for inputting a new patient's CT image and the delineated structures of the organs at risk and the target volume, and for obtaining the respective confidence maps of three different trained 3D-Unet deep learning models. The confidence map superposition output module is used for superposing the three confidence maps by taking the maximum at each position, applying an optimal threshold, and predicting the new patient's dose distribution. The inverse optimization module is used for inputting the prediction into the planning system for inverse optimization, and the conversion module is used for converting the inversely optimized result into accelerator machine parameters and forming a new plan.

Description

Deep learning-based radiotherapy plan automatic generation system and method
Technical Field
The application relates to the technical field of medical assistance, in particular to a deep learning-based system and method for automatically generating radiotherapy plans.
Background
Treatment planning for radiation therapy has made tremendous progress over the past decades. These advances have come in the form of hardware and algorithmic innovations that created new treatment modalities such as intensity-modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), which greatly improved patients' prognosis and quality of life. While these developments have raised overall plan quality, they have also increased the complexity of treatment planning. This lengthens treatment planning time and widens the variation in the quality of finally approved plans. Moreover, because the optimization problems in IMRT and VMAT are non-convex, there is no guarantee that the final plan quality lies within a given increment of the theoretically optimal plan. Furthermore, with the push toward online adaptive radiation therapy, the demand and pressure to shorten treatment planning time to a few minutes, or even less, keeps growing.
In the last decade, since AlexNet won first place in the 2012 ImageNet competition, deep learning and other artificial intelligence techniques have made explosive progress, particularly in computer vision and imaging as well as in decision making. Deep learning has become an integral part of many different fields, from self-driving cars to household appliances. One of the main areas it has begun to revolutionize is healthcare. Radiation oncology has long been a technologically advanced field, with many studies and implementations involving artificial intelligence. The field has begun to demonstrate the effectiveness of such techniques in many of its sub-domains, such as diagnosis, imaging, segmentation, treatment planning, quality assurance, treatment delivery and follow-up. Some artificial intelligence techniques have already been implemented clinically in commercial systems.
Most current artificial intelligence-based automatic planning is knowledge-based planning (KBP), which acquires historical planning data and then extracts useful features for training models. These features include spatial information such as organs at risk and target volumes, distance-to-target histograms, overlapping volume histograms, structure shapes, the number of beams, and so on. Early versions of KBP used machine learning (ML) methods: hand-crafted features from patient data were fed into an ML model to learn an end-to-end mapping from these features to a plan property, such as a dose-volume histogram (DVH). When used in conjunction with an optimization engine, these frameworks could be semi-automated and could generate plans for new patients based on their anatomy.
However, early versions of KBP were highly limited by the complexity of the data that could be entered into the model and the type of data the model could predict. The output was typically limited to 1D or 2D data, such as a single constraint value or a DVH, and the remaining dose distribution depended entirely on the intuition of the physician and the planner when generating the final deliverable plan. Furthermore, it was often unclear which hand-crafted features should be entered into the model at all, so features were typically determined by trial and error. Moreover, manual feature engineering may lose subtle but vital information, reducing the predictive performance of the KBP model. The quality of the plan thus remained highly dependent on the skill and experience of the physician and planner.
Advances in deep learning now allow accurate 3D dose distribution prediction. One such model is the U-net of Ronneberger et al. Originally introduced for semantic segmentation of biomedical images, it can learn a pixel-to-pixel mapping between two data volumes by combining local and global features. This pixel-to-pixel or voxel-to-voxel mapping capability makes it an ideal candidate for volumetric dose prediction, where 3D anatomical data are input into the model to predict the 3D dose distribution. Furthermore, deep learning can take raw data as input rather than relying on hand-crafted features as classical ML does.
Prior-art methods are generally based on a 2D-Unet, residual neural networks and their improved versions, or a 3D-Unet and its improved versions, and seldom consider how to achieve high-precision prediction with fewer training iterations. The present design addresses this problem: multiple trained models are obtained through multiple loss function designs, prediction complementarity between the models is achieved through ensembling, and the reliability of the ensemble across different models is taken into account.
Disclosure of Invention
The application aims to provide a deep learning-based system and method for automatically generating radiotherapy plans, so as to solve at least the technical problems of how to achieve prediction complementarity among multiple models, how to reduce dynamic memory occupation, and how to improve efficiency.
In order to achieve the above object, the present application provides a deep learning-based automatic radiotherapy plan generation system comprising a training unit and a post-processing unit. The training unit comprises an input module, an image preprocessing module, a 3D-Unet deep learning model construction module, a training network and an output module. The input module is used for inputting CT images and the delineated structures of the organs at risk and the target volume. The image preprocessing module is used for preprocessing these inputs and feeding the preprocessed images to the 3D-Unet deep learning model construction module. The 3D-Unet deep learning model construction module is used for constructing three different 3D-Unet deep learning models based on three different loss functions. The training network is used for training the three models for a number of epochs on the preprocessed images until the deviation between the patient dose distribution output by the output module and the known, real patient dose distribution corresponding to the input delineated structures of the organs at risk and the target volume falls within a preset threshold range, yielding three different trained 3D-Unet deep learning models. The post-processing unit comprises an input end, a confidence map superposition output module, an output end, an inverse optimization module and a conversion module. The input end is used for respectively inputting a new patient's CT images and delineated structures of the organs at risk and target volume into the three trained models to obtain their respective confidence maps. The confidence map superposition output module is used for superposing the three confidence maps by taking the maximum at each position, applying an optimal threshold, predicting the new patient's dose distribution, and outputting the prediction through the output end to the inverse optimization module, which inputs the prediction into the planning system for inverse optimization. The conversion module is used for converting the inversely optimized result into accelerator machine parameters and forming a new plan.
Preferably, the preprocessing converts the three-dimensional image into 64 × 64 image blocks (patches).
Preferably, the three different loss functions include a cross entropy loss function, a generalized Dice similarity loss function, and a Tversky loss function.
Preferably, the cross entropy loss function is used for calculating the cross entropy loss between the network predictions and the target values of single-label and multi-label classification tasks, and is computed as:
loss = −(1/N) · Σ_{n=1..N} Σ_{i=1..K} T_ni · log(Y_ni)
where N is the number of observations and K is the number of classes; T_ni is the real patient dose distribution and Y_ni is the predicted patient dose distribution.
Preferably, the generalized Dice similarity loss function is computed as:
loss = 1 − 2 · Σ_{k=1..K} W_k Σ_{m=1..M} (Y_km · T_km) / Σ_{k=1..K} W_k Σ_{m=1..M} (Y_km + T_km)
where K is the number of classes and M is the number of elements in the first two dimensions of the predicted patient dose distribution Y_km; W_k is a class-specific weight factor controlling the degree to which each class contributes to the result; T_km is the true patient dose distribution.
The generalized Dice similarity loss is based on the Sørensen-Dice similarity and is used for measuring the overlap between two segmented images.
Preferably, the Tversky loss function is computed from the Tversky index
TI_c = Σ_m (Y_cm · T_cm) / ( Σ_m (Y_cm · T_cm) + α · Σ_m (Y_cm · (1 − T_cm)) + β · Σ_m ((1 − Y_cm) · T_cm) )
as loss = Σ_c (1 − TI_c), wherein c corresponds to a class and the terms (1 − Y_cm) and (1 − T_cm) correspond to not being in class c;
Y_cm is the predicted patient dose distribution and T_cm is the true patient dose distribution;
M is the number of elements in the first two dimensions of the predicted patient dose distribution Y_cm;
α is a weighting factor that controls the contribution of each class's false positives to the loss;
β is a weighting factor that controls the contribution of each class's false negatives to the loss;
the Tversky loss function is based on the Tversky index and is used for measuring the overlap between two segmented images.
The application also provides a deep learning-based automatic generation method for cervical cancer clinical target radiotherapy plans, which comprises the following steps:
S1, inputting CT images and the delineated structures of the organs at risk and the target volume;
S2, preprocessing the input CT images and the delineated structures of the organs at risk and the target volume;
S3, constructing three different 3D-Unet deep learning models based on three different loss functions;
S4, training the three different 3D-Unet deep learning models for a number of epochs on the preprocessed images until the deviation between the output patient dose distribution and the known, real patient dose distribution corresponding to the input CT images and delineated structures of the organs at risk and the target volume falls within a preset threshold range, so as to obtain three different trained 3D-Unet deep learning models;
S5, respectively inputting a new patient's CT images and delineated structures of the organs at risk and target volume into the three different trained 3D-Unet deep learning models to obtain their respective confidence maps;
S6, superposing the confidence maps of the three different trained 3D-Unet deep learning models by taking the maximum at each position, applying an optimal threshold, predicting the new patient's dose distribution, and inputting the prediction into the planning system for inverse optimization;
S7, converting the inversely optimized result into accelerator machine parameters and forming a new plan.
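Steps S5 and S6 can be sketched as follows. This is an illustrative numpy sketch under the assumption that "superposing the maximum positions" means taking a voxel-wise maximum across the three confidence maps; the function name and the threshold value shown are hypothetical, not taken from the patent.

```python
import numpy as np

def fuse_confidence_maps(maps, threshold=0.5):
    """Fuse the confidence maps of the three trained 3D-Unet models by
    keeping the maximum confidence at each voxel, then apply the chosen
    threshold to obtain the predicted high-confidence region."""
    fused = np.maximum.reduce([np.asarray(m) for m in maps])
    return fused, fused >= threshold

# Three toy one-dimensional "confidence maps" standing in for 3D volumes.
m1 = np.array([0.2, 0.9, 0.4])
m2 = np.array([0.6, 0.3, 0.1])
m3 = np.array([0.1, 0.8, 0.7])
fused, mask = fuse_confidence_maps([m1, m2, m3], threshold=0.5)
```

In practice the fused, thresholded map would be handed to the planning system for inverse optimization, as in step S6.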
Advantageous effects
Compared with the prior art, the application has the following beneficial effects:
1. Compared with previous neural network designs, the deep learning-based radiotherapy plan automatic generation system and method are built on a 3D-Unet with multi-channel input and can integrate the advantages of multiple loss functions; with fewer training iterations, prediction accuracy is improved by ensembling the predictions of the different loss functions, and the patch-based design reduces dynamic memory occupation and improves efficiency.
2. The application combines multiple trained models obtained from multiple loss functions and finally integrates the dose distribution result based on a confidence map, improving accuracy over a single-loss-function design. Introducing the confidence map also improves the reliability and credibility of the dose distribution.
Drawings
The accompanying drawings are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification; they illustrate the application and do not limit it.
Fig. 1 is a schematic diagram of an automatic generation system and method of radiotherapy plans based on deep learning according to the present application.
FIG. 2 is a flow chart of the process of obtaining a final predicted outcome map.
Detailed Description
The present application is described in more detail below to facilitate an understanding of the present application.
As shown in fig. 1, the deep learning-based automatic radiotherapy plan generation system of the application comprises a training unit and a post-processing unit. The training unit comprises an input module, an image preprocessing module, a 3D-Unet deep learning model construction module, a training network and an output module. The input module is used for inputting CT images and the delineated structures of the organs at risk and the target volume. The image preprocessing module is used for preprocessing these inputs and feeding the preprocessed images to the 3D-Unet deep learning model construction module. The 3D-Unet deep learning model construction module is used for constructing three different 3D-Unet deep learning models based on three different loss functions. The training network is used for training the three models for a number of epochs on the preprocessed images until the deviation between the patient dose distribution output by the output module and the known, real patient dose distribution corresponding to the input delineated structures of the organs at risk and the target volume falls within a preset threshold range, yielding three different trained 3D-Unet deep learning models. The post-processing unit comprises an input end, a confidence map superposition output module, an output end, an inverse optimization module and a conversion module. The input end is used for respectively inputting a new patient's CT images and delineated structures of the organs at risk and target volume into the three trained models to obtain their respective confidence maps. The confidence map superposition output module is used for superposing the three confidence maps by taking the maximum at each position, applying an optimal threshold, predicting the new patient's dose distribution, and outputting the prediction through the output end to the inverse optimization module, which inputs the prediction into the planning system for inverse optimization. The conversion module is used for converting the inversely optimized result into accelerator machine parameters and forming a new plan.
Preferably, the preprocessing converts the three-dimensional image into 64 × 64 blocks (patches).
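As a rough illustration of the patch idea, the sketch below tiles a volume into non-overlapping cubic blocks so each block can be processed within limited memory. The patent only states the patch edge length; the cubic 64-voxel tiling, the non-overlapping layout, and the function name are assumptions made for this example.

```python
import numpy as np

def split_into_patches(volume, size=64):
    """Cut a 3D volume into non-overlapping cubic patches so that each
    forward pass through the network fits in limited memory. Edges that
    do not fill a whole patch are dropped in this simplified sketch."""
    nx, ny, nz = (s // size for s in volume.shape)
    patches = []
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                patches.append(volume[i * size:(i + 1) * size,
                                      j * size:(j + 1) * size,
                                      k * size:(k + 1) * size])
    return patches

vol = np.zeros((128, 128, 64), dtype=np.float32)  # toy CT volume
patches = split_into_patches(vol)
```

A real pipeline would also stitch the per-patch predictions back into a full volume, typically with overlap to avoid seams.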
The criterion for the optimal threshold combines comprehensive indices such as prediction accuracy, false positive rate and false negative rate. For example, with the confidence map distributed between 0 and 1, the relevant indices are calculated at the five thresholds 0.9, 0.8, 0.7, 0.6 and 0.5; confidences below 0.5 are by default considered unreliable and are not selected; a curve is then drawn over these candidates to find the optimal threshold. The optimal threshold achieves the best trade-off of maximizing prediction accuracy while minimizing false positives and false negatives, so the prediction under the optimal threshold has better overall performance than results under non-optimal thresholds. The relationship between the prediction result and the optimal threshold is shown in fig. 2. A patch can be colloquially understood as an image block: when the resolution of the image to be processed is too large and resources (such as GPU memory and computing power) are limited, the image can be divided into small blocks, and these small image blocks are patches.
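The threshold sweep described above can be sketched as follows. The scoring rule combining accuracy, false positive rate and false negative rate is a hypothetical example, since the patent does not fix the exact combination of indices; the function name and toy data are likewise assumptions.

```python
import numpy as np

def pick_threshold(conf, truth, candidates=(0.9, 0.8, 0.7, 0.6, 0.5)):
    """Evaluate each candidate cut-off on a fused confidence map against
    a known reference mask and keep the best one. Confidences below 0.5
    can never be selected because the lowest candidate is 0.5."""
    best_t, best_score = None, -np.inf
    for t in candidates:
        pred = conf >= t
        tp = np.sum(pred & truth)
        tn = np.sum(~pred & ~truth)
        fp = np.sum(pred & ~truth)
        fn = np.sum(~pred & truth)
        accuracy = (tp + tn) / conf.size
        fpr = fp / max(fp + tn, 1)    # false positive rate
        fnr = fn / max(fn + tp, 1)    # false negative rate
        score = accuracy - fpr - fnr  # example combination of the indices
        if score > best_score:
            best_t, best_score = t, score
    return best_t

conf = np.array([0.95, 0.85, 0.40, 0.10])
truth = np.array([True, True, False, False])
```

With these toy values the sweep keeps 0.8, the first threshold that separates the two reliable voxels from the two unreliable ones.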
In order to improve the precision, the application adopts three loss functions, including:
Loss function 1: the cross entropy loss function is used for calculating the cross entropy loss between the network predictions and the target values of single-label and multi-label classification tasks, and is computed as:
loss = −(1/N) · Σ_{n=1..N} Σ_{i=1..K} T_ni · log(Y_ni)
where N is the number of observations and K is the number of classes; T_ni is the real patient dose distribution and Y_ni is the predicted patient dose distribution.
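A minimal numpy sketch of this voxel-wise cross entropy follows (the standard form; the array shapes and the eps clipping are implementation assumptions, not details from the patent):

```python
import numpy as np

def cross_entropy_loss(Y, T, eps=1e-7):
    """Cross entropy between predicted probabilities Y and one-hot
    targets T, both shaped (N, K): N observations, K classes."""
    Y = np.clip(Y, eps, 1.0)                 # avoid log(0)
    return -np.mean(np.sum(T * np.log(Y), axis=1))

T = np.array([[1.0, 0.0], [0.0, 1.0]])       # one-hot targets
Y_good = np.array([[0.9, 0.1], [0.1, 0.9]])  # confident and correct
Y_bad = np.array([[0.1, 0.9], [0.9, 0.1]])   # confident and wrong
```

As expected, a confidently correct prediction yields a much smaller loss than a confidently wrong one.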
Loss function 2: the generalized Dice similarity loss is based on the Sørensen-Dice similarity and is used for measuring the overlap between two segmented images. It is computed as:
loss = 1 − 2 · Σ_{k=1..K} W_k Σ_{m=1..M} (Y_km · T_km) / Σ_{k=1..K} W_k Σ_{m=1..M} (Y_km + T_km)
where K is the number of classes and M is the number of elements in the first two dimensions of the predicted patient dose distribution Y_km, and W_k is a class-specific weight controlling the degree to which each class contributes to the result; this weight helps counteract the influence of larger regions on the Dice similarity coefficient. T_km is the true patient dose distribution.
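A minimal numpy sketch of the generalized Dice loss follows. The inverse-squared-volume weights W_k = 1/(Σ_m T_km)² are the standard choice for counteracting large regions, assumed here since the patent does not spell out the weight definition; shapes and eps are likewise assumptions.

```python
import numpy as np

def generalized_dice_loss(Y, T, eps=1e-7):
    """Generalized Dice loss for arrays shaped (M, K): M elements, K
    classes. W_k = 1 / (sum_m T_km)^2 counteracts the influence of
    larger regions on the Dice similarity coefficient."""
    W = 1.0 / (np.sum(T, axis=0) ** 2 + eps)
    numer = 2.0 * np.sum(W * np.sum(Y * T, axis=0))
    denom = np.sum(W * np.sum(Y + T, axis=0)) + eps
    return 1.0 - numer / denom

T = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])  # one-hot ground truth
```

Perfect overlap drives the loss to zero, while a completely wrong prediction drives it toward one.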
Loss function 3: the Tversky loss function is based on the Tversky index and is used for measuring the overlap between two segmented images. It is computed from the Tversky index
TI_c = Σ_m (Y_cm · T_cm) / ( Σ_m (Y_cm · T_cm) + α · Σ_m (Y_cm · (1 − T_cm)) + β · Σ_m ((1 − Y_cm) · T_cm) )
as loss = Σ_c (1 − TI_c), wherein c corresponds to a class and the terms (1 − Y_cm) and (1 − T_cm) correspond to not being in class c;
Y_cm is the predicted patient dose distribution and T_cm is the true patient dose distribution;
M is the number of elements in the first two dimensions of the predicted patient dose distribution Y_cm;
α is a weighting factor that controls the contribution of each class's false positives to the loss;
β is a weighting factor that controls the contribution of each class's false negatives to the loss.
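A minimal numpy sketch of the Tversky loss follows; α weights false positives and β weights false negatives as described above, while the array shapes, default values and eps term are assumptions made for this example.

```python
import numpy as np

def tversky_loss(Y, T, alpha=0.5, beta=0.5, eps=1e-7):
    """Tversky loss for arrays shaped (M, K): M elements, K classes.
    alpha weights false positives and beta weights false negatives;
    alpha = beta = 0.5 reduces the Tversky index to the Dice score."""
    tp = np.sum(Y * T, axis=0)            # true positives per class
    fp = np.sum(Y * (1.0 - T), axis=0)    # false positives per class
    fn = np.sum((1.0 - Y) * T, axis=0)    # false negatives per class
    ti = tp / (tp + alpha * fp + beta * fn + eps)  # Tversky index
    return float(np.sum(1.0 - ti))

T = np.array([[1.0, 0.0], [0.0, 1.0]])    # one-hot ground truth
```

Raising β above α penalizes missed voxels more heavily, which is the usual motivation for choosing the Tversky loss over plain Dice.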
The application also provides a deep learning-based automatic generation method for cervical cancer clinical target radiotherapy plans, which comprises the following steps:
S1, inputting CT images and the delineated structures of the organs at risk and the target volume;
S2, preprocessing the input CT images and the delineated structures of the organs at risk and the target volume;
S3, constructing three different 3D-Unet deep learning models based on three different loss functions;
S4, training the three different 3D-Unet deep learning models for a number of epochs on the preprocessed images until the deviation between the output patient dose distribution and the known, real patient dose distribution corresponding to the input CT images and delineated structures of the organs at risk and the target volume falls within a preset threshold range, so as to obtain three different trained 3D-Unet deep learning models;
S5, respectively inputting a new patient's CT images and delineated structures of the organs at risk and target volume into the three different trained 3D-Unet deep learning models to obtain their respective confidence maps;
S6, superposing the confidence maps of the three different trained 3D-Unet deep learning models by taking the maximum at each position, applying an optimal threshold, predicting the new patient's dose distribution, and inputting the prediction into the planning system for inverse optimization;
S7, converting the inversely optimized result into accelerator machine parameters and forming a new plan.
The key points and advantages of the application include:
1. Compared with prior neural network designs, the 3D-Unet based on multi-channel input and a multi-loss-function design can integrate the advantages of multiple loss functions; with fewer training iterations, prediction accuracy is improved by ensembling the predictions of different loss functions, and the patch-based design reduces dynamic memory occupation and improves efficiency.
2. Multiple trained models obtained from the multiple loss functions are combined, and the dose distribution result is finally integrated based on the confidence map, improving accuracy over a single-loss-function design. Introducing the confidence map also improves the reliability and credibility of the dose distribution.
The key technical points of the application at least comprise:
1. The design of a 3D-Unet with multi-channel input and multiple loss functions.
2. Obtaining multiple trained models by combining multiple loss functions, and finally integrating the segmentation result based on a confidence map threshold.
The foregoing describes preferred embodiments of the present application, but is not intended to limit the application thereto. Modifications and variations to the embodiments disclosed herein may be made by those skilled in the art without departing from the scope and spirit of the application.

Claims (10)

1. A deep learning-based automatic radiotherapy plan generation system, characterized by comprising a training unit and a post-processing unit; the training unit comprises an input module, an image preprocessing module, a 3D-Unet deep learning model construction module, a training network and an output module, wherein the input module is used for inputting CT images and the delineated structures of the organs at risk and the target volume; the image preprocessing module is used for preprocessing the CT images and the delineated structures of the organs at risk and the target volume input by the input module, and for inputting the preprocessed images into the 3D-Unet deep learning model construction module; the 3D-Unet deep learning model construction module is used for constructing three different 3D-Unet deep learning models based on three different loss functions; the training network is used for training the three different 3D-Unet deep learning models for a number of epochs on the images preprocessed by the image preprocessing module until the deviation between the patient dose distribution output by the output module and the known, real patient dose distribution corresponding to the delineated structures of the organs at risk and the target volume input by the input module falls within a preset threshold range, so as to obtain three different trained 3D-Unet deep learning models; the post-processing unit comprises an input end, a confidence map superposition output module, an output end, an inverse optimization module and a conversion module; the input end is used for respectively inputting a new patient's CT images and delineated structures of the organs at risk and target volume into the three different trained 3D-Unet deep learning models to obtain their respective confidence maps; the confidence map superposition output module is used for superposing the confidence maps of the three different trained 3D-Unet deep learning models by taking the maximum at each position, applying an optimal threshold, predicting the new patient's dose distribution, and outputting the prediction through the output end to the inverse optimization module, wherein the inverse optimization module is used for inputting the prediction into the planning system for inverse optimization; and the conversion module is used for converting the inversely optimized result into accelerator machine parameters and forming a new plan.
2. The deep learning-based automatic radiotherapy plan generation system according to claim 1, wherein the preprocessing converts the three-dimensional image into 64 × 64 image blocks.
3. The deep learning-based automatic radiotherapy plan generation system according to claim 1, wherein the three different loss functions comprise a cross entropy loss function, a generalized Dice similarity loss function, and a Tversky loss function.
4. The deep learning-based automatic radiotherapy plan generation system according to claim 3, wherein the cross entropy loss function is used for calculating the cross entropy loss between the network predictions and the target values of single-label and multi-label classification tasks, and is computed as:
loss = −(1/N) · Σ_{n=1..N} Σ_{i=1..K} T_ni · log(Y_ni)
where N is the number of observations and K is the number of classes; T_ni is the real patient dose distribution and Y_ni is the predicted patient dose distribution.
5. The deep learning-based automatic radiotherapy plan generation system according to claim 3, wherein the generalized Dice similarity loss function is computed as:
loss = 1 − 2 · Σ_{k=1..K} W_k Σ_{m=1..M} (Y_km · T_km) / Σ_{k=1..K} W_k Σ_{m=1..M} (Y_km + T_km)
where K is the number of classes and M is the number of elements in the first two dimensions of the predicted patient dose distribution Y_km; W_k is a class-specific weight factor controlling the degree to which each class contributes to the result; T_km is the true patient dose distribution;
the generalized Dice similarity loss is based on the Sørensen-Dice similarity and is used for measuring the overlap between two segmented images.
6. The deep learning-based automatic radiotherapy plan generation system according to claim 3, wherein the Tversky loss function is computed from the Tversky index
TI_c = Σ_m (Y_cm · T_cm) / ( Σ_m (Y_cm · T_cm) + α · Σ_m (Y_cm · (1 − T_cm)) + β · Σ_m ((1 − Y_cm) · T_cm) )
as loss = Σ_c (1 − TI_c), wherein c corresponds to a class and the terms (1 − Y_cm) and (1 − T_cm) correspond to not being in class c;
Y_cm is the predicted patient dose distribution and T_cm is the true patient dose distribution;
M is the number of elements in the first two dimensions of the predicted patient dose distribution Y_cm;
α is a weighting factor that controls the contribution of each class's false positives to the loss;
β is a weighting factor that controls the contribution of each class's false negatives to the loss;
the Tversky loss function is based on the Tversky index and is used for measuring the overlap between two segmented images.
7. An automatic radiotherapy plan generation method using the deep learning-based automatic radiotherapy plan generation system according to any one of claims 1 to 6, comprising the following steps:
s1, inputting CT images, and delineating structures of organs at risk and target areas;
s2, preprocessing the input CT image, the delineating structure of the organs at risk and the target area;
s3, constructing three different 3D-Unet deep learning models based on three different loss functions;
s4, training the three different 3D-Unet deep learning models for a number of epochs on the preprocessed images, until the deviation between the output patient dose distribution and the known real patient dose distribution of the input CT image, organs at risk and target-area delineating structures falls within a preset threshold range, yielding three different trained 3D-Unet deep learning models;
s5, respectively inputting CT images of new patients, organs at risk and delineating structures of target areas into three different trained 3D-Unet deep learning models to obtain respective confidence maps of the three different trained 3D-Unet deep learning models;
s6, superposing the maxima of the confidence maps of the three different trained 3D-Unet deep learning models, applying an optimal threshold to predict the new patient dose distribution, and inputting the prediction result into a planning system for inverse optimization;
s7, converting the result after the inverse optimization into machine parameters of the accelerator, and forming a new plan.
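Steps s5 and s6 describe fusing the three models' confidence maps and thresholding the result. The patent states the combination only qualitatively ("superposing the maxima ... taking an optimal threshold"), so the voxel-wise maximum used in this sketch is one plausible reading, not the patent's confirmed rule:

```python
import numpy as np

def ensemble_dose_mask(conf_maps, threshold):
    """Fuse confidence maps from the three trained 3D-Unet models
    (steps s5-s6) by voxel-wise maximum, then apply a threshold to
    obtain the predicted dose region. Illustrative reading of the
    patent's qualitative description, not its confirmed algorithm.
    """
    stacked = np.stack(conf_maps, axis=0)  # (3, D, H, W)
    fused = stacked.max(axis=0)            # voxel-wise maximum superposition
    return fused >= threshold              # boolean prediction mask
```

The resulting mask would then feed the planning system's inverse optimization (step s6) before conversion to accelerator machine parameters (step s7).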
8. The radiotherapy plan automatic generation method of claim 7, wherein the preprocessing converts the three-dimensional image into 64 x 64 image blocks.
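The preprocessing of claim 8 can be sketched as block extraction from the CT volume. The claim states 64 x 64 blocks of a three-dimensional image; cubic 64-voxel blocks and zero-padding to a multiple of the block size are assumptions made here for illustration:

```python
import numpy as np

def extract_patches(volume, size=64):
    """Split a 3D volume into non-overlapping size^3 blocks.

    Cubic blocks and zero-padding are assumptions; claim 8 only
    states that the three-dimensional image is converted into
    64 x 64 image blocks.
    """
    # Pad each axis up to the next multiple of `size`.
    pad = [(0, (-s) % size) for s in volume.shape]
    v = np.pad(volume, pad)
    d, h, w = (s // size for s in v.shape)
    # Rearrange so each block becomes one leading-axis entry.
    patches = (v.reshape(d, size, h, size, w, size)
                .transpose(0, 2, 4, 1, 3, 5)
                .reshape(-1, size, size, size))
    return patches
```

For a 64 x 128 x 64 CT volume this yields two 64-voxel cubes, each of which can be fed to the 3D-Unet models independently.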
9. The radiotherapy plan automatic generation method of claim 7, wherein the three different loss functions comprise a cross-entropy loss function, a generalized Dice similarity loss function, and a Tversky loss function.
10. The radiotherapy plan automatic generation method of claim 9, wherein the cross-entropy loss function is used to calculate the cross-entropy loss between the network predictions and the target values of single-label and multi-label classification tasks, and is calculated as:
L_{CE} = -\frac{1}{N} \sum_{n=1}^{N} \sum_{i=1}^{K} T_{ni} \log Y_{ni}
where N is the number of observations and K is the number of classes; T_{ni} is the real patient dose distribution and Y_{ni} is the predicted patient dose distribution.
CN202310290064.9A 2023-03-23 2023-03-23 Deep learning-based radiotherapy plan automatic generation system and method Pending CN116580814A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310290064.9A CN116580814A (en) 2023-03-23 2023-03-23 Deep learning-based radiotherapy plan automatic generation system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310290064.9A CN116580814A (en) 2023-03-23 2023-03-23 Deep learning-based radiotherapy plan automatic generation system and method

Publications (1)

Publication Number Publication Date
CN116580814A true CN116580814A (en) 2023-08-11

Family

ID=87534746

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310290064.9A Pending CN116580814A (en) 2023-03-23 2023-03-23 Deep learning-based radiotherapy plan automatic generation system and method

Country Status (1)

Country Link
CN (1) CN116580814A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116779173A (en) * 2023-08-24 2023-09-19 北京大学第三医院(北京大学第三临床医学院) Radiation therapy dose prediction system and method based on artificial intelligence
CN116779173B (en) * 2023-08-24 2023-11-24 北京大学第三医院(北京大学第三临床医学院) Radiation therapy dose prediction system and method based on artificial intelligence

Similar Documents

Publication Publication Date Title
CN108717866B (en) Method, device, equipment and storage medium for predicting radiotherapy plan dose distribution
US11386557B2 (en) Systems and methods for segmentation of intra-patient medical images
CN110197709B (en) Three-dimensional dose prediction method based on deep learning and priori planning
EP3856335B1 (en) Methods and systems for radiotherapy treatment planning using deep learning engines
CN112546463B (en) Radiotherapy dose automatic prediction method based on deep neural network
CN111028914B (en) Artificial intelligence guided dose prediction method and system
CN114681813B (en) Automatic radiation therapy planning system, automatic radiation therapy planning method, and storage medium
WO2019027924A1 (en) 3d deep planning radiotherapy system and method
WO2020244172A1 (en) Plan implementation method and device based on predicted dose guidance and gaussian process optimization
CN116580814A (en) Deep learning-based radiotherapy plan automatic generation system and method
EP4287204A2 (en) Methods and systems for adaptive radiotherapy treatment planning using deep learning engines
Jiao et al. TransDose: Transformer-based radiotherapy dose prediction from CT images guided by super-pixel-level GCN classification
CN113795303A (en) Method and system for quality-aware continuous learning for radiation therapy planning
CN114862800A (en) Semi-supervised medical image segmentation method based on geometric consistency constraint
CN109671499B (en) Method for constructing rectal toxicity prediction system
CN116758089A (en) Cervical cancer clinical target area and normal organ intelligent sketching system and method
CN111888665B (en) Construction method of three-dimensional dose distribution prediction model based on adaptive countermeasure network
CN113674834A (en) Radiotherapy target region establishing and correcting method based on dose distribution preview system
CN110706779B (en) Automatic generation method of accurate target function of radiotherapy plan
CN109978852B (en) Deep learning-based radiotherapy image target region delineation method and system for micro tissue organ
CN116433679A (en) Inner ear labyrinth multi-level labeling pseudo tag generation and segmentation method based on spatial position structure priori
Deng et al. Neuro-dynamic programming for fractionated radiotherapy planning
CN113178242B (en) Automatic plan optimization system based on coupled generation countermeasure network
CN105477789A (en) Dynamic intensity-modulated radiotherapy method based on quadratic programming model suppressing total beam-out time
CN112419348B (en) Male pelvic cavity CT segmentation method based on multitask learning edge correction network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination