CN117197203B - Deformation registration model training and dose stacking method and device - Google Patents
- Publication number: CN117197203B (application CN202311160138.3A)
- Authority: CN (China)
- Prior art keywords: feature, deformation, dose, image, vector
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Radiation-Therapy Devices (AREA)
- Apparatus For Radiation Diagnosis (AREA)
Abstract
The invention provides a deformation registration model training method, a dose superposition method and a device, belonging to the field of artificial intelligence, comprising the following steps: determining a moving image and a reference image from an acquired radiotherapy tumor CT positioning image; inputting the moving image and the reference image into a deformation registration network for feature extraction and calculating correlation features; inputting the correlation features into a decoder composed of a 3D convolutional neural network to obtain a deformation field; spatially deforming the moving image according to the deformation field to obtain a registered image; performing structural similarity loss calculation on the registered image and the reference image, and obtaining the deformation registration model when the loss value meets the condition; determining a deformation field based on the obtained deformation registration model and performing deformation registration on the radiotherapy tumor CT positioning image; and realizing dose superposition of radiotherapy according to the deformation field and a preset dose superposition method. The invention can improve the accuracy of deformation registration and thereby achieve accurate evaluation of the comprehensive radiation therapy dose.
Description
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a deformation registration model training and dose stacking method, device, equipment and storage medium.
Background
At present, the main tumor treatment modalities are surgery, chemotherapy and radiotherapy; radiotherapy is a treatment mode that uses high-energy rays to destroy cell structures and thereby kill tumors. Tumor radiotherapy may involve designing treatment plans multiple times, including two-course radiotherapy, multi-center radiotherapy, combined internal and external irradiation, and similar situations. Because the body position in the positioning images and the internal tissue morphology of the patient differ across multiple irradiations, the dose distributions planned on the multiple positioning images cannot be directly added for comprehensive dose evaluation.
Taking cervical cancer radiotherapy as an example, most patients need a reasonable combination of internal and external irradiation. The internal and external irradiation images differ greatly due to the different positions and dose fractionations of internal and external irradiation, as well as tumor regression, bladder filling degree, applicators and other factors during treatment; as a result, the comprehensive internal and external irradiation dose delivered to the same patient is difficult to evaluate, and the radiation injury to normal organs cannot be accurately assessed.
Disclosure of Invention
The invention provides a deformation registration model training and dose superposition method, device, equipment and storage medium, which can improve the accuracy of deformation registration of a CT positioning image of a radiotherapy tumor and realize accurate evaluation of comprehensive radiation treatment dose through the deformation registration model.
In a first aspect, an embodiment of the present invention provides a deformation registration model training method, including:
determining a moving image and a reference image according to the acquired CT positioning image of the radiotherapy tumor;
inputting the moving image and the reference image into a deformation registration network for feature extraction, and calculating correlation features;
inputting the correlation characteristics into a decoder formed by a 3D convolutional neural network to obtain a deformation field;
carrying out space deformation on the moving image according to the deformation field to obtain a registered image;
and carrying out structural similarity loss calculation on the registered image and the reference image, determining a loss value, and completing training when the loss value is less than or equal to a loss value threshold.
Optionally, inputting the moving image and the reference image into a deformation registration network for feature extraction, and calculating the correlation feature includes:
respectively inputting the moving image and the reference image into a 3D convolutional neural network to perform feature extraction to generate a first feature and a second feature;
stretching the first feature and the second feature into a one-dimensional form and merging position encoding information in a pixel-by-pixel addition mode to obtain a first feature embedding vector and a second feature embedding vector;
inputting the first feature embedded vector and the second feature embedded vector into a full connection layer to perform feature dimension transformation;
and carrying out correlation calculation on the first feature embedded vector and the second feature embedded vector after the dimension transformation according to a preset correlation calculation formula to obtain correlation features.
Optionally, the correlation calculation formula is as follows:

M_r = Softmax(Q_M · K_F^T / √d) · V_F

wherein M_r is the correlation feature, Softmax is the normalized exponential function, the feature embedding vector corresponding to the features extracted from the moving image serves as the query vector Q_M, the feature embedding vector corresponding to the features extracted from the reference image serves as the key vector K_F and the value vector V_F, d is the dimension of the query vector, and K_F^T is the transpose of K_F.
In a second aspect, embodiments of the present invention provide a dose stacking method based on deformation registration, the method comprising:
acquiring CT positioning images of the radiotherapy tumor;
determining a deformation field based on a preset deformation registration model, and performing deformation registration on the CT positioning image of the radiotherapy tumor;
and realizing the dose superposition of the radiotherapy according to the deformation field and a preset dose superposition method.
Optionally, the dose superposition of the radiotherapy is realized according to the deformation field and a preset dose superposition method, which comprises the following steps:
performing biological conversion on all dose values of the external irradiation treatment and the internal irradiation treatment, using the linear-quadratic model to convert the irradiated physical dose into the equivalent dose in 2 Gy fractions (EQD2);
carrying out deformation registration on the CT positioning image of the radiotherapy tumor according to the deformation field;
and carrying out voxel-level point-to-point addition on the treatment dose distributions of the multiple radiotherapy sessions according to the deformation result to obtain the superimposed dose distribution.
Optionally, the method further comprises:
performing DVH statistical analysis and dosiomics feature extraction for each organ at risk on the superimposed dose distribution;

using the DVH statistics, dosiomics features and clinical indices of the patient as input, and using a machine-learning-based classification model to construct a prediction model from feature parameters to toxicity response; the classification model includes, but is not limited to: SVM, Random Forest, or XGBoost;

and carrying out toxicity response prediction according to the prediction model.
In a third aspect, an embodiment of the present invention provides a deformation registration model training apparatus, including:
the determining module is used for determining a moving image and a reference image according to the acquired CT positioning image of the radiotherapy tumor;
the feature processing module is used for inputting the moving image and the reference image into the deformation registration network to perform feature extraction and calculate correlation features;
the decoding module is used for inputting the correlation characteristics into a decoder formed by the 3D convolutional neural network to obtain a deformation field;
the deformation module is used for carrying out space deformation on the moving image according to the deformation field to obtain a registered image;
and the calculation module is used for carrying out structural similarity loss calculation according to the registered image and the reference image, determining a loss value, and completing training when the loss value is smaller than or equal to a loss value threshold value.
In a fourth aspect, embodiments of the present invention provide a dose stacking device based on deformation registration, the device comprising:
the acquisition module is used for acquiring CT positioning images of the radiotherapy tumor;
the deformation module is used for determining a deformation field based on a preset deformation registration model and carrying out deformation registration on the CT positioning image of the radiotherapy tumor;
and the superposition module is used for realizing the dose superposition of the radiotherapy according to the deformation field and a preset dose superposition method.
In a fifth aspect, an embodiment of the present invention provides an electronic device, including a memory and a processor, where the memory stores a computer program, and the processor implements the method according to any implementation manner of the first aspect when executing the program.
In a sixth aspect, embodiments of the present invention provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method according to any of the implementations of the first aspect.
The invention provides a deformation registration model training and dose stacking method, device, equipment and storage medium, comprising the following steps: dividing the acquired radiotherapy tumor CT positioning image into a moving image and a reference image; inputting the moving image and the reference image into a deformation registration network for feature extraction and calculating correlation features; inputting the correlation features into a decoder composed of a 3D convolutional neural network to obtain a deformation field; spatially deforming the moving image according to the deformation field to obtain a registered image; performing structural similarity loss calculation on the registered image and the reference image, determining a loss value, and completing training when the loss value is less than or equal to a loss value threshold; determining a deformation field based on the trained deformation registration model and performing deformation registration on the radiotherapy tumor CT positioning image; and realizing dose superposition of radiotherapy according to the deformation field and a preset dose superposition method. The invention can improve the accuracy of deformation registration of radiotherapy tumor CT positioning images and, through the deformation registration model, achieve accurate evaluation of the comprehensive radiation treatment dose.
It should be understood that the description in this summary is not intended to limit the critical or essential features of the embodiments of the invention, nor is it intended to limit the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
The above and other features, advantages and aspects of embodiments of the present invention will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, the same or similar reference numerals denote the same or similar elements.
FIG. 1 is a flowchart of a deformation registration model training method according to an embodiment of the present invention;
FIG. 2 is a diagram of a deformation registration model architecture according to an embodiment of the present invention;
FIG. 3 is a flow chart of a dose stacking method based on deformation registration according to an embodiment of the present invention;
FIG. 4 is a flow chart of another method of dose stacking based on deformation registration in accordance with an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a training device for a deformation registration model according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of a dose stacking device based on deformation registration according to an embodiment of the present invention;
fig. 7 is a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to enable a person skilled in the art to better understand the technical solutions in one or more embodiments of the present specification, the technical solutions in one or more embodiments of the present specification will be clearly and completely described below with reference to the drawings in one or more embodiments of the present specification, and it is obvious that the described embodiments are only some embodiments of the present specification, not all embodiments. All other embodiments, which can be made by one or more embodiments of the present disclosure without inventive faculty, are intended to be within the scope of the present disclosure.
It should be noted that, the description of the embodiment of the present invention is only for the purpose of more clearly describing the technical solution of the embodiment of the present invention, and does not constitute a limitation on the technical solution provided by the embodiment of the present invention.
Fig. 1 is a flowchart of a deformation registration model training method according to an embodiment of the present invention. As shown in fig. 1, includes:
optionally, the deformation registration model is obtained by training an image registration network based on a hybrid Transformer structure;
s101, determining a moving image and a reference image according to the acquired CT positioning image of the radiotherapy tumor.
Optionally, the radiotherapy tumor CT positioning image comprises an internal irradiation CT image and an external irradiation CT image.
For example, the internal irradiation CT image may be set as the moving image and the external irradiation CT image as the reference image; the external irradiation CT image may be set as the moving image and the internal irradiation CT image as the reference image; or both the moving image and the reference image may be internal irradiation CT images, or both may be external irradiation CT images.
It should be noted that the determination of the moving image and the reference image is not limited to the scheme in the present embodiment, and the scheme for determining the deformation field may be applied.
S102, inputting the moving image and the reference image into a deformation registration network for feature extraction, and calculating correlation features.
Optionally, the method specifically includes:
respectively inputting the moving image and the reference image into a 3D convolutional neural network to perform feature extraction to generate a first feature and a second feature;
illustratively, the spatial resolution of the feature widths and heights of the first and second features may be 1/16 of the original image.
Stretching the first feature and the second feature into a one-dimensional form and merging position encoding information in a pixel-by-pixel addition mode to obtain a first feature embedding vector and a second feature embedding vector;
inputting the first feature embedded vector and the second feature embedded vector into a full connection layer to perform feature dimension transformation;
illustratively, the feature dimension may be transformed to 1×d;
and carrying out correlation calculation on the first feature embedded vector and the second feature embedded vector after the dimension transformation according to a preset correlation calculation formula to obtain correlation features.
Optionally, the correlation calculation formula is as follows:

M_r = Softmax(Q_M · K_F^T / √d) · V_F

wherein M_r is the correlation feature, Softmax is the normalized exponential function, the feature embedding vector corresponding to the features extracted from the moving image serves as the query vector Q_M, the feature embedding vector corresponding to the features extracted from the reference image serves as the key vector K_F and the value vector V_F, d is the dimension of the query vector, and K_F^T is the transpose of K_F.
S103, inputting the correlation characteristic into a decoder formed by the 3D convolutional neural network to obtain a deformation field.
S104, carrying out space deformation on the moving image according to the deformation field to obtain a registered image.
Optionally, after the spatial deformation, the coordinate systems of the moving image and the reference image can be unified.
S105, carrying out structural similarity loss calculation on the registered image and the reference image, determining a loss value, and completing training when the loss value is smaller than or equal to a loss value threshold.
Optionally, if the loss value is greater than the loss value threshold, training continues until the loss value is less than or equal to the loss value threshold or a training iteration threshold is reached, at which point training ends.
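The embodiment does not spell out the structural similarity formula; the sketch below assumes the standard global (windowless) SSIM with the usual stabilizing constants, turned into a loss that is zero for a perfectly registered image, which is the quantity compared against the loss value threshold:

```python
import numpy as np

def ssim_loss(img_a, img_b, c1=0.01**2, c2=0.03**2):
    """Structural-similarity loss between registered and reference images:
    loss = 1 - SSIM, so 0 means the two images match perfectly."""
    mu_a, mu_b = img_a.mean(), img_b.mean()
    var_a, var_b = img_a.var(), img_b.var()
    cov = ((img_a - mu_a) * (img_b - mu_b)).mean()
    ssim = ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a**2 + mu_b**2 + c1) * (var_a + var_b + c2))
    return 1.0 - ssim

rng = np.random.default_rng(0)
reference = rng.random((8, 8, 8))                     # toy reference volume
assert abs(ssim_loss(reference, reference)) < 1e-9    # perfect match: zero loss
assert ssim_loss(reference, 1.0 - reference) > 0.5    # inverted image: large loss
```

A production implementation would usually compute SSIM over local windows and average; the global variant above keeps the illustration short.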
Illustratively, fig. 2 is a deformation registration model structure diagram according to an embodiment of the present invention, as shown in fig. 2:
Firstly, features of the moving image and the reference image are extracted by two mutually independent 3D convolutional neural networks, generating three-dimensional features F_M and F_F respectively (the spatial resolution of the feature width and height is 1/16 of that of the original image).

Then, to adapt to the subsequent processing flow of the Transformer structure, the generated three-dimensional features F_M and F_F are stretched into a one-dimensional form, and the position encoding information of the features is merged in a pixel-by-pixel addition mode, thereby obtaining the feature embedding vectors required for processing by the Transformer structure; the feature dimension is then transformed to 1×d through a fully connected layer.
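As a rough illustration of the stretch-and-embed step above, the NumPy sketch below flattens a toy 3D feature map into per-voxel tokens, merges a position encoding by element-wise addition, and projects to dimension d with a fully connected layer. The sinusoidal encoding scheme and the random projection weights are assumptions for illustration, not the patent's exact design:

```python
import numpy as np

def embed_features(feat_3d, d_model, rng):
    """Stretch a 3D CNN feature map (C, D, H, W) into per-voxel tokens,
    merge position encoding by element-wise addition, then apply a fully
    connected projection to dimension d_model (random weights here)."""
    c = feat_3d.shape[0]
    tokens = feat_3d.reshape(c, -1).T              # (N_voxels, C)
    n = tokens.shape[0]
    pos = np.arange(n)[:, None]
    i = np.arange(c)[None, :]
    pe = np.where(i % 2 == 0,                      # assumed sinusoidal encoding
                  np.sin(pos / 10000 ** (i / c)),
                  np.cos(pos / 10000 ** ((i - 1) / c)))
    tokens = tokens + pe                           # pixel-by-pixel addition
    W = rng.standard_normal((c, d_model)) / np.sqrt(c)  # stand-in FC layer
    return tokens @ W

rng = np.random.default_rng(0)
F_M = rng.standard_normal((8, 4, 4, 4))    # toy moving-image feature, C = 8
E_M = embed_features(F_M, d_model=16, rng=rng)
assert E_M.shape == (4 * 4 * 4, 16)        # one 1 x d embedding per voxel
```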
Based on the attention mechanism of the Transformer, the feature embedding vector corresponding to the extracted features F_M of the moving image is used as the query vector Q_M, and the feature embedding vector corresponding to the extracted features F_F of the reference image is used as the key vector K_F and the value vector V_F; pixel-by-pixel correlation calculation is performed on the features of the reference image and the moving image using the following mixed attention mechanism:

M_r = Softmax(Q_M · K_F^T / √d) · V_F

wherein M_r is the correlation feature, Softmax is the normalized exponential function, d is the dimension of the query vector, and K_F^T is the transpose of K_F.
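The correlation calculation can be sketched directly in NumPy; the shapes are illustrative (64 voxel tokens of dimension d = 16), with the key and value vectors both taken from the reference image as the formula specifies:

```python
import numpy as np

def correlation_features(Q_M, K_F, V_F):
    """M_r = Softmax(Q_M K_F^T / sqrt(d)) V_F, with row-wise Softmax."""
    d = Q_M.shape[-1]
    scores = Q_M @ K_F.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)       # normalized exponential
    return attn @ V_F

rng = np.random.default_rng(1)
Q_M = rng.standard_normal((64, 16))        # moving-image embeddings as queries
K_F = rng.standard_normal((64, 16))        # reference-image embeddings as keys
M_r = correlation_features(Q_M, K_F, K_F)  # values also from the reference image
assert M_r.shape == (64, 16)
# with identity value vectors each output row is one Softmax row, summing to 1
assert np.allclose(
    correlation_features(np.eye(4), np.eye(4), np.eye(4)).sum(axis=1), 1.0)
```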
The correlation feature M_r calculated by the above formula is input into a decoder composed of a 3D convolutional neural network to predict a deformation field, and the moving image is spatially deformed using the predicted deformation field to obtain the registered image.
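The spatial deformation step can be sketched as follows. This toy version resamples the moving volume with nearest-neighbour lookup of a dense displacement field; a real implementation would typically interpolate trilinearly (e.g. a spatial transformer layer), so this is an illustration of the idea rather than the patent's implementation:

```python
import numpy as np

def warp_nearest(moving, field):
    """Spatially deform a 3D moving image with a dense deformation field;
    field[..., k] is the displacement (in voxels) along axis k.
    Nearest-neighbour sampling keeps the sketch short."""
    D, H, W = moving.shape
    z, y, x = np.meshgrid(np.arange(D), np.arange(H), np.arange(W),
                          indexing="ij")
    zs = np.clip(np.rint(z + field[..., 0]), 0, D - 1).astype(int)
    ys = np.clip(np.rint(y + field[..., 1]), 0, H - 1).astype(int)
    xs = np.clip(np.rint(x + field[..., 2]), 0, W - 1).astype(int)
    return moving[zs, ys, xs]

vol = np.arange(27, dtype=float).reshape(3, 3, 3)
identity = np.zeros((3, 3, 3, 3))
assert np.array_equal(warp_nearest(vol, identity), vol)  # zero field: unchanged
shift = np.zeros((3, 3, 3, 3)); shift[..., 2] = 1.0      # sample one voxel right
assert warp_nearest(vol, shift)[0, 0, 0] == vol[0, 0, 1]
```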
Optionally, the method further comprises:
evaluating the accuracy of a deformation registration algorithm according to a Dice Similarity Coefficient (DSC);
wherein DSC is a similarity measure of two sets, typically used to estimate the similarity between two contours; its value ranges from 0 to 1 (0 means no overlap, 1 means complete overlap).

Specifically, using the organ-at-risk segmentation contours associated with each external beam radiotherapy (EBRT) image and intracavitary brachytherapy (ICBT) image, the average DSC values of all patients in the test set under different image registration methods can be calculated. For a pair of internal and external irradiation images, the OAR contour of the external irradiation may be used as the gold standard, and the OAR contour after deformation of the internal irradiation image is compared with the gold standard to calculate the DSC value.
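A minimal implementation of the DSC evaluation described above, comparing a warped internal-irradiation OAR mask against the external-irradiation gold standard (the small 2D masks are illustrative toy data):

```python
import numpy as np

def dice_similarity(mask_a, mask_b):
    """DSC = 2|A ∩ B| / (|A| + |B|); 0 = no overlap, 1 = complete overlap."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0                    # both contours empty: treat as identical
    return 2.0 * np.logical_and(a, b).sum() / denom

gold = np.zeros((4, 4), dtype=bool); gold[:2, :] = True       # EBRT OAR contour
warped = np.zeros((4, 4), dtype=bool); warped[1:3, :] = True  # deformed ICBT OAR
print(dice_similarity(gold, warped))   # 0.5
```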
The embodiment of the invention provides a deformation registration model training method. First, two independent 3D convolutional feature extraction networks use the local receptive field of convolution kernels to learn voxel-level features specific to the moving image and the reference image respectively, extracting discriminative local features for each voxel of the two images. Then, based on the Transformer attention mechanism, each voxel feature of the moving image serves as a query vector and each voxel feature of the reference image serves as a key vector and a value vector, forming a hybrid form of attention modeling that performs voxel-by-voxel relation learning between each voxel of the moving image and each voxel of the reference image. On the one hand, this technical scheme learns dedicated discriminative voxel-level local features for each image, avoiding the difficulty that a single feature extraction structure has in separating the voxel features of the moving image and the reference image, and avoiding the problems of a pure Transformer structure, namely ineffective modeling of voxel-level local features, large model scale, heavy training resource consumption and slow inference. On the other hand, the hybrid Transformer formed by the Transformer attention mechanism explicitly models the voxel-level correlation learning between the moving image and the reference image during deformation registration, and the global receptive field of the Transformer further improves the discriminability of the learned registration features, thereby effectively improving deformation registration accuracy.
Fig. 3 is a flowchart of a dose stacking method based on deformation registration according to an embodiment of the present invention. As shown in fig. 3, includes:
s301, acquiring CT positioning images of the radiotherapy tumor.
Optionally, the radiotherapy tumor CT positioning image comprises an internal irradiation CT image and an external irradiation CT image.
S302, determining a deformation field based on a preset deformation registration model, and performing deformation registration on the CT positioning image of the radiotherapy tumor.
Alternatively, the training of the deformation registration model may refer to steps S101-S105, which are not described here again.
S303, realizing the dose superposition of the radiotherapy according to the deformation field and a preset dose superposition method.
Optionally, the method comprises:
all dose values of external beam radiotherapy (EBRT) and intracavitary brachytherapy (ICBT) are biologically converted, using the linear-quadratic model to convert the irradiated physical dose into the equivalent dose in 2 Gy fractions (EQD2);
carrying out deformation registration on the CT positioning image of the radiotherapy tumor according to the deformation field;
and carrying out voxel-level point-to-point addition on the dose distributions of the multiple radiotherapy sessions according to the deformation result to obtain the superimposed dose distribution.
Illustratively, fig. 4 is a flowchart of another dose stacking method based on deformation registration according to an embodiment of the present invention, as shown in fig. 4:
using ICBT5 as the reference image, performing deformation registration on the four other ICBT images and one EBRT image with the deformation registration model, and determining the deformation field corresponding to each image;
performing biological conversion on all dose values of the external irradiation treatment and the internal irradiation treatment, using the linear-quadratic model to convert the irradiated physical dose into the equivalent dose in 2 Gy fractions (EQD2);
and carrying out voxel-level point-to-point addition on the internal irradiation treatment dose distribution and the external irradiation dose distribution according to the deformation result to obtain the superimposed dose distribution.
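The biological conversion and voxel-level addition can be sketched as below. The linear-quadratic EQD2 formula is standard, but the fraction numbers and the α/β = 10 Gy value are illustrative assumptions (organs at risk are often evaluated with α/β = 3 Gy), and the tiny dose grids stand in for grids already mapped onto the reference CT by the deformation field:

```python
import numpy as np

def eqd2(physical_dose, n_fractions, alpha_beta=10.0):
    """Linear-quadratic conversion to the equivalent dose in 2 Gy fractions:
    EQD2 = D * (d + alpha/beta) / (2 + alpha/beta), with d = D / n."""
    d = physical_dose / n_fractions
    return physical_dose * (d + alpha_beta) / (2.0 + alpha_beta)

# toy 2x2x2 dose grids, assumed already deformed onto the reference CT
ebrt_dose = np.full((2, 2, 2), 45.0)   # e.g. 45 Gy in 25 fractions (assumed)
icbt_dose = np.full((2, 2, 2), 28.0)   # e.g. 28 Gy in 4 fractions (assumed)
total = eqd2(ebrt_dose, 25) + eqd2(icbt_dose, 4)  # voxel-level point-to-point sum
print(round(float(total[0, 0, 0]), 2))  # 83.92
```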
Optionally, the method further comprises:
performing DVH statistical analysis and dosiomics feature extraction for each organ at risk on the superimposed dose distribution;

using the DVH statistics, dosiomics features and clinical indices of the patient as input, and using a machine-learning-based classification model to construct a prediction model from feature parameters to toxicity response; the classification model includes, but is not limited to: SVM, Random Forest, or XGBoost;

and carrying out organ-at-risk toxicity response prediction according to the prediction model.
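As a stand-in for the SVM / Random Forest / XGBoost classifiers named above, a minimal from-scratch logistic regression shows the shape of the feature-parameters-to-toxicity prediction step; the two DVH-style feature columns and the labels are fabricated toy values for illustration only:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Gradient-descent logistic regression: feature parameters -> toxicity
    probability. A stand-in for the SVM / Random Forest / XGBoost models."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted toxicity probability
        grad = p - y
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def predict_toxicity(X, w, b):
    return (1.0 / (1.0 + np.exp(-(X @ w + b))) >= 0.5).astype(int)

# fabricated toy rows: [OAR V60 (%), OAR mean dose (Gy)] -> toxicity label
X = np.array([[5.0, 30.0], [8.0, 35.0], [40.0, 70.0], [45.0, 75.0]])
y = np.array([0, 0, 1, 1])
Xn = (X - X.mean(axis=0)) / X.std(axis=0)        # normalise feature parameters
w, b = fit_logistic(Xn, y)
assert (predict_toxicity(Xn, w, b) == y).all()   # separable toy set fits exactly
```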
The embodiment of the invention provides a dose superposition method based on deformation registration, which comprises the following steps: acquiring CT positioning images of the radiotherapy tumor; determining a deformation field based on a preset deformation registration model, and performing deformation registration on the CT positioning image of the radiotherapy tumor; realizing the dose superposition of radiotherapy according to the deformation field and a preset dose superposition method; the method can realize accurate evaluation of comprehensive radiation treatment dosage through the deformation registration model, and effectively avoid causing radioactive damage to organs.
The following describes in detail, with reference to fig. 5, a device capable of performing the deformation registration model training method according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of a training device for deformation registration model according to an embodiment of the present invention; as shown in fig. 5, the training device 50 includes:
a determining module 501, configured to determine a moving image and a reference image according to the acquired CT positioning image of the radiotherapy tumor;
the feature processing module 502 is configured to input the moving image and the reference image into the deformation registration network for feature extraction, and calculate correlation features;
a decoding module 503, configured to input the correlation feature into a decoder formed by the 3D convolutional neural network to obtain a deformation field;
the deformation module 504 is configured to spatially deform the moving image according to the deformation field to obtain a registered image;
the calculating module 505 is configured to perform structural similarity loss calculation according to the registered image and the reference image, determine a loss value, and complete training when the loss value is less than or equal to a loss value threshold.
Optionally, the feature processing module 502 is further configured to input the moving image and the reference image into a 3D convolutional neural network respectively for feature extraction to generate a first feature and a second feature; flatten the first feature and the second feature into one-dimensional form and add position encoding information in a pixel-by-pixel manner to obtain a first feature embedding vector and a second feature embedding vector; input the first feature embedding vector and the second feature embedding vector into a fully connected layer for feature dimension transformation; and perform correlation calculation on the dimension-transformed first and second feature embedding vectors according to a preset correlation calculation formula to obtain the correlation feature.
Optionally, the correlation calculation formula is as follows:

M_r = Softmax(Q_M · K_F^T / √d) · V_F

wherein M_r is the correlation feature, Softmax is the normalized exponential function, Q_M is the feature embedding vector corresponding to the features extracted from the moving image, used as the query vector, K_F and V_F are the feature embedding vectors corresponding to the features extracted from the reference image, used as the key vector and the value vector respectively, d is the dimension of the query vector, and K_F^T is the transpose of K_F.
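The correlation formula is ordinary scaled dot-product attention, which can be checked with a minimal NumPy sketch (the shapes, and the choice of letting V_F share the reference embedding, are illustrative assumptions rather than the patented network):

```python
import numpy as np

def correlation(q_m, k_f, v_f):
    """M_r = Softmax(Q_M K_F^T / sqrt(d)) V_F, softmaxed over the key axis."""
    d = q_m.shape[-1]                                # query vector dimension
    scores = q_m @ k_f.T / np.sqrt(d)                # (n_q, n_k) similarity logits
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)               # Softmax: each row sums to 1
    return w @ v_f                                   # correlation feature M_r

rng = np.random.default_rng(0)
q_m = rng.standard_normal((16, 32))  # moving-image feature embedding (queries)
k_f = rng.standard_normal((16, 32))  # reference-image feature embedding (keys)
m_r = correlation(q_m, k_f, k_f)     # values share the reference embedding here
```

Because the Softmax rows sum to one, feeding an all-ones value matrix returns an all-ones output, which is a quick sanity check on the normalization.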
The following describes in detail, with reference to fig. 6, a device provided in an embodiment of the present application, which may perform the dose stacking method based on deformation registration described above.
Fig. 6 is a schematic structural diagram of a dose stacking device based on deformation registration according to an embodiment of the present invention; as shown in fig. 6, the superimposing apparatus 60 includes:
an acquisition module 601, configured to acquire a CT positioning image of a radiotherapy tumor;
the deformation module 602 is configured to determine a deformation field based on a preset deformation registration model, and perform deformation registration on the radiotherapy tumor CT positioning image;
and the superposition module 603 is used for realizing the dose superposition of the radiotherapy according to the deformation field and a preset dose superposition method.
Optionally, the superposition module 603 is further configured to biologically convert all dose values of the external irradiation treatment and the internal irradiation treatment, converting the delivered physical dose into the equivalent dose in 2 Gy fractions (EQD2) using the linear-quadratic model; perform deformation registration on the radiotherapy tumor CT positioning image according to the deformation field; and perform voxel-level point-to-point addition on the dose distributions of the multiple radiotherapy courses according to the deformation result to obtain the superimposed dose distribution.
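The conversion step can be sketched with the standard linear-quadratic EQD2 formula, EQD2 = D · (d + α/β) / (2 + α/β), where D is the total physical dose and d the dose per fraction; the α/β ratio and fraction numbers below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def eqd2(total_dose, n_fractions, alpha_beta=3.0):
    """Equivalent dose in 2 Gy fractions for a per-voxel physical dose (Gy)."""
    d = total_dose / n_fractions                       # dose per fraction
    return total_dose * (d + alpha_beta) / (2.0 + alpha_beta)

# Two dose distributions already warped onto the common planning-CT frame.
course1 = np.full((4, 4, 4), 50.0)   # 50 Gy in 25 fractions (2 Gy/fx)
course2 = np.full((4, 4, 4), 30.0)   # 30 Gy in 10 fractions (3 Gy/fx)
total = eqd2(course1, 25) + eqd2(course2, 10)  # voxel-level point-to-point sum
```

Note that a 2 Gy-per-fraction course maps to itself (50 Gy stays 50 Gy), while the 3 Gy-per-fraction course is upweighted to 36 Gy, so the voxel-wise sum is 86 Gy everywhere in this toy volume.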
Optionally, the superimposing apparatus 60 further includes a prediction module, configured to perform DVH statistical analysis and dosiomics feature extraction for each organ at risk on the superimposed dose distribution; use the DVH statistics, dosiomics features, and clinical indicators of the patient as input, and use a machine-learning-based classification model to construct a prediction model from the feature parameters to toxicity response, where the classification model includes, but is not limited to, SVM, Random Forest, or XGBoost; and perform toxicity response prediction for the relevant organs at risk according to the prediction model.
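A toy sketch of that prediction module, using one of the named model families (Random Forest via scikit-learn); the specific DVH statistics, labels, and cohort data below are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def dvh_features(organ_dose):
    """A few common DVH statistics for one organ's voxel doses (Gy)."""
    return [organ_dose.mean(),            # Dmean
            organ_dose.max(),             # Dmax
            (organ_dose >= 20).mean()]    # V20: volume fraction receiving >= 20 Gy

rng = np.random.default_rng(1)
# Toy cohort: one organ at risk per patient, with made-up toxicity labels.
X = np.array([dvh_features(rng.uniform(0, 60, 1000)) for _ in range(40)])
y = rng.integers(0, 2, 40)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
risk = model.predict_proba(X[:1])         # [P(no toxicity), P(toxicity)]
```

In practice the feature matrix would also carry dosiomics descriptors and clinical indicators per patient, as the text describes; the classifier interface is the same.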
The embodiment of the present invention also provides a computer electronic device. Fig. 7 shows a schematic structural diagram of an electronic device to which the embodiment of the present invention can be applied. As shown in fig. 7, the electronic device includes a central processing unit (CPU) 701 that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage section 708 into a random access memory (RAM) 703. The RAM 703 also stores various programs and data required for system operation. The CPU 701, ROM 702, and RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output portion 707 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, a speaker, and the like; a storage section 708 including a hard disk or the like; and a communication section 709 including a network interface card such as a LAN card, a modem, or the like. The communication section 709 performs communication processing via a network such as the internet. The drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 710 as necessary, so that a computer program read therefrom is mounted into the storage section 708 as necessary.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules involved in the embodiments of the present invention may be implemented in software or in hardware. The described modules may also be provided in a processor, for example described as: a processor including a determining module 501, a feature processing module 502, a decoding module 503, a deformation module 504, and a calculating module 505, where the names of these modules do not in some cases limit the modules themselves; for example, the deformation module 504 may also be described as "a module for spatially deforming a moving image according to a deformation field to obtain a registered image".
As another aspect, the present invention further provides a computer-readable storage medium, which may be the computer-readable storage medium contained in the deformation registration model training device or the deformation-registration-based dose superposition device described in the above embodiments, or may be a standalone computer-readable storage medium that is not incorporated into an electronic device. The computer-readable storage medium stores one or more programs used by one or more processors to perform the deformation registration model training method or the deformation-registration-based dose superposition method described in the present invention.
The above description is only illustrative of the preferred embodiments of the present invention and of the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the invention is not limited to the specific combinations of the technical features described above, but also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example solutions in which the above features are replaced with technical features of similar function disclosed in (but not limited to) the present invention.
Claims (6)
1. A deformation registration model training method, the method comprising:
determining a moving image and a reference image according to the acquired CT positioning image of the radiotherapy tumor;
inputting the moving image and the reference image into a deformation registration network for feature extraction, and calculating correlation features;
inputting the correlation characteristics into a decoder formed by a 3D convolutional neural network to obtain a deformation field;
carrying out space deformation on the moving image according to the deformation field to obtain a registered image;
performing structural similarity loss calculation on the registered image and the reference image, determining a loss value, and completing training when the loss value is smaller than or equal to a loss value threshold;
inputting the moving image and the reference image into a deformation registration network for feature extraction, and calculating correlation features, wherein the method comprises the following steps:
respectively inputting the moving image and the reference image into a 3D convolutional neural network to perform feature extraction to generate a first feature and a second feature;
flattening the first feature and the second feature into one-dimensional form and adding position encoding information in a pixel-by-pixel addition mode to obtain a first feature embedding vector and a second feature embedding vector;
inputting the first feature embedded vector and the second feature embedded vector into a full connection layer to perform feature dimension transformation;
performing correlation calculation on the first feature embedded vector and the second feature embedded vector after dimension transformation according to a preset correlation calculation formula to obtain correlation features;
the correlation calculation formula is as follows:

M_r = Softmax(Q_M · K_F^T / √d) · V_F

wherein M_r is the correlation feature, Softmax is the normalized exponential function, Q_M is the feature embedding vector corresponding to the features extracted from the moving image, used as the query vector, K_F and V_F are the feature embedding vectors corresponding to the features extracted from the reference image, used as the key vector and the value vector respectively, d is the dimension of the query vector, and K_F^T is the transpose of K_F.
2. A dose stacking method based on deformation registration, the method comprising:
acquiring CT positioning images of the radiotherapy tumor;
determining a deformation field based on a preset deformation registration model, and performing deformation registration on the CT positioning image of the radiotherapy tumor;
realizing the dose superposition of radiotherapy according to the deformation field and a preset dose superposition method;
the method for realizing the dose superposition of radiation treatment according to the deformation field and the preset dose superposition method comprises the following steps:
performing biological conversion on all dose values of the external irradiation treatment and the internal irradiation treatment, and converting the delivered physical dose into the equivalent dose in 2 Gy fractions (EQD2) by using the linear-quadratic model;
carrying out deformation registration on the CT positioning image of the radiotherapy tumor according to the deformation field;
performing voxel-level point-to-point addition on the dose distributions of the multiple radiotherapy courses according to the deformation result to obtain the superimposed dose distribution;
the dose stacking method based on deformation registration further comprises the following steps:
performing DVH statistical analysis and dosiomics feature extraction for each organ at risk on the superimposed dose distribution;
using the DVH statistics, dosiomics features, and clinical indicators of the patient as input, and using a machine-learning-based classification model to construct a prediction model from the feature parameters to toxicity response; the classification model includes, but is not limited to: SVM, Random Forest, or XGBoost;
and performing toxicity response prediction for the relevant organs at risk according to the prediction model.
3. A deformation registration model training device, the device comprising:
the determining module is used for determining a moving image and a reference image according to the acquired CT positioning image of the radiotherapy tumor;
the feature processing module is used for inputting the moving image and the reference image into a deformation registration network for feature extraction and calculating correlation features;
the decoding module is used for inputting the correlation characteristics into a decoder formed by a 3D convolutional neural network to obtain a deformation field;
the deformation module is used for carrying out space deformation on the moving image according to the deformation field to obtain a registered image;
the calculation module is used for carrying out structural similarity loss calculation according to the registered image and the reference image, determining a loss value, and completing training when the loss value is smaller than or equal to a loss value threshold value;
the feature processing module is further used for respectively inputting the moving image and the reference image into a 3D convolutional neural network to perform feature extraction to generate a first feature and a second feature;
flattening the first feature and the second feature into one-dimensional form and adding position encoding information in a pixel-by-pixel addition mode to obtain a first feature embedding vector and a second feature embedding vector;
inputting the first feature embedded vector and the second feature embedded vector into a full connection layer to perform feature dimension transformation;
performing correlation calculation on the first feature embedded vector and the second feature embedded vector after dimension transformation according to a preset correlation calculation formula to obtain correlation features;
wherein, the correlation calculation formula is as follows:

M_r = Softmax(Q_M · K_F^T / √d) · V_F

where M_r is the correlation feature, Softmax is the normalized exponential function, Q_M is the feature embedding vector corresponding to the features extracted from the moving image, used as the query vector, K_F and V_F are the feature embedding vectors corresponding to the features extracted from the reference image, used as the key vector and the value vector respectively, d is the dimension of the query vector, and K_F^T is the transpose of K_F.
4. A dose stacking device based on deformation registration, the device comprising:
the acquisition module is used for acquiring CT positioning images of the radiotherapy tumor;
the deformation module is used for determining a deformation field based on a preset deformation registration model and carrying out deformation registration on the CT positioning image of the radiotherapy tumor;
the superposition module is used for realizing the dose superposition of the radiotherapy according to the deformation field and a preset dose superposition method;
the superposition module is also used for performing biological conversion on all dose values of the external irradiation treatment and the internal irradiation treatment, and converting the delivered physical dose into the equivalent dose in 2 Gy fractions (EQD2) by using the linear-quadratic model;
carrying out deformation registration on the CT positioning image of the radiotherapy tumor according to the deformation field;
performing voxel-level point-to-point addition on the dose distributions of the multiple radiotherapy courses according to the deformation result to obtain the superimposed dose distribution;
the dose superposition device based on deformation registration further comprises a prediction module, wherein the prediction module is used for carrying out DVH statistic analysis and dose histology feature extraction on each organ at risk on the dose distribution after superposition;
using DVH statistics, dose histology characteristics and clinical indexes of patients as input, and using a classification model based on machine learning to construct a prediction model from characteristic parameters to toxic response; the classification model includes, but is not limited to: SVM, random Forest or XGBoost;
and carrying out relevant toxic reaction prediction of the organs at risk according to the prediction model.
5. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program, the processor implementing the method of claim 1 or 2 when executing the computer program.
6. A computer-readable storage medium, characterized in that a computer program is stored, which, when being executed by a processor, implements the method according to claim 1 or 2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311160138.3A CN117197203B (en) | 2023-09-08 | 2023-09-08 | Deformation registration model training and dose stacking method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117197203A (en) | 2023-12-08
CN117197203B (en) | 2024-02-20
Family
ID=88984499
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311160138.3A Active CN117197203B (en) | 2023-09-08 | 2023-09-08 | Deformation registration model training and dose stacking method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117197203B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103226837A (en) * | 2013-05-21 | 2013-07-31 | 南方医科大学 | Method for generating distribution image used for observing cervix tumour radiotherapy total dose |
CN111192260A (en) * | 2020-01-03 | 2020-05-22 | 天津大学 | Melon quality detection method based on hyperspectral image depth feature fusion |
CN115359103A (en) * | 2022-08-24 | 2022-11-18 | 北京医智影科技有限公司 | Image registration network model and establishing method, device and medium thereof |
CN115410093A (en) * | 2022-08-31 | 2022-11-29 | 西安理工大学 | Remote sensing image classification method based on dual-channel coding network and conditional random field |
CN115738099A (en) * | 2021-09-03 | 2023-03-07 | 上海联影医疗科技股份有限公司 | Dose verification method and system |
CN115830016A (en) * | 2023-02-09 | 2023-03-21 | 真健康(北京)医疗科技有限公司 | Medical image registration model training method and equipment |
CN116012344A (en) * | 2023-01-29 | 2023-04-25 | 东北林业大学 | Cardiac magnetic resonance image registration method based on mask self-encoder CNN-transducer |
CN116309754A (en) * | 2023-03-29 | 2023-06-23 | 重庆邮电大学 | Brain medical image registration method and system based on local-global information collaboration |
CN116485853A (en) * | 2023-04-14 | 2023-07-25 | 深圳技术大学 | Medical image registration method and device based on deep learning neural network |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102006011022A1 (en) * | 2006-03-09 | 2007-10-25 | Netviewer Gmbh | Two-dimensional adaptive image compression method |
US10699410B2 (en) * | 2017-08-17 | 2020-06-30 | Siemens Healthcare GmbH | Automatic change detection in medical images
Non-Patent Citations (1)
Title |
---|
Research on coseismic, interseismic and post-seismic deformation based on high-precision InSAR image registration; Guo Xiaotong; China Masters' Theses Full-text Database (Basic Sciences); full text *
Also Published As
Publication number | Publication date |
---|---|
CN117197203A (en) | 2023-12-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11944463B2 (en) | Pseudo-CT generation from MR data using a feature regression model | |
Chen et al. | U‐net‐generated synthetic CT images for magnetic resonance imaging‐only prostate intensity‐modulated radiation therapy treatment planning | |
CN108778416B (en) | Systems, methods, and media for pseudo-CT generation from MR data using tissue parameter estimation | |
Peng et al. | A method of rapid quantification of patient‐specific organ doses for CT using deep‐learning‐based multi‐organ segmentation and GPU‐accelerated Monte Carlo dose computing | |
US20190318474A1 (en) | Image synthesis using adversarial networks such as for radiation therapy | |
CN113129308B (en) | Image segmentation method, apparatus and non-transitory computer readable storage medium | |
Chen et al. | MR image‐based synthetic CT for IMRT prostate treatment planning and CBCT image‐guided localization | |
US9256965B2 (en) | Method and apparatus for generating a derived image using images of different types | |
Fetty et al. | Investigating conditional GAN performance with different generator architectures, an ensemble model, and different MR scanners for MR-sCT conversion | |
Montoya et al. | Reconstruction of three‐dimensional tomographic patient models for radiation dose modulation in CT from two scout views using deep learning | |
Salehi et al. | Deep learning-based non-rigid image registration for high-dose rate brachytherapy in inter-fraction cervical cancer | |
Vazquez et al. | A deep learning-based approach for statistical robustness evaluation in proton therapy treatment planning: a feasibility study | |
CN117197203B (en) | Deformation registration model training and dose stacking method and device | |
CN116563402A (en) | Cross-modal MRI-CT image synthesis method, system, equipment and medium | |
Zeng et al. | TransQA: deep hybrid transformer network for measurement-guided volumetric dose prediction of pre-treatment patient-specific quality assurance | |
US20230065196A1 (en) | Patient-specific organ dose quantification and inverse optimization for ct | |
CN114913261A (en) | Three-dimensional in-vivo dose reconstruction method and device based on deep neural network | |
US20230281842A1 (en) | Generation of 3d models of anatomical structures from 2d radiographs | |
Charters | Automated Patient Safety Management and Quality Control in Radiation Therapy | |
Wen | RETRACTED ARTICLE: Application of Monte Carlo calculation method based on special graph in medical imaging | |
Ganß et al. | Deep Learning Approaches for Contrast Removal from Contrast-enhanced CT: Streamlining Personalized Internal Dosimetry | |
Gay et al. | Identifying the optimal deep learning architecture and parameters for automatic beam aperture definition in 3D radiotherapy | |
Sreeja et al. | Pseudo computed tomography image generation from brain magnetic resonance image using integration of PCA & DCNN-UNET: A comparative analysis | |
Zamanian et al. | Nested CNN architecture for three-dimensional dose distribution prediction in tomotherapy for prostate cancer | |
CN118154587A (en) | Quality control method for MRI-only radiotherapy |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||