CN117197203A - Deformation registration model training and dose stacking method and device

Publication number: CN117197203A
Authority: CN (China)
Prior art keywords: deformation, dose, feature, image, registration
Legal status: Granted; Active
Application number: CN202311160138.3A
Other languages: Chinese (zh)
Other versions: CN117197203B
Inventors: 陈颀, 王少彬, 白璐, 崔昕华
Current Assignee: Beijing Yizhiying Technology Co ltd
Original Assignee: Beijing Yizhiying Technology Co ltd
Application filed by Beijing Yizhiying Technology Co ltd
Abstract

The application provides a deformation registration model training method, a dose superposition method, and a device, belonging to the field of artificial intelligence and comprising the following steps: determining a moving image and a reference image from the acquired radiotherapy tumor CT positioning images; inputting the moving image and the reference image into a deformation registration network for feature extraction and calculating correlation features; inputting the correlation features into a decoder formed by a 3D convolutional neural network to obtain a deformation field; spatially deforming the moving image according to the deformation field to obtain a registered image; performing structural similarity loss calculation on the registered image and the reference image, and obtaining the deformation registration model when the loss value meets the condition; determining a deformation field based on the obtained deformation registration model and performing deformation registration on the radiotherapy tumor CT positioning images; and realizing radiotherapy dose superposition according to the deformation field and a preset dose superposition method. The application can improve the accuracy of deformation registration and thereby achieve accurate evaluation of the comprehensive radiotherapy dose.

Description

Deformation registration model training and dose stacking method and device
Technical Field
The application relates to the technical field of artificial intelligence, in particular to a deformation registration model training and dose stacking method, device, equipment and storage medium.
Background
At present, tumor treatment mainly comprises surgery, chemotherapy, and radiotherapy; radiotherapy is a treatment mode that destroys cell structures with high-energy rays so as to kill tumors. Tumor radiotherapy can involve designing treatment plans multiple times, including two-way radiotherapy, multi-center radiotherapy, combined internal and external irradiation, and the like. Because the body position in the positioning images and the internal tissue morphology of the patient differ between irradiation sessions, the dose distributions produced by plans designed on the multiple positioning images cannot be directly added for comprehensive dose evaluation.
Taking cervical cancer radiotherapy as an example, most patients require a reasonable combination of internal and external irradiation. The internal and external irradiation images differ greatly because of different body positions, different dose fractionation, and factors such as tumor regression, bladder filling, and applicators during treatment; consequently, the comprehensive internal-plus-external dose delivered to the same patient is difficult to evaluate, and radiation injury to normal organs cannot be accurately assessed.
Disclosure of Invention
The application provides a deformation registration model training and dose superposition method, device, equipment and storage medium, which can improve the accuracy of deformation registration of a CT positioning image of a radiotherapy tumor and realize accurate evaluation of comprehensive radiation treatment dose through the deformation registration model.
In a first aspect, an embodiment of the present application provides a deformation registration model training method, including:
determining a moving image and a reference image according to the acquired CT positioning image of the radiotherapy tumor;
inputting the moving image and the reference image into a deformation registration network for feature extraction, and calculating correlation features;
inputting the correlation characteristics into a decoder formed by a 3D convolutional neural network to obtain a deformation field;
carrying out space deformation on the moving image according to the deformation field to obtain a registered image;
and performing structural similarity loss calculation on the registered image and the reference image, determining a loss value, and completing training when the loss value is less than or equal to a loss value threshold.
Optionally, inputting the moving image and the reference image into a deformation registration network for feature extraction, and calculating the correlation feature includes:
respectively inputting the moving image and the reference image into a 3D convolutional neural network to perform feature extraction to generate a first feature and a second feature;
stretching the first feature and the second feature into a one-dimensional form and merging the first feature embedded vector and the second feature embedded vector into position coding information in a pixel-by-pixel addition mode to obtain a first feature embedded vector and a second feature embedded vector;
inputting the first feature embedded vector and the second feature embedded vector into a full connection layer to perform feature dimension transformation;
and carrying out correlation calculation on the first feature embedded vector and the second feature embedded vector after the dimension transformation according to a preset correlation calculation formula to obtain correlation features.
Optionally, the correlation calculation formula is as follows:

M_r = Softmax(Q_M · K_F^T / √d) · V_F

where M_r is the correlation feature, Softmax is the normalized exponential function, the feature embedding vector corresponding to the features extracted from the moving image serves as the query vector Q_M, the feature embedding vector corresponding to the features extracted from the reference image serves as the key vector K_F and the value vector V_F, d is the dimension of the query vector, and K_F^T is the transpose of K_F.
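As an illustrative sketch only (not the patented implementation; the token counts and dimension are arbitrary), the scaled dot-product correlation above can be expressed in a few lines of NumPy:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def correlation_features(q_m, k_f, v_f):
    """Scaled dot-product correlation: M_r = Softmax(Q_M K_F^T / sqrt(d)) V_F."""
    d = q_m.shape[-1]
    attn = softmax(q_m @ k_f.T / np.sqrt(d))  # one attention row per moving-image token
    return attn @ v_f

# toy example: 4 moving-image tokens, 5 reference-image tokens, d = 8
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))  # query vectors Q_M (moving image)
k = rng.normal(size=(5, 8))  # key vectors K_F (reference image)
v = rng.normal(size=(5, 8))  # value vectors V_F (reference image)
m_r = correlation_features(q, k, v)
print(m_r.shape)  # (4, 8): one correlation feature per moving-image token
```

Each output row is a convex combination of reference-image value vectors, weighted by the moving-image token's similarity to every reference-image key.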
In a second aspect, embodiments of the present application provide a dose stacking method based on deformation registration, the method comprising:
acquiring CT positioning images of the radiotherapy tumor;
determining a deformation field based on a preset deformation registration model, and performing deformation registration on the CT positioning image of the radiotherapy tumor;
and realizing the dose superposition of the radiotherapy according to the deformation field and a preset dose superposition method.
Optionally, the dose superposition of the radiotherapy is realized according to the deformation field and a preset dose superposition method, which comprises the following steps:
performing biological conversion on all dose values of the external irradiation treatment and the internal irradiation treatment, using the linear-quadratic model to convert the delivered physical dose into the equivalent dose in 2 Gy fractions (EQD2);
carrying out deformation registration on the CT positioning image of the radiotherapy tumor according to the deformation field;
and carrying out voxel-level point-to-point addition on the treatment dose distributions of the multiple radiotherapy sessions according to the deformation result to obtain the superimposed dose distribution.
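The dose superposition steps above can be sketched roughly as follows; the α/β value and the fractionation numbers are hypothetical examples, not values prescribed by the application:

```python
import numpy as np

def eqd2(total_dose, dose_per_fraction, alpha_beta):
    """Linear-quadratic conversion of a physical dose to the equivalent dose
    in 2 Gy fractions: EQD2 = D * (d + alpha/beta) / (2 + alpha/beta)."""
    return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

# hypothetical example: 28 Gy brachytherapy in 4 fractions of 7 Gy, alpha/beta = 10 Gy
boosted = eqd2(28.0, 7.0, 10.0)
print(boosted)  # ~39.67 Gy in EQD2 terms

# voxel-level point-to-point addition of two registered EQD2 dose grids (toy 2x2x2 grids)
ebrt_dose = np.full((2, 2, 2), 1.8)  # registered external-beam dose grid (Gy, EQD2)
icbt_dose = np.full((2, 2, 2), 0.5)  # registered brachytherapy dose grid (Gy, EQD2)
total = ebrt_dose + icbt_dose        # superimposed dose distribution
```

The conversion must precede the addition: physical doses from different fractionation schemes are not directly additive, but EQD2 values are.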
Optionally, the method further comprises:
performing DVH statistical analysis and dosiomics (dose-omics) feature extraction for each organ at risk on the superimposed dose distribution;
using DVH statistics, dosiomics features, and clinical indexes of patients as input, and using a machine-learning-based classification model to construct a prediction model from feature parameters to toxic response; the classification model includes, but is not limited to: SVM, Random Forest, or XGBoost;
and carrying out toxic-response prediction according to the prediction model.
In a third aspect, an embodiment of the present application provides a deformation registration model training apparatus, including:
the determining module is used for determining a moving image and a reference image according to the acquired CT positioning image of the radiotherapy tumor;
the feature processing module is used for inputting the moving image and the reference image into the deformation registration network to perform feature extraction and calculate correlation features;
the decoding module is used for inputting the correlation characteristics into a decoder formed by the 3D convolutional neural network to obtain a deformation field;
the deformation module is used for carrying out space deformation on the moving image according to the deformation field to obtain a registered image;
and the calculation module is used for carrying out structural similarity loss calculation according to the registered image and the reference image, determining a loss value, and completing training when the loss value is smaller than or equal to a loss value threshold value.
In a fourth aspect, embodiments of the present application provide a dose stacking device based on deformation registration, the device comprising:
the acquisition module is used for acquiring CT positioning images of the radiotherapy tumor;
the deformation module is used for determining a deformation field based on a preset deformation registration model and carrying out deformation registration on the CT positioning image of the radiotherapy tumor;
and the superposition module is used for realizing the dose superposition of the radiotherapy according to the deformation field and a preset dose superposition method.
In a fifth aspect, an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory stores a computer program, and the processor implements the method according to any implementation manner of the first aspect when executing the program.
In a sixth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method according to any of the implementations of the first aspect.
The application provides a deformation registration model training and dose stacking method, device and equipment and a storage medium, comprising the following steps: dividing the acquired CT positioning image of the radiotherapy tumor into a moving image and a reference image; inputting the moving image and the reference image into a deformation registration network for feature extraction, and calculating correlation features; inputting the correlation characteristics into a decoder formed by a 3D convolutional neural network to obtain a deformation field; carrying out space deformation on the moving image according to the deformation field to obtain a registered image; performing structural similarity loss calculation on the registered image and the reference image, determining a loss value, and completing training when the loss value is smaller than or equal to a loss value threshold; determining a deformation field based on a deformation registration model obtained through training, and carrying out deformation registration on the CT positioning image of the radiotherapy tumor; and realizing the dose superposition of the radiotherapy according to the deformation field and a preset dose superposition method. The application can improve the accuracy of the deformation registration of the CT positioning image of the radiotherapy tumor, and realizes the accurate evaluation of the comprehensive radiation treatment dosage through the deformation registration model.
It should be understood that the description in this summary is not intended to limit the critical or essential features of the embodiments of the application, nor is it intended to limit the scope of the application. Other features of the present application will become apparent from the description that follows.
Drawings
The above and other features, advantages and aspects of embodiments of the present application will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, the same or similar reference numerals denote the same or similar elements.
FIG. 1 is a flowchart of a deformation registration model training method according to an embodiment of the present application;
FIG. 2 is a diagram of a deformation registration model architecture according to an embodiment of the present application;
FIG. 3 is a flow chart of a dose stacking method based on deformation registration according to an embodiment of the present application;
FIG. 4 is a flow chart of another method of dose stacking based on deformation registration in accordance with an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a training device for a deformation registration model according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a dose stacking device based on deformation registration according to an embodiment of the present application;
fig. 7 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to enable a person skilled in the art to better understand the technical solutions in one or more embodiments of the present specification, the technical solutions in one or more embodiments of the present specification will be clearly and completely described below with reference to the drawings in one or more embodiments of the present specification, and it is obvious that the described embodiments are only some embodiments of the present specification, not all embodiments. All other embodiments, which can be made by one or more embodiments of the present disclosure without inventive faculty, are intended to be within the scope of the present disclosure.
It should be noted that, the description of the embodiment of the present application is only for the purpose of more clearly describing the technical solution of the embodiment of the present application, and does not constitute a limitation on the technical solution provided by the embodiment of the present application.
Fig. 1 is a flowchart of a deformation registration model training method according to an embodiment of the present application. As shown in fig. 1, includes:
optionally, the deformation registration model is obtained by training an image registration network based on a hybrid Transformer structure;
s101, determining a moving image and a reference image according to the acquired CT positioning image of the radiotherapy tumor.
Optionally, the radiotherapy tumor CT positioning images comprise internal irradiation CT images and external irradiation CT images.
For example, the internal irradiation CT image may be set as the moving image and the external irradiation CT image as the reference image; or the external irradiation CT image may be set as the moving image and the internal irradiation CT image as the reference image; or two internal irradiation CT images may serve as the moving image and the reference image; or two external irradiation CT images may serve as the moving image and the reference image.
It should be noted that the determination of the moving image and the reference image is not limited to the scheme in this embodiment; any scheme capable of determining a deformation field may be applied.
S102, inputting the moving image and the reference image into a deformation registration network for feature extraction, and calculating correlation features.
Optionally, the method specifically includes:
respectively inputting the moving image and the reference image into a 3D convolutional neural network to perform feature extraction to generate a first feature and a second feature;
illustratively, the spatial resolution of the feature widths and heights of the first and second features may be 1/16 of the original image.
Flattening the first feature and the second feature into one-dimensional form, and merging in position-encoding information by pixel-wise addition, to obtain a first feature embedding vector and a second feature embedding vector;
inputting the first feature embedded vector and the second feature embedded vector into a full connection layer to perform feature dimension transformation;
illustratively, the feature dimension may be transformed to 1×d;
and carrying out correlation calculation on the first feature embedded vector and the second feature embedded vector after the dimension transformation according to a preset correlation calculation formula to obtain correlation features.
Optionally, the correlation calculation formula is as follows:

M_r = Softmax(Q_M · K_F^T / √d) · V_F

where M_r is the correlation feature, Softmax is the normalized exponential function, the feature embedding vector corresponding to the features extracted from the moving image serves as the query vector Q_M, the feature embedding vector corresponding to the features extracted from the reference image serves as the key vector K_F and the value vector V_F, d is the dimension of the query vector, and K_F^T is the transpose of K_F.
S103, inputting the correlation characteristic into a decoder formed by the 3D convolutional neural network to obtain a deformation field.
S104, carrying out space deformation on the moving image according to the deformation field to obtain a registered image.
Optionally, after the spatial deformation, the moving image and the reference image share a unified coordinate system.
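A minimal sketch of applying a deformation field to a moving image, assuming the field stores a per-voxel displacement and using nearest-neighbor sampling (practical systems typically use trilinear interpolation; `warp_nearest` and the toy volume are illustrative, not from the application):

```python
import numpy as np

def warp_nearest(moving, disp):
    """Warp a 3D volume with a per-voxel displacement field, nearest-neighbor
    sampling: registered[z, y, x] = moving[(z, y, x) + disp[z, y, x]]."""
    shape = moving.shape
    zz, yy, xx = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    coords = np.stack([zz, yy, xx], axis=-1) + np.round(disp).astype(int)
    for axis in range(3):
        # clamp sampling coordinates to the volume bounds
        coords[..., axis] = np.clip(coords[..., axis], 0, shape[axis] - 1)
    return moving[coords[..., 0], coords[..., 1], coords[..., 2]]

# an all-zero displacement field is the identity transform
vol = np.arange(27.0).reshape(3, 3, 3)
same = warp_nearest(vol, np.zeros((3, 3, 3, 3)))
```

A non-zero field shifts voxel intensities: a unit displacement along the last axis samples each voxel from its right-hand neighbor.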
S105, carrying out structural similarity loss calculation on the registered image and the reference image, determining a loss value, and completing training when the loss value is smaller than or equal to a loss value threshold.
Optionally, if the loss value is greater than the loss value threshold, training continues until the loss value is less than or equal to the loss value threshold or the training iteration threshold is reached, at which point training ends.
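A rough single-window sketch of the structural similarity computation used as the training loss; the application does not specify the SSIM variant, window, or constants, so the common C1/C2 constants are assumed here:

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Global (single-window) structural similarity; equals 1.0 for identical images."""
    c1 = (0.01 * data_range) ** 2  # standard stabilizing constants (assumed)
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def ssim_loss(registered, reference):
    # loss is zero when the registered image matches the reference exactly
    return 1.0 - ssim_global(registered, reference)

img = np.random.default_rng(1).random((8, 8, 8))
```

Training would stop once `ssim_loss` drops to or below the chosen loss value threshold; windowed SSIM averaged over local patches is the more common practical choice.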
Illustratively, fig. 2 is a deformation registration model structure diagram according to an embodiment of the present application, as shown in fig. 2:
Firstly, features are extracted from the moving image and the reference image by two mutually independent 3D convolutional neural networks, generating three-dimensional features F_M and F_F respectively (the spatial resolution of the feature width and height is 1/16 of the original image).
Then, to adapt to the subsequent processing flow of the Transformer structure, the generated three-dimensional features F_M and F_F are flattened into one-dimensional form, and the position-encoding information of the features is merged in by pixel-wise addition, thereby obtaining the feature embedding vectors required for processing by the Transformer structure; the feature dimension is then transformed to 1×d by a fully connected layer.
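The flattening and position-encoding step can be sketched as follows; the sinusoidal encoding is an illustrative assumption, since the application does not specify the encoding scheme:

```python
import numpy as np

def flatten_with_position(feat3d):
    """Flatten a (D, H, W, C) feature volume into (D*H*W, C) tokens and add a
    sinusoidal position encoding pixel by pixel (encoding scheme assumed)."""
    tokens = feat3d.reshape(-1, feat3d.shape[-1])
    n, c = tokens.shape
    pos = np.arange(n)[:, None] / np.power(10000, np.arange(c)[None, :] / c)
    # even channels get sine, odd channels cosine
    pe = np.where(np.arange(c) % 2 == 0, np.sin(pos), np.cos(pos))
    return tokens + pe

feat = np.random.default_rng(2).random((2, 2, 2, 6))  # toy 2x2x2 volume, 6 channels
emb = flatten_with_position(feat)
print(emb.shape)  # (8, 6): one embedding vector per voxel
```

The pixel-wise addition preserves the token count and channel dimension, so the fully connected layer that follows only has to map the channel dimension to d.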
Based on the Transformer attention mechanism, the feature embedding vector corresponding to the extracted features F_M of the moving image is used as the query vector Q_M, and the feature embedding vector corresponding to the extracted features F_F of the reference image is used as the key vector K_F and the value vector V_F; pixel-by-pixel correlation between the features of the reference image and the moving image is then calculated using the following mixed attention mechanism:

M_r = Softmax(Q_M · K_F^T / √d) · V_F

where M_r is the correlation feature, Softmax is the normalized exponential function, d is the dimension of the query vector, and K_F^T is the transpose of K_F.
The correlation feature M_r calculated by the above formula is input to a decoder formed by a 3D convolutional neural network, thereby predicting the deformation field; the moving image is then spatially deformed using the predicted deformation field to obtain the registered image.
Optionally, the method further comprises:
evaluating the accuracy of the deformation registration algorithm according to the Dice similarity coefficient (DSC);
wherein DSC is a measure of the overlap between two sets, commonly used to estimate the similarity of two segmentations; its value ranges from 0 to 1 (0 means no overlap, 1 means complete overlap).
Specifically, using the associated organ-at-risk (OAR) segmentation contours of each external beam radiotherapy (EBRT) image and intracavitary brachytherapy (ICBT) image, the average DSC values over all patients in the test set can be calculated for different image registration methods; for a pair of internal and external irradiation images, the OAR contour of the external irradiation image may be used as the gold standard, and the deformed OAR contour of the internal irradiation image is compared against it to calculate the DSC value.
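The DSC described above can be sketched as follows (illustrative only; `dice` and the toy masks are not from the application):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient of two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect overlap
    return 2.0 * np.logical_and(a, b).sum() / denom

# toy 4x4 masks: 8 voxels each, 4 voxels of overlap
a = np.zeros((4, 4), dtype=bool); a[:2] = True
b = np.zeros((4, 4), dtype=bool); b[1:3] = True
print(dice(a, b))  # 0.5
```

Applied per OAR, the deformed internal-irradiation contour would be `mask_a` and the gold-standard external-irradiation contour `mask_b`.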
The embodiment of the application provides a deformation registration model training method. First, two independent 3D convolutional feature extraction networks are used; exploiting the local receptive field of convolution kernels, they learn voxel-level features specific to the moving image and the reference image respectively, extracting discriminative local features for each voxel of both images. Then, based on the Transformer attention mechanism, each voxel feature of the moving image is used as a query vector and each voxel feature of the reference image is used as a key vector and a value vector, forming a mixed form of attention modeling and carrying out voxel-by-voxel relation learning between every voxel of the moving image and every voxel of the reference image. This technical scheme can learn discriminative voxel-level features for each image separately, avoiding the difficulty a single feature extraction structure has in separating the voxel features of the moving image and the reference image, as well as the problems of a pure Transformer structure: inability to effectively model voxel-level local features, large model scale, high training resource consumption, and slow inference. On the other hand, the hybrid Transformer formed by the Transformer attention mechanism explicitly models the voxel-level correlation learning between the moving image and the reference image during deformation registration, and the global receptive field of the Transformer further improves the discriminability of the learned registration features, thereby effectively improving deformation registration accuracy.
Fig. 3 is a flowchart of a dose stacking method based on deformation registration according to an embodiment of the present application. As shown in fig. 3, includes:
s301, acquiring CT positioning images of the radiotherapy tumor.
Optionally, the radiotherapy tumor CT localization image comprises an inner illumination CT image and an outer illumination CT image.
S302, determining a deformation field based on a preset deformation registration model, and performing deformation registration on the CT positioning image of the radiotherapy tumor.
Optionally, for the training of the deformation registration model, reference may be made to steps S101-S105, which are not described here again.
S303, realizing the dose superposition of the radiotherapy according to the deformation field and a preset dose superposition method.
Optionally, the method comprises:
all dose values of external beam radiotherapy (EBRT) and intracavitary brachytherapy (ICBT) are biologically converted, using the linear-quadratic model to convert the delivered physical dose into the equivalent dose in 2 Gy fractions (EQD2);
carrying out deformation registration on the CT positioning image of the radiotherapy tumor according to the deformation field;
and carrying out voxel-level point-to-point addition on the dose distributions of the multiple radiotherapy sessions according to the deformation result to obtain the superimposed dose distribution.
Illustratively, fig. 4 is a flowchart of another dose stacking method based on deformation registration according to an embodiment of the present application, as shown in fig. 4:
using ICBT5 as the reference image, deformation registration is performed on the four ICBT images and one EBRT image by the deformation registration model, and the deformation field corresponding to each image is determined;
performing biological conversion on all dose values of the external irradiation treatment and the internal irradiation treatment, using the linear-quadratic model to convert the delivered physical dose into the equivalent dose in 2 Gy fractions (EQD2);
and carrying out voxel-level point-to-point addition on the internal irradiation treatment dose distribution and the external irradiation dose distribution according to the deformation result to obtain the superimposed dose distribution.
Optionally, the method further comprises:
performing DVH statistical analysis and dosiomics (dose-omics) feature extraction for each organ at risk on the superimposed dose distribution;
using DVH statistics, dosiomics features, and clinical indexes of patients as input, and using a machine-learning-based classification model to construct a prediction model from feature parameters to toxic response; the classification model includes, but is not limited to: SVM, Random Forest, or XGBoost;
and predicting the toxic responses relevant to the organs at risk according to the prediction model.
The embodiment of the application provides a dose superposition method based on deformation registration, which comprises the following steps: acquiring CT positioning images of the radiotherapy tumor; determining a deformation field based on a preset deformation registration model, and performing deformation registration on the CT positioning image of the radiotherapy tumor; realizing the dose superposition of radiotherapy according to the deformation field and a preset dose superposition method; the method can realize accurate evaluation of comprehensive radiation treatment dosage through the deformation registration model, and effectively avoid causing radioactive damage to organs.
The following describes in detail the apparatus provided by the embodiment of the present application, which can execute the deformation registration model training method described above, with reference to fig. 5.
Fig. 5 is a schematic structural diagram of a training device for deformation registration model according to an embodiment of the present application; as shown in fig. 5, the training device 50 includes:
a determining module 501, configured to determine a moving image and a reference image according to the acquired CT positioning image of the radiotherapy tumor;
the feature processing module 502 is configured to input the moving image and the reference image into the deformation registration network for feature extraction, and calculate correlation features;
a decoding module 503, configured to input the correlation feature into a decoder formed by the 3D convolutional neural network to obtain a deformation field;
the deformation module 504 is configured to spatially deform the moving image according to the deformation field to obtain a registered image;
the calculating module 505 is configured to perform structural similarity loss calculation according to the registered image and the reference image, determine a loss value, and complete training when the loss value is less than or equal to a loss value threshold.
Optionally, the feature processing module 502 is further configured to: input the moving image and the reference image respectively into a 3D convolutional neural network for feature extraction to generate a first feature and a second feature; flatten the first feature and the second feature into one-dimensional form, and merge in position-encoding information by pixel-wise addition, to obtain a first feature embedding vector and a second feature embedding vector; input the first feature embedding vector and the second feature embedding vector into a fully connected layer for feature dimension transformation; and perform correlation calculation on the dimension-transformed first and second feature embedding vectors according to a preset correlation calculation formula to obtain the correlation features.
Optionally, the correlation calculation formula is as follows:

M_r = Softmax(Q_M · K_F^T / √d) · V_F

where M_r is the correlation feature, Softmax is the normalized exponential function, the feature embedding vector corresponding to the features extracted from the moving image serves as the query vector Q_M, the feature embedding vector corresponding to the features extracted from the reference image serves as the key vector K_F and the value vector V_F, d is the dimension of the query vector, and K_F^T is the transpose of K_F.
The following describes in detail, with reference to fig. 6, a device provided by an embodiment of the present application, which can perform the dose stacking method based on deformation registration.
Fig. 6 is a schematic structural diagram of a dose stacking device based on deformation registration according to an embodiment of the present application; as shown in fig. 6, the superimposing apparatus 60 includes:
an acquisition module 601, configured to acquire a CT positioning image of a radiotherapy tumor;
the deformation module 602 is configured to determine a deformation field based on a preset deformation registration model, and perform deformation registration on the radiotherapy tumor CT positioning image;
and the superposition module 603 is used for realizing the dose superposition of the radiotherapy according to the deformation field and a preset dose superposition method.
Optionally, the superposition module 603 is further configured to biologically convert all dose values of the external-beam and internal irradiation treatments, using the linear-quadratic model to convert the delivered physical dose into the equivalent dose in 2 Gy fractions (EQD2); perform deformation registration on the radiotherapy tumor CT positioning images according to the deformation field; and perform voxel-level point-to-point addition of the dose distributions of the multiple radiotherapy sessions according to the deformation result to obtain the superimposed dose distribution.
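The dose-superposition steps above can be sketched as follows: the standard linear-quadratic EQD2 conversion, then voxel-wise summation of already-registered dose grids. The function names and the default α/β value are illustrative assumptions, not specified by the patent.

```python
import numpy as np

def eqd2(physical_dose, dose_per_fraction, alpha_beta=3.0):
    """Equivalent dose in 2 Gy fractions via the linear-quadratic model.

    EQD2 = D * (d + alpha/beta) / (2 + alpha/beta), applied voxel-wise.
    physical_dose: total physical dose array (Gy); dose_per_fraction: d (Gy);
    alpha_beta: tissue-specific alpha/beta ratio (Gy).
    """
    return physical_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

def accumulate(dose_maps):
    """Voxel-level point-to-point addition of already-registered dose grids."""
    return np.sum(np.stack(dose_maps), axis=0)
```

Note that when the fraction size is exactly 2 Gy the conversion is the identity, as expected from the definition of EQD2.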
Optionally, the superimposing apparatus 60 further includes a prediction module, configured to perform DVH statistical analysis and dosiomics feature extraction for each organ at risk on the superimposed dose distribution; take the DVH statistics, dosiomics features and clinical indicators of the patient as input and build a prediction model from feature parameters to toxicity using a machine-learning classification model, which includes but is not limited to SVM, Random Forest or XGBoost; and predict the related toxicity of the organs at risk according to the prediction model.
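The DVH statistics that feed the toxicity classifier can be sketched as below. This is a minimal illustration of common DVH metrics (Dmean, Dmax, Vx, Dx%); the exact statistics and dosiomics features used by the patent are not specified, so the names and thresholds here are assumptions.

```python
import numpy as np

def dvh_metrics(organ_dose, v_thresholds=(20.0, 30.0), d_percents=(2.0, 50.0)):
    """Simple DVH statistics for one organ-at-risk dose array (Gy).

    Vx: fraction of the organ volume receiving >= x Gy.
    Dx%: minimum dose received by the hottest x% of the volume.
    """
    dose = np.asarray(organ_dose, dtype=float).ravel()
    stats = {"Dmean": float(dose.mean()), "Dmax": float(dose.max())}
    for x in v_thresholds:
        stats[f"V{x:g}"] = float((dose >= x).mean())
    sorted_desc = np.sort(dose)[::-1]
    for p in d_percents:
        idx = max(int(np.ceil(p / 100.0 * dose.size)) - 1, 0)
        stats[f"D{p:g}%"] = float(sorted_desc[idx])
    return stats
```

The resulting dictionary, concatenated with dosiomics and clinical features, would form one row of the classifier's input matrix.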
The embodiment of the present application also provides an electronic device. Fig. 7 shows a schematic structural diagram of an electronic device to which the embodiment of the present application can be applied. As shown in fig. 7, the electronic device includes a central processing unit (CPU) 701 that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage section 708 into a random access memory (RAM) 703. The RAM 703 also stores various programs and data required for system operation. The CPU 701, ROM 702 and RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
The following components are connected to the I/O interface 705: an input section 706 including a keyboard, a mouse, and the like; an output section 707 including a cathode ray tube (CRT) or liquid crystal display (LCD), a speaker, and the like; a storage section 708 including a hard disk or the like; and a communication section 709 including a network interface card such as a LAN card or a modem. The communication section 709 performs communication processing via a network such as the Internet. A drive 710 is also connected to the I/O interface 705 as needed. A removable medium 711, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the drive 710 as necessary, so that a computer program read therefrom is installed into the storage section 708 as needed.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules involved in the embodiments of the present application may be implemented in software or in hardware. The described modules may also be provided in a processor, for example described as: a processor includes a determining module 501, a feature processing module 502, a decoding module 503, a deforming module 504, and a calculating module 505, where the names of these modules do not in some cases limit the modules themselves; for example, the deforming module 504 may also be described as "a module for spatially deforming a moving image according to a deformation field to obtain a registered image".
As another aspect, the present application further provides a computer readable storage medium, which may be a computer readable storage medium contained in a deformation registration model training device or a dose superimposing device based on deformation registration as described in the above embodiments; or may be a computer-readable storage medium, alone, that is not incorporated into an electronic device. The computer readable storage medium stores one or more programs for use by one or more processors to perform a deformation registration model training method or a deformation registration-based dose stacking method described in the present application.
The above description is only illustrative of the preferred embodiments of the present application and of the principles of the technology employed. It will be appreciated by persons skilled in the art that the scope of the application is not limited to the specific combinations of the technical features described above, but also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, solutions in which the above features are replaced with technical features of similar functions disclosed in (but not limited to) the present application.

Claims (10)

1. A deformation registration model training method, the method comprising:
determining a moving image and a reference image according to the acquired CT positioning image of the radiotherapy tumor;
inputting the moving image and the reference image into a deformation registration network for feature extraction, and calculating correlation features;
inputting the correlation characteristics into a decoder formed by a 3D convolutional neural network to obtain a deformation field;
carrying out space deformation on the moving image according to the deformation field to obtain a registered image;
and carrying out structural similarity loss calculation on the registered image and the reference image, determining a loss value, and completing training when the loss value is smaller than or equal to a loss value threshold.
2. The deformation registration model training method according to claim 1, wherein the inputting the moving image and the reference image into a deformation registration network performs feature extraction, and calculating correlation features includes:
respectively inputting the moving image and the reference image into a 3D convolutional neural network to perform feature extraction to generate a first feature and a second feature;
stretching the first feature and the second feature into a one-dimensional form and merging the first feature and the second feature into position coding information in a pixel-by-pixel addition mode to obtain a first feature embedded vector and a second feature embedded vector;
inputting the first feature embedded vector and the second feature embedded vector into a full connection layer to perform feature dimension transformation;
and carrying out correlation calculation on the first feature embedded vector and the second feature embedded vector after dimension transformation according to a preset correlation calculation formula to obtain the correlation feature.
3. The deformation registration model training method according to claim 2, wherein the correlation calculation formula is as follows:

M_r = Softmax(Q_M · K_F^T / √d) · V_F

wherein M_r is the correlation feature; Softmax is the normalized exponential function; Q_M is the feature embedding vector corresponding to the features extracted from the moving image, used as the query vector; K_F and V_F are the feature embedding vectors corresponding to the features extracted from the reference image, used as the key vector and value vector respectively; d is the dimension of the query vector; and K_F^T is the transpose of the vector K_F.
4. A dose stacking method based on deformation registration, the method comprising:
acquiring CT positioning images of the radiotherapy tumor;
determining a deformation field based on a preset deformation registration model, and performing deformation registration on the CT positioning image of the radiotherapy tumor;
and realizing the dose superposition of the radiotherapy according to the deformation field and a preset dose superposition method.
5. The deformation registration-based dose stacking method of claim 4, wherein the dose stacking for radiation therapy according to the deformation field and preset dose stacking method comprises:
performing biological conversion on all dose values of the external-beam and internal irradiation treatments, using the linear-quadratic model to convert the delivered physical dose into the equivalent dose in 2 Gy fractions (EQD2);
carrying out deformation registration on the CT positioning image of the radiotherapy tumor according to the deformation field;
and performing voxel-level point-to-point addition of the therapeutic dose distributions of the multiple radiotherapy sessions according to the deformation result to obtain the superimposed dose distribution.
6. The deformation registration-based dose stacking method of claim 4, further comprising:
performing DVH statistical analysis and dosiomics feature extraction for each organ at risk on the superimposed dose distribution;
taking the DVH statistics, dosiomics features and clinical indicators of the patient as input, and building a prediction model from feature parameters to toxicity using a machine-learning classification model; the classification model includes, but is not limited to: SVM, Random Forest or XGBoost;
and predicting the related toxicity of the organs at risk according to the prediction model.
7. A deformation registration model training device, the device comprising:
the determining module is used for determining a moving image and a reference image according to the acquired CT positioning image of the radiotherapy tumor;
the feature processing module is used for inputting the moving image and the reference image into a deformation registration network for feature extraction and calculating correlation features;
the decoding module is used for inputting the correlation characteristics into a decoder formed by a 3D convolutional neural network to obtain a deformation field;
the deformation module is used for carrying out space deformation on the moving image according to the deformation field to obtain a registered image;
and the calculation module is used for carrying out structural similarity loss calculation according to the registered image and the reference image, determining a loss value, and completing training when the loss value is smaller than or equal to a loss value threshold value.
8. A dose stacking device based on deformation registration, the device comprising:
the acquisition module is used for acquiring CT positioning images of the radiotherapy tumor;
the deformation module is used for determining a deformation field based on a preset deformation registration model and carrying out deformation registration on the CT positioning image of the radiotherapy tumor;
and the superposition module is used for realizing the dose superposition of the radiotherapy according to the deformation field and a preset dose superposition method.
9. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program, the processor implementing the method according to any of claims 1 to 6 when the computer program is executed.
10. A computer readable storage medium, characterized in that a computer program is stored, which computer program, when being executed by a processor, implements the method according to any of claims 1 to 6.
CN202311160138.3A 2023-09-08 2023-09-08 Deformation registration model training and dose stacking method and device Active CN117197203B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311160138.3A CN117197203B (en) 2023-09-08 2023-09-08 Deformation registration model training and dose stacking method and device


Publications (2)

Publication Number Publication Date
CN117197203A true CN117197203A (en) 2023-12-08
CN117197203B CN117197203B (en) 2024-02-20

Family

ID=88984499

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311160138.3A Active CN117197203B (en) 2023-09-08 2023-09-08 Deformation registration model training and dose stacking method and device

Country Status (1)

Country Link
CN (1) CN117197203B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070211949A1 (en) * 2006-03-09 2007-09-13 Juergen Neumann Two-Dimensional Adaptive Image Compression Method
CN103226837A (en) * 2013-05-21 2013-07-31 南方医科大学 Method for generating distribution image used for observing cervix tumour radiotherapy total dose
US20190057505A1 (en) * 2017-08-17 2019-02-21 Siemens Healthcare Gmbh Automatic change detection in medical images
CN111192260A (en) * 2020-01-03 2020-05-22 天津大学 Melon quality detection method based on hyperspectral image depth feature fusion
CN115359103A (en) * 2022-08-24 2022-11-18 北京医智影科技有限公司 Image registration network model and establishing method, device and medium thereof
CN115410093A (en) * 2022-08-31 2022-11-29 西安理工大学 Remote sensing image classification method based on dual-channel coding network and conditional random field
CN115738099A (en) * 2021-09-03 2023-03-07 上海联影医疗科技股份有限公司 Dose verification method and system
CN115830016A (en) * 2023-02-09 2023-03-21 真健康(北京)医疗科技有限公司 Medical image registration model training method and equipment
CN116012344A (en) * 2023-01-29 2023-04-25 东北林业大学 Cardiac magnetic resonance image registration method based on mask self-encoder CNN-transducer
CN116309754A (en) * 2023-03-29 2023-06-23 重庆邮电大学 Brain medical image registration method and system based on local-global information collaboration
CN116485853A (en) * 2023-04-14 2023-07-25 深圳技术大学 Medical image registration method and device based on deep learning neural network


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GUO Xiaotong: "Research on coseismic, interseismic and postseismic deformation based on high-precision InSAR image registration", China Master's Theses Full-text Database (Basic Sciences) *

Also Published As

Publication number Publication date
CN117197203B (en) 2024-02-20

Similar Documents

Publication Publication Date Title
US11944463B2 (en) Pseudo-CT generation from MR data using a feature regression model
Chen et al. U‐net‐generated synthetic CT images for magnetic resonance imaging‐only prostate intensity‐modulated radiation therapy treatment planning
CN108778416B (en) Systems, methods, and media for pseudo-CT generation from MR data using tissue parameter estimation
Peng et al. A method of rapid quantification of patient‐specific organ doses for CT using deep‐learning‐based multi‐organ segmentation and GPU‐accelerated Monte Carlo dose computing
CN113129308B (en) Image segmentation method, apparatus and non-transitory computer readable storage medium
Chen et al. MR image‐based synthetic CT for IMRT prostate treatment planning and CBCT image‐guided localization
US20140212013A1 (en) Method and Apparatus for Generating a Derived Image Using Images of Different Types
Sanders et al. Machine segmentation of pelvic anatomy in MRI-assisted radiosurgery (MARS) for prostate cancer brachytherapy
O’Connor et al. Comparison of synthetic computed tomography generation methods, incorporating male and female anatomical differences, for magnetic resonance imaging-only definitive pelvic radiotherapy
Salehi et al. Deep learning-based non-rigid image registration for high-dose rate brachytherapy in inter-fraction cervical cancer
Adamson et al. Evaluation of a V‐Net autosegmentation algorithm for pediatric CT scans: Performance, generalizability, and application to patient‐specific CT dosimetry
CN117197203B (en) Deformation registration model training and dose stacking method and device
US20230065196A1 (en) Patient-specific organ dose quantification and inverse optimization for ct
Vazquez et al. A deep learning-based approach for statistical robustness evaluation in proton therapy treatment planning: a feasibility study
CN111583303A (en) System and method for generating pseudo CT image based on MRI image
Zeng et al. TransQA: deep hybrid transformer network for measurement-guided volumetric dose prediction of pre-treatment patient-specific quality assurance
Charters Automated Patient Safety Management and Quality Control in Radiation Therapy
Wen RETRACTED ARTICLE: Application of Monte Carlo calculation method based on special graph in medical imaging
Ganß et al. Deep Learning Approaches for Contrast Removal from Contrast-enhanced CT: Streamlining Personalized Internal Dosimetry
Gay et al. Identifying the optimal deep learning architecture and parameters for automatic beam aperture definition in 3D radiotherapy
Sreeja et al. Pseudo computed tomography image generation from brain magnetic resonance image using integration of PCA & DCNN-UNET: A comparative analysis
CN116563402A (en) Cross-modal MRI-CT image synthesis method, system, equipment and medium
CN117423426A (en) Method, device and equipment for constructing treatment plan dose prediction model
CN116091517A (en) Medical image processing method, medical image processing device, storage medium and computer program product
CN114913261A (en) Three-dimensional in-vivo dose reconstruction method and device based on deep neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant