CN112348857A - Radiotherapy positioning offset calculation method and system based on deep learning - Google Patents

Radiotherapy positioning offset calculation method and system based on deep learning

Info

Publication number
CN112348857A
Authority
CN
China
Prior art keywords
image
images
drr
training
data
Prior art date
Legal status
Pending
Application number
CN202011235669.0A
Other languages
Chinese (zh)
Inventor
姚伟
姚毅
Current Assignee
Suzhou Linatech Medical Science And Technology Co ltd
Original Assignee
Suzhou Linatech Medical Science And Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Linatech Medical Science And Technology Co ltd filed Critical Suzhou Linatech Medical Science And Technology Co ltd
Priority to CN202011235669.0A
Publication of CN112348857A
Legal status: Pending

Classifications

    • G06T 7/0012 - Image analysis; inspection of images; biomedical image inspection
    • G06F 18/214 - Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N 3/045 - Neural networks; architecture; combinations of networks
    • G06T 7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 2207/10081 - Image acquisition modality; tomographic images; computed x-ray tomography [CT]
    • G06T 2207/30008 - Biomedical image processing; bone
    • G06T 2207/30096 - Biomedical image processing; tumor; lesion


Abstract

The invention discloses a deep-learning-based method and system for calculating radiotherapy positioning offset. The invention obtains the correct offset without drawing contour lines on the DRR and DR images or comparing them by eye, and the calculation runs on a modern graphics processing unit (GPU) in less than one second, greatly reducing the setup technician's workload, improving treatment outcomes for tumor patients, and raising the hospital's overall treatment efficiency.

Description

Radiotherapy positioning offset calculation method and system based on deep learning
Technical Field
The invention belongs to the technical field of medical treatment, and particularly relates to a radiotherapy positioning offset calculation method and system based on deep learning.
Background
At present, tumor radiotherapy is generally delivered in fractions. At each treatment the patient is re-immobilized with thermoplastic masks, vacuum bags and similar devices, aligned with laser lamps, so as to reproduce the posture fixed during the positioning CT scan and the setup verified at simulation. Even so, for various reasons a residual positioning deviation remains, typically from a few millimeters to one centimeter, and occasionally several centimeters.
A radiotherapy accelerator equipped with an MV-EPID (Megavoltage Electronic Portal Imaging Device) can capture images at two orthogonal angles before a patient is treated, generally 0 and 90 degrees or 0 and 270 degrees, where 0 degrees means the gantry head is directly above the patient, 90 degrees means the head has rotated 90 degrees clockwise, and 270 degrees means it has rotated 90 degrees counterclockwise. The DR (Digital Radiography) images at the two angles are compared with the corresponding DRR (Digitally Reconstructed Radiograph, a digital radiograph reconstructed from CT) images to obtain the positioning deviation, and treatment begins after the deviation is corrected. At present the comparison is usually done by drawing delineation lines or by eye; it must be performed manually by a technician, which is time-consuming, labor-intensive and prone to human error. Fig. 1 shows the operating interface of one company's IVS (Image Viewing System) software.
Disclosure of Invention
In order to solve the technical problem, the invention provides a radiotherapy positioning offset calculation method and system based on deep learning.
In order to achieve the purpose, the technical scheme of the invention is as follows:
on one hand, the invention discloses a radiotherapy positioning offset calculation method based on deep learning, which comprises the following steps:
(1) collecting matched DR images, CT images and/or DRR images from clinical use, wherein the CT images can generate corresponding DRR images;
(2) labeling the body-part category of the matched images, and training an image part classification network with the labeled data to obtain a part classification model;
(3) further labeling the images of the preset part obtained in step (1), the labeling consisting of a professional physicist aligning the images, and then training a registration network with the aligned images to obtain an image registration model;
(4) analyzing and processing clinically obtained DR and DRR images with the models trained in steps (2) and (3) to obtain the patient's positioning deviation along the axes of the three-dimensional coordinate system, wherein step (4) specifically comprises the following steps:
(4.1) analyzing the image data of the DR image and the DRR image;
(4.2) stacking the collected 0-degree DR image into two channels and the 0-degree DRR image into a third channel to form a three-channel image, inputting it into the part classification model, and judging whether the image shows the preset body part; if so, continuing to the next step;
(4.3) preprocessing the 0-degree DR and DRR images, inputting them into an image registration convolutional neural network, and calculating the offset x in the LAT (left-right) direction and the offset z0 in the LNG (head-foot) direction;
(4.4) preprocessing the 90/270-degree DR and DRR images, inputting them into an image registration convolutional neural network, and calculating the offset y in the VRT (front-back) direction and the offset z1 in the LNG (head-foot) direction;
(4.5) taking the deviation in the LNG (head-foot) direction as z = (z0 + z1)/2;
(4.6) converting the offset values in the three directions according to the pixel size.
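The offset combination in steps (4.5) and (4.6) can be sketched as a small helper. This is a minimal illustration: the function name, argument names and the single shared pixel size are assumptions for the sketch, not part of the patent text:

```python
def combine_offsets(x_px, z0_px, y_px, z1_px, pixel_mm):
    """Combine per-view pixel offsets into three couch shifts in mm.

    x_px and z0_px come from the 0-degree DR/DRR pair (LAT and LNG);
    y_px and z1_px come from the 90/270-degree pair (VRT and LNG).
    The LNG offset is measured in both views, so step (4.5) averages
    it; step (4.6) scales every offset by the pixel size.
    """
    z_px = (z0_px + z1_px) / 2.0
    return {"LAT": x_px * pixel_mm,
            "LNG": z_px * pixel_mm,
            "VRT": y_px * pixel_mm}
```

For example, pixel offsets of (x, z0, y, z1) = (4, 2, 6, 4) with a 0.5 mm pixel would yield shifts of 2.0, 1.5 and 3.0 mm.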
On the basis of the technical scheme, the following improvements can be made:
preferably, step (1) further comprises the following steps: and (5) sorting and cleaning the collected data images.
Preferably, the specific content of the data image collected in the step (1) for sorting and cleaning comprises one or more of the following; and checking whether the data images are matched or not and whether the data images are clinical patient images or not, removing the images which do not meet the requirements, and classifying the images which meet the requirements according to whether CT exists or not.
Preferably, the input of step (2) is several groups of medical image data, each group including at least two DR/DRR image pairs, one at 0 degrees and one at 90/270 degrees, and step (2) specifically includes the following steps:
(2.1) carrying out data annotation, namely viewing each group of data, and classifying each group of data according to different categories;
(2.2) dividing the labeled medical images into a training set and a test set in a set ratio;
(2.3) preprocessing the images of the training set and the test set;
(2.4) sending the preprocessed training set data and the label into an image part classification algorithm, and training the weight of the model;
(2.5) testing the classification accuracy of the trained model with the preprocessed test-set data; if the iteration stop condition has not been met and the specified number of training iterations has not been reached, continuing the training of step (2.4); if the stop condition is reached, stopping the iterative training and saving the weights of the trained part classification model.
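The train/test loop of steps (2.4) and (2.5) can be illustrated with a toy stand-in: a linear softmax classifier trained by gradient descent, with the same stop logic (halt when test accuracy stops improving or a maximum iteration count is reached). The real network is ResNet-18; everything here (the numpy classifier, learning rate and patience values) is an illustrative assumption:

```python
import numpy as np

def train_classifier(X_tr, y_tr, X_te, y_te, n_classes,
                     lr=0.1, max_epochs=200, patience=10):
    """Gradient-descent training with the stop condition of step (2.5):
    keep training while test accuracy improves; stop after `patience`
    epochs without improvement or after `max_epochs` epochs."""
    W = np.zeros((X_tr.shape[1], n_classes))
    best_acc, best_W, stale = 0.0, W.copy(), 0
    for _ in range(max_epochs):
        logits = X_tr @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        p[np.arange(len(y_tr)), y_tr] -= 1.0        # softmax cross-entropy gradient
        W -= lr * (X_tr.T @ p) / len(y_tr)          # step (2.4): update the weights
        acc = float((np.argmax(X_te @ W, axis=1) == y_te).mean())
        if acc > best_acc:
            best_acc, best_W, stale = acc, W.copy(), 0
        else:                                       # step (2.5): accuracy not improving
            stale += 1
            if stale >= patience:
                break
    return best_W, best_acc
```

The same early-stopping pattern applies unchanged when the linear model is replaced by a deep network trained in TensorFlow or PyTorch.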
Preferably, the input of step (3) is several groups of medical image data, each group including at least two DR/DRR image pairs, one at 0 degrees and one at 90/270 degrees, and step (3) specifically includes the following steps:
(3.1) aligning the DR images and the DRR images at 0 degree and 90 degrees/270 degrees;
(3.2) dividing the registered medical images into a training set and a test set in a set ratio;
(3.3) preprocessing the training-set and test-set images, randomly offsetting the registered images, and recording the generated offset value as the ground-truth label;
(3.4) sending the preprocessed training data set and the truth value offset label into a registration algorithm together, and training the weight of the model;
(3.5) testing the error between the model's predicted registration deviation and the true value with the preprocessed test-set data; if the iteration stop condition has not been met and the specified number of training iterations has not been reached, continuing the training of step (3.4); if the stop condition is reached, stopping the iterative training and saving the weights of the trained registration model.
Preferably, the CT image is generated into a DRR image by the following steps:
(d1) determining the position of the ray source from the plan center point and the source-to-axis distance, and determining the involved CT voxel range from the plan center point, the pixel size and the image matrix size;
(d2) setting a proper attenuation rate;
(d3) a plurality of virtual rays are emitted from a ray source toward an imaging region, the rays pass through involved CT voxels, the path length through each voxel is calculated, and the gray level of the pixel obtained by transmission is calculated according to the attenuation rate.
Preferably, in step (d2),
if a DRR image of bone is to be obtained, an appropriate attenuation-rate threshold is set and the attenuation rate of tissue below the threshold is set to 0;
if a DRR image of soft tissue is to be obtained, the attenuation rate of tissue above the threshold is set to 0;
to obtain a DRR image of a specific tissue, that tissue is segmented and the attenuation rate of all other tissue is set to 0.
Preferably, the DR image, the CT image, and the DRR image are DICOM-format files.
In another aspect, the present invention also discloses a computing system, comprising:
one or more processors;
a memory;
and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for any of the deep learning based radiation therapy setup offset calculation methods described above.
According to the deep-learning-based method and system for automatically calculating radiotherapy positioning offset, the accurate offset is obtained without drawing contour lines on the DRR and DR images or comparing them visually; the calculation runs on a modern graphics processing unit (GPU) in less than one second, which greatly reduces the setup technician's workload, improves treatment outcomes for tumor patients, and raises the hospital's overall treatment efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a schematic view of an IVS software operation interface provided in the prior art.
Fig. 2 is a schematic block diagram of the flow of step (4) provided in the embodiment of the present invention.
Fig. 3 is a schematic block diagram of a flow of an image registration convolutional neural network according to an embodiment of the present invention.
Fig. 4 is a schematic block diagram of the flow of step (2) provided in the embodiment of the present invention.
Fig. 5 is a schematic block diagram of the flow of step (3) provided in the embodiment of the present invention.
Detailed Description
Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
To achieve the object of the present invention, in some embodiments of the deep-learning-based radiotherapy positioning offset calculation method and system, the method, taking head tumor therapy as an example, comprises the following steps:
(1) collecting DR images, CT images and/or DRR images from clinical use, where the DR images are required and at least one of the DRR and CT images must be present; the CT images can generate corresponding DRR images, and the formats of the DR, CT and DRR images can be, but are not limited to, DICOM;
(2) labeling the body-part category of the matched images, where the categories can be set to five types (head, neck, chest, abdomen and pelvis), and training an image part classification network with the labeled data to obtain a part classification model;
(3) further labeling the head images obtained in step (1), aligning the images, and then training a registration network with the aligned images to obtain an image registration model;
(4) analyzing and processing clinically obtained DR and DRR images with the models trained in steps (2) and (3) to obtain the patient's positioning deviation along the axes of the three-dimensional coordinate system, wherein step (4) specifically comprises the following steps, as shown in fig. 2:
(4.1) analyzing the image data of the DR image and the DRR image;
(4.2) stacking the collected 0-degree DR image into two channels and the 0-degree DRR image into a third channel to form a three-channel image, inputting it into the part classification model, and judging whether the imaged part is the head; if so, continuing to the next step;
(4.3) preprocessing the 0-degree DR and DRR images, inputting them into the image registration convolutional neural network shown in fig. 3, and calculating the offset x in the LAT (left-right) direction and the offset z0 in the LNG (head-foot) direction;
(4.4) preprocessing the 90/270-degree DR and DRR images, inputting them into the image registration convolutional neural network shown in fig. 3, and calculating the offset y in the VRT (front-back) direction and the offset z1 in the LNG (head-foot) direction;
(4.5) taking the deviation in the LNG (head-foot) direction as z = (z0 + z1)/2;
(4.6) converting the offset values in the three directions to millimeters according to the pixel size.
In step (4), the input is a clinically acquired DR image together with a suitable DRR image generated from CT, and the output is the patient's positioning deviation along the axes of the three-dimensional coordinate system. A dedicated pipeline applies the models trained in steps (2) and (3) to the clinically acquired CT, DR and DRR images, automatically classifying the body part and registering the images to obtain the correct offset.
The part classification network in step (2) is ResNet-18. ResNet is a classic deep learning model whose shortcut connections give it excellent training properties; ResNet-18 is the ResNet variant with an 18-layer convolutional neural network.
The ResNet-18 structure is shown in Table 1:
TABLE 1 ResNet-18 network architecture
[Table 1, the ResNet-18 network architecture, appears only as an image in the original publication and is not reproduced here.]
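Since Table 1 survives only as an image, the defining feature of ResNet that the text credits, the shortcut connection, can be illustrated with a minimal numpy sketch. Plain matrices stand in for the 3x3 convolutions of the real ResNet-18; this is an illustration of the idea, not the patent's network:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """A basic ResNet block: the output is F(x) + x, where the
    identity shortcut (+ x) lets signal and gradients bypass the two
    weighted transforms, which is what makes deep ResNets trainable."""
    return relu(relu(x @ W1) @ W2 + x)
```

With zero weights the block reduces to the identity on non-negative inputs, which is exactly the property that makes stacking many such blocks safe.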
Step (1) is data collection, steps (2) and (3) are data labeling and model training, and step (4) is model application. Because the original medical image data are in DICOM format, in the present invention model training is carried out with the deep learning frameworks TensorFlow and PyTorch.
Wherein LAT, LNG and VRT are abbreviations for lateral, longitudinal and vertical, i.e. the left-right, head-foot and front-back directions of a patient in the supine position;
DICOM (Digital Imaging and Communications in Medicine) is the international standard for medical images and related information (ISO 12052), defining a medical image format whose quality meets clinical needs and that can be used for data exchange.
To further optimize the implementation effect of the present invention, in other embodiments the remaining features are the same, except that step (1) further comprises sorting and cleaning the collected image data. In the present invention, data cleaning and data annotation are performed with an IVS system developed by a company.
Further, sorting and cleaning the image data collected in step (1) specifically comprises one or more of the following: checking whether the images are matched and whether they are clinical patient images, removing images that do not meet the requirements, and classifying qualifying images according to whether CT is present.
To further optimize the implementation effect of the invention, in other embodiments the remaining features are the same, except that the input of step (2) is several groups of medical image data, each group including at least two DR/DRR image pairs (one at 0 degrees and one at 90/270 degrees) in DICOM format, and the output is a trained part classification model, using a ResNet-18 convolutional neural network;
the step (2) specifically comprises the following steps:
(2.1) performing data annotation: viewing each group of data and classifying it into one of five categories, namely head (H), neck (N), chest (T), abdomen (A) and pelvis (P);
(2.2) dividing the labeled medical images into a training set and a test set in a set ratio;
(2.3) preprocessing the training-set and test-set images: converting from DICOM to PNG so that a deep learning framework can be used for training, and resizing to match the network input; the network input uses a three-channel image format in which the first two channels are the same 0-degree DR image and the third channel is the 0-degree DRR image;
(2.4) sending the preprocessed training set data and the label into an image part classification algorithm, and training the weight of the model;
(2.5) testing the classification accuracy of the trained model with the preprocessed test-set data; if the iteration stop condition has not been met (for example, the accuracy is still improving) and the specified number of training iterations has not been reached, continuing the training of step (2.4); if the stop condition is reached, stopping the iterative training and saving the weights of the trained part classification model.
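The DICOM-to-PNG conversion in step (2.3) amounts to windowing the raw pixel array into 8-bit gray levels. A minimal sketch follows; in practice the array would come from a DICOM reader such as pydicom (`pydicom.dcmread(path).pixel_array`), which is an assumed tooling choice rather than something the text prescribes:

```python
import numpy as np

def dicom_to_uint8(pixels):
    """Linearly rescale a raw DICOM pixel array to the 0-255 range
    for PNG export, as in preprocessing step (2.3)."""
    p = np.asarray(pixels, dtype=np.float64)
    lo, hi = p.min(), p.max()
    if hi == lo:                       # flat image: avoid divide-by-zero
        return np.zeros(p.shape, dtype=np.uint8)
    return np.round(255.0 * (p - lo) / (hi - lo)).astype(np.uint8)
```

Resizing the 8-bit result to the network's input resolution then completes the preprocessing.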
To further optimize the implementation effect of the present invention, in other embodiments the remaining features are the same, except that the input of step (3) is several groups of medical image data, each group including at least two DR/DRR image pairs (one at 0 degrees and one at 90/270 degrees) in DICOM format, and the output is a trained registration model that can predict the three-direction offset of a group of clinically obtained images;
the step (3) specifically comprises the following steps:
(3.1) experienced clinical staff manually register each group of images, aligning the 0-degree and 90/270-degree DR images to the accuracy required for precision radiotherapy;
(3.2) dividing the registered medical images into a training set and a test set in a set ratio;
(3.3) preprocessing the training-set and test-set images: converting from DICOM to PNG so that a deep learning framework can be used for training; randomly offsetting the registered images and recording the generated offset value, normalized to a suitable pixel size, as the ground-truth label; and cropping out an image sized to fit the network input;
(3.4) sending the preprocessed training data set and the truth value offset label into a registration algorithm together, and training the weight of the model;
(3.5) testing the error between the model's predicted registration deviation and the true value with the preprocessed test-set data; if the iteration stop condition has not been met (for example, the error can still be reduced) and the specified number of training iterations has not been reached, continuing the training of step (3.4); if the stop condition is reached, stopping the iterative training and saving the weights of the trained registration model.
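The self-supervised labeling of step (3.3), shifting an aligned image by a known amount and using that shift as the ground-truth label, can be sketched as follows. `np.roll` stands in for the shift-and-crop the text describes, and the shift bound is an assumed parameter:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_training_pair(drr, dr, max_shift=20):
    """Return (drr, shifted_dr, (dx, dy)): the registered DR image
    shifted by a random integer offset, with that offset recorded as
    the ground-truth label for registration training."""
    dx, dy = rng.integers(-max_shift, max_shift + 1, size=2)
    shifted = np.roll(np.roll(dr, dy, axis=0), dx, axis=1)
    return drr, shifted, (int(dx), int(dy))
```

Because the label is generated rather than measured, arbitrarily many training pairs can be produced from each manually aligned group.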
To further optimize the implementation effect of the present invention, on the basis of the above embodiment, if the DRR input in step (4) is unsuitable and the data include CT images, a suitable DRR can be generated with a DRR generation algorithm. For example, when the available DRR shows soft tissue (soft DRR) but a bone DRR is needed, the following algorithm steps can be performed:
(d1) determining the position of the ray source from the plan center point and the source-to-axis distance, and determining the involved CT voxel range from the plan center point, the pixel size and the image matrix size;
(d2) setting a proper attenuation rate;
(d3) a plurality of virtual rays are emitted from a ray source toward an imaging region, the rays pass through involved CT voxels, the path length through each voxel is calculated, and the gray level of the pixel obtained by transmission is calculated according to the attenuation rate.
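Steps (d1)-(d3) describe standard ray casting. A minimal per-pixel sketch follows, using uniform sampling along the ray as a stand-in for exact per-voxel path lengths (an exact method such as Siddon's algorithm would replace the sampling loop); the function names and the sample count are assumptions:

```python
import numpy as np

def drr_pixel(ct_mu, source, pixel_pos, n_samples=200):
    """Cast one virtual ray from the source to a detector pixel,
    accumulate attenuation through the CT volume (step (d3)), and
    return the transmitted gray level exp(-line integral)."""
    source = np.asarray(source, dtype=float)
    pixel_pos = np.asarray(pixel_pos, dtype=float)
    step = np.linalg.norm(pixel_pos - source) / n_samples
    total = 0.0
    for t in np.linspace(0.0, 1.0, n_samples):
        p = source + t * (pixel_pos - source)
        idx = np.round(p).astype(int)                 # nearest CT voxel
        if all(0 <= idx[k] < ct_mu.shape[k] for k in range(3)):
            total += ct_mu[tuple(idx)] * step         # mu times path length
    return np.exp(-total)                             # value in (0, 1]
```

Repeating this for every detector pixel yields the full DRR; a ray that misses the volume entirely returns 1.0 (no attenuation).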
To further optimize the effect of the present invention, on the basis of the above embodiment, in step (d2),
if a DRR image of bone is to be obtained, an appropriate attenuation-rate threshold is set and the attenuation rate of tissue below the threshold is set to 0;
if a DRR image of soft tissue is to be obtained, the attenuation rate of tissue above the threshold is set to 0;
to obtain a DRR image of a specific tissue, that tissue is segmented and the attenuation rate of all other tissue is set to 0.
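The tissue selection of step (d2) is a thresholding of the attenuation volume before ray casting. A sketch, with an assumed threshold value:

```python
import numpy as np

def select_tissue(ct_mu, mode, threshold=0.3):
    """Zero out attenuation outside the tissue of interest, per
    step (d2): keep only high-attenuation voxels for a bone DRR,
    only low-attenuation voxels for a soft-tissue DRR."""
    mu = np.asarray(ct_mu, dtype=float).copy()
    if mode == "bone":
        mu[mu < threshold] = 0.0
    elif mode == "soft":
        mu[mu >= threshold] = 0.0
    return mu
```

Ray casting the masked volume then produces the bone-only or soft-tissue-only DRR described in the text.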
In another aspect, the present invention also discloses a computing system, comprising:
one or more processors;
a memory;
and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for a deep learning based radiation therapy placement offset calculation method as disclosed in any of the embodiments above.
It should be understood that the various techniques described herein may be implemented in connection with hardware or software or, alternatively, with a combination of both. Thus, the methods and apparatus of the present invention, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention.
The input data of the invention are the DRR image reconstructed from the patient's positioning CT and the DR image captured before each fractionated treatment. In general, the DR image is a square image of fixed size, while the DRR image depends on the CT parameters: its pixel size varies widely, and so does its matrix size. The automatic offset calculation proceeds as follows:
(a) scaling the 0-degree DRR and DR images to the size expected by the network input, combining two channels of the DR image with one channel of the DRR image to form a three-channel image, sending it to the image part classification network, and determining the body part in the current DRR/DR pair by model prediction; if the part is the head, continuing to step (b);
(b) normalizing the pixels of the 0-degree DRR and DR images to a suitable size, cropping a suitable central region, sending it to the image registration network, and obtaining the LAT-direction deviation x and the LNG-direction deviation z0 by model prediction;
(c) normalizing the pixels of the 90- or 270-degree DRR and DR images to a suitable size, cropping a suitable central region, sending it to the image registration network, and obtaining the VRT-direction deviation y and a second LNG-direction deviation z1 by model prediction;
(d) taking the deviation in the LNG direction as (z0 + z1)/2, and converting the actual deviations in millimeters in the three directions according to the pixel size.
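The three-channel input assembly used in step (a) can be sketched directly; channels-first ordering is assumed to match a PyTorch-style network:

```python
import numpy as np

def prepare_classifier_input(dr0, drr0):
    """Stack the 0-degree DR image twice and the 0-degree DRR image
    once into the three-channel array the part classifier expects."""
    dr0 = np.asarray(dr0)
    drr0 = np.asarray(drr0)
    assert dr0.shape == drr0.shape, "DR and DRR must be resized to the same grid"
    return np.stack([dr0, dr0, drr0], axis=0)
```

The stacked array is then fed to the part classification network, and the registration networks of steps (b) and (c) receive the cropped single-view pairs.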
With the development of deep learning, machine learning and artificial intelligence, an algorithm can be designed to learn a model that calculates the positioning deviation automatically, without manual comparison. This eliminates human error, improves radiotherapy accuracy, shortens the overall radiotherapy time, and allows a hospital to treat more tumor patients.
The automatic positioning-deviation calculation is obtained by collecting radiotherapy CT, DR and DRR image data, cleaning and labeling the data, designing and training a deep learning model, and using the learned model to infer on images collected at treatment time. However, because the DR image is MV-level radiographic imaging on the accelerator, CT is kV-level radiographic imaging, and the DRR is a simulated image obtained by calculation, the modality differences are large, and no such implementation has yet appeared in the products of the major radiotherapy companies. The deep-learning-based method for automatically calculating head tumor radiotherapy positioning offset therefore has pioneering significance and can fill a gap in the field.
The invention performs automatic registration for head tumor patients, and the error between the calculated offset and the offset obtained by manual delineation-line registration is very small, reaching the level required for precision radiotherapy.
According to the deep-learning-based method and system for automatically calculating radiotherapy positioning offset, the accurate offset is obtained without drawing contour lines on the DRR and DR images or comparing them visually; the calculation runs on a modern graphics processing unit (GPU) in less than one second, which greatly reduces the setup technician's workload, improves treatment outcomes for tumor patients, and raises the hospital's overall treatment efficiency.
The above embodiments merely illustrate the technical concept and features of the present invention; their purpose is to enable those skilled in the art to understand and implement the invention, not to limit its scope. All equivalent changes or modifications made according to the spirit of the present invention shall fall within the scope of the present invention.

Claims (9)

1. A radiotherapy positioning offset calculation method based on deep learning is characterized by comprising the following steps:
(1) collecting matched DR images, CT images and/or DRR images used clinically, wherein corresponding DRR images can be generated from the CT images;
(2) labeling the body-part category of the matched images, and training an image part classification network with the labeled data to obtain a part classification model;
(3) further labeling the images of the preset body part obtained in step (1) and aligning them, then training a registration network with the aligned images to obtain an image registration model;
(4) analyzing and processing the clinically obtained DR images and DRR images with the models obtained by training in steps (2) and (3), to obtain the positioning deviation distances of the patient along the three axes of the spatial coordinate system, wherein step (4) specifically comprises the following steps:
(4.1) parsing the image data of the DR images and the DRR images;
(4.2) combining the collected 0-degree DR image (two channels) and 0-degree DRR image (one channel) into a three-channel image, inputting it into the part classification model, and judging whether the patient has the preset body part; if so, continuing to the next step;
(4.3) preprocessing the 0-degree DR image and DRR image, inputting them into the image registration convolutional neural network, and calculating the offset x in the LAT (left-right) direction and the offset z0 in the LNG (head-foot) direction;
(4.4) preprocessing the 90/270-degree DR image and DRR image, inputting them into the image registration convolutional neural network, and calculating the offset y in the VRT (front-back) direction and the offset z1 in the LNG (head-foot) direction;
(4.5) taking the deviation in the LNG (head-foot) direction as z = (z0 + z1)/2;
and (4.6) converting offset values in three directions according to the pixel size.
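The combination in steps (4.5)-(4.6) can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name is hypothetical, a single uniform pixel size is assumed, and the pixel offsets x, z0, y, z1 are taken as given (in practice they come from the registration networks of steps (4.3)-(4.4)):

```python
def combine_offsets(x, z0, y, z1, pixel_size_mm):
    """Combine per-view pixel offsets into 3D positioning deviations in mm.

    x  : LAT (left-right) offset from the 0-degree DR/DRR pair, in pixels
    z0 : LNG (head-foot) offset from the 0-degree pair, in pixels
    y  : VRT (front-back) offset from the 90/270-degree pair, in pixels
    z1 : LNG offset from the 90/270-degree pair, in pixels
    """
    z = (z0 + z1) / 2.0  # step (4.5): average the two LNG estimates
    # step (4.6): convert pixel offsets to millimetres via the pixel size
    return (x * pixel_size_mm, y * pixel_size_mm, z * pixel_size_mm)
```

Averaging z0 and z1 exploits the fact that the head-foot (LNG) axis is visible in both the 0-degree and the 90/270-degree projections, so the two independent estimates can be combined.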
2. The radiotherapy positioning offset calculation method according to claim 1, further comprising a step (5): sorting and cleaning the collected image data.
3. The method of claim 2, wherein the image data collected in step (1) are sorted and cleaned by one or more of the following: checking whether the images are matched and whether they are clinical patient images, removing images that do not meet the requirements, and classifying the qualifying images according to whether CT data exist.
4. The method of claim 1, wherein the input of step (2) is a plurality of groups of medical image data, each group comprising at least two pairs of DR and DRR images, at 0 degrees and at 90/270 degrees, and wherein step (2) comprises the following steps:
(2.1) carrying out data annotation, namely viewing each group of data, and classifying each group of data according to different categories;
(2.2) dividing the marked medical images into a training set and a testing set according to the proportion;
(2.3) preprocessing the images of the training set and the test set;
(2.4) sending the preprocessed training set data and the label into an image part classification algorithm, and training the weight of the model;
(2.5) testing the classification accuracy of the trained model with the preprocessed test set data; if the iteration stop condition or the specified number of training iterations has not been reached, continuing the training of step (2.4); if the stop condition is reached, stopping the iterative training and saving the weights of the trained part classification model.
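Step (2.2)'s proportional split can be sketched as below. This is a minimal illustration under stated assumptions: the helper name and the 80/20 ratio are not specified by the patent, and each "group" stands in for one labelled set of DR/DRR images:

```python
import random

def split_dataset(groups, train_ratio=0.8, seed=0):
    """Step (2.2): shuffle the labelled image groups and split them
    proportionally into a training set and a test set."""
    rng = random.Random(seed)
    shuffled = groups[:]  # copy so the caller's list is left untouched
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_ratio)
    return shuffled[:n_train], shuffled[n_train:]
```

A fixed seed keeps the split reproducible across training runs, which matters when comparing model checkpoints in step (2.5).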
5. The method of claim 1, wherein the input of step (3) is a plurality of groups of medical image data, each group comprising at least two pairs of DR and DRR images, at 0 degrees and at 90/270 degrees, and wherein step (3) comprises the following steps:
(3.1) aligning the DR images and the DRR images at 0 degree and 90 degrees/270 degrees;
(3.2) dividing the registered medical images into a training set and a test set according to the proportion;
(3.3) preprocessing the images of the training set and the test set, randomly offsetting the registered images, generating and recording an offset value as a true value label;
(3.4) sending the preprocessed training data set and the truth value offset label into a registration algorithm together, and training the weight of the model;
(3.5) testing the error between the model's predicted registration offset and the true value with the preprocessed test set data; if the iteration stop condition or the specified number of training iterations has not been reached, continuing the training of step (3.4); if the stop condition is reached, stopping the iterative training and saving the weights of the trained registration model.
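The key idea of step (3.3) is to perturb already-aligned image pairs and use the known perturbation as the ground-truth regression label. A minimal numpy sketch, assuming integer pixel shifts and a hypothetical helper name (the patent does not specify the shift range or mechanism):

```python
import numpy as np

def make_training_sample(aligned_img, max_shift=10, seed=None):
    """Step (3.3): apply a random pixel shift to an aligned image and
    record the shift as the true-value offset label for training."""
    rng = np.random.default_rng(seed)
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    # np.roll wraps around at the borders; a real preprocessing pipeline
    # would pad and crop instead to avoid wrap-around artifacts
    shifted = np.roll(aligned_img, (dy, dx), axis=(0, 1))
    return shifted, (int(dx), int(dy))  # network input, ground-truth label
```

Because the shift is applied synthetically, arbitrarily many labelled samples can be generated from one aligned pair, which is how the registration network learns to regress offsets without any manually measured deviations.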
6. The radiotherapy positioning offset calculation method of any one of claims 1-5, wherein the DRR images are generated from the CT images by:
(d1) determining the position of the ray source from the plan center point and the source-to-axis distance, and determining the range of involved CT voxels from the plan center point, the pixel size, and the image matrix size;
(d2) setting appropriate attenuation rates;
(d3) casting a plurality of virtual rays from the ray source toward the imaging region; the rays pass through the involved CT voxels, the path length through each voxel is calculated, and the grey level of each transmitted pixel is computed from the attenuation rates.
7. The radiotherapy positioning offset calculation method of claim 6, wherein, in step (d2):
to obtain a DRR image of bone, an appropriate attenuation-rate threshold is set and the attenuation rate of tissue below the threshold is set to 0;
to obtain a DRR image of soft tissue, the attenuation rate of tissue above the threshold is set to 0;
to obtain a DRR image of a specific tissue, that tissue is segmented and the attenuation rates of all other tissues are set to 0.
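The ray casting of steps (d1)-(d3) and the tissue selection of claim 7 can be sketched together. This is a deliberately simplified illustration: it uses parallel rays along one volume axis (so every voxel path length is one voxel and the line integral is a plain sum) instead of the patent's divergent rays from a point source, and the function name and parameters are assumptions:

```python
import numpy as np

def simple_drr(mu_volume, axis=0, threshold=None, keep="above"):
    """Simplified DRR: integrate attenuation along parallel rays.

    mu_volume : 3D array of per-voxel attenuation rates
    threshold : claim-7-style tissue selection; voxels on the wrong side
                of the attenuation threshold get attenuation 0
    keep      : "above" keeps dense tissue (bone DRR),
                "below" keeps soft tissue (soft-tissue DRR)
    Returns a 2D image of transmitted grey levels, exp(-line integral).
    """
    mu = np.asarray(mu_volume, dtype=float).copy()
    if threshold is not None:
        if keep == "above":      # bone DRR: suppress soft tissue
            mu[mu < threshold] = 0.0
        else:                    # soft-tissue DRR: suppress dense tissue
            mu[mu > threshold] = 0.0
    line_integral = mu.sum(axis=axis)  # path length of 1 voxel per step
    return np.exp(-line_integral)      # Beer-Lambert transmitted intensity
```

In the patent's geometry the per-voxel path length varies along each divergent ray, so the plain sum would be replaced by a weighted sum of attenuation times path length (e.g. via Siddon's algorithm); the thresholding logic, however, carries over unchanged.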
8. The radiotherapy positioning offset calculation method of any one of claims 1-5, wherein the DR images, CT images and DRR images are files in DICOM format.
9. A computing system, comprising:
one or more processors;
a memory;
and one or more programs stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing the deep-learning-based radiotherapy positioning offset calculation method of any one of claims 1-8.
CN202011235669.0A 2020-11-06 2020-11-06 Radiotherapy positioning offset calculation method and system based on deep learning Pending CN112348857A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011235669.0A CN112348857A (en) 2020-11-06 2020-11-06 Radiotherapy positioning offset calculation method and system based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011235669.0A CN112348857A (en) 2020-11-06 2020-11-06 Radiotherapy positioning offset calculation method and system based on deep learning

Publications (1)

Publication Number Publication Date
CN112348857A true CN112348857A (en) 2021-02-09

Family

ID=74429001

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011235669.0A Pending CN112348857A (en) 2020-11-06 2020-11-06 Radiotherapy positioning offset calculation method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN112348857A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112785632A (en) * 2021-02-13 2021-05-11 常州市第二人民医院 Cross-modal automatic registration method for DR (digital radiography) and DRR (digital radiography) images in image-guided radiotherapy based on EPID (extended medical imaging)
CN112785632B (en) * 2021-02-13 2024-05-24 常州市第二人民医院 Cross-modal automatic registration method for DR and DRR images in image-guided radiotherapy based on EPID
CN113041516A (en) * 2021-03-25 2021-06-29 中国科学院近代物理研究所 Method, system and storage medium for guiding positioning of three-dimensional image
CN113041516B (en) * 2021-03-25 2022-07-19 中国科学院近代物理研究所 Method, system, processing equipment and storage medium for guiding positioning of three-dimensional image
WO2022198553A1 (en) * 2021-03-25 2022-09-29 中国科学院近代物理研究所 Three-dimensional image-guided positioning method and system, and storage medium
CN113440745A (en) * 2021-07-21 2021-09-28 苏州雷泰医疗科技有限公司 Automatic positioning method and device based on deep learning and radiotherapy equipment

Similar Documents

Publication Publication Date Title
CN112348857A (en) Radiotherapy positioning offset calculation method and system based on deep learning
Torosdagli et al. Deep geodesic learning for segmentation and anatomical landmarking
CN108765417B (en) Femur X-ray film generating system and method based on deep learning and digital reconstruction radiographic image
US8803910B2 (en) System and method of contouring a target area
Heutink et al. Multi-Scale deep learning framework for cochlea localization, segmentation and analysis on clinical ultra-high-resolution CT images
US10149987B2 (en) Method and system for generating synthetic electron density information for dose calculations based on MRI
Bulatova et al. Assessment of automatic cephalometric landmark identification using artificial intelligence
AU2020101836A4 (en) A method for generating femoral x-ray films based on deep learning and digital reconstruction of radiological image
US9142020B2 (en) Osteo-articular structure
KR20080044251A (en) Method of placing constraints on a deformation map and system for implementing same
JP2009536857A (en) Deformable registration of images for image-guided radiology
CN111028914A (en) Artificial intelligence guided dose prediction method and system
US10445904B2 (en) Method and device for the automatic generation of synthetic projections
CN109472835A (en) Handle the method for medical image and the image processing system of medical image
EP4365838A1 (en) Registration method and system
Gupta Challenges for computer aided diagnostics using X-ray and tomographic reconstruction images in craniofacial applications
Widiasri et al. Dental-yolo: Alveolar bone and mandibular canal detection on cone beam computed tomography images for dental implant planning
CN114261095A (en) AI-based orthopedic 3D printing method and device
Uğurlu Performance of a convolutional neural network-based artificial intelligence algorithm for automatic cephalometric landmark detection
CN116630427B (en) Method and device for automatically positioning hip bone and femur in CT image
CN114558251A (en) Automatic positioning method and device based on deep learning and radiotherapy equipment
CN113255774A (en) Automatic positioning method and device based on anatomical structure detection and radiotherapy equipment
Boxwala et al. Retrospective reconstruction of three-dimensional radiotherapy treatment plans of the thorax from two dimensional planning data
CN113226184A (en) Method for metal artifact reduction in X-ray dental volume tomography
Jassim et al. The geometric and dosimetric accuracy of kilovoltage cone beam computed tomography images for adaptive treatment: a systematic review

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination