CN113077433A - Deep learning-based tumor target area cloud detection device, system, method and medium - Google Patents
- Publication number
- CN113077433A (application CN202110342538.0A)
- Authority
- CN
- China
- Prior art keywords
- target area
- deep learning
- tumor target
- tumor
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G06T5/70—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10104—Positron emission tomography [PET]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02E—REDUCTION OF GREENHOUSE GAS [GHG] EMISSIONS, RELATED TO ENERGY GENERATION, TRANSMISSION OR DISTRIBUTION
- Y02E30/00—Energy generation of nuclear origin
- Y02E30/30—Nuclear fission reactors
Abstract
The application discloses a device, system, method and medium for deep learning-based cloud detection of a tumor target area. The device comprises a data acquisition module and a detection module. The data acquisition module is used for acquiring a multi-modal medical image to be detected and corresponding radioactive source parameters. The detection module comprises a delineation submodule, which delineates the tumor target area in the multi-modal medical image to be detected using a trained target area delineation model. The target area delineation model is obtained by training a model constructed based on a deep learning algorithm with a training set comprising multi-modal medical sample images and corresponding sample labels, where each sample label comprises the tumor target area information in the sample image and the corresponding radioactive source parameters. By providing the data acquisition module and the detection module with its delineation submodule on the tumor target area cloud detection device, the application realizes automatic delineation of the tumor target area and improves the accuracy and efficiency of tumor target area detection.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a device, a system, a method and a medium for detecting a tumor target area cloud based on deep learning.
Background
Tumor radiotherapy is a local treatment method that uses radioactive rays to treat tumors. The role and status of radiotherapy in tumor treatment have become increasingly prominent, and it is now one of the main means of treating malignant tumors. The core of modern radiotherapy technology is conformal intensity-modulated radiotherapy, which requires the shape of the radiation field to conform to the shape of the target region, i.e., the lesion, so as to improve the tumor control rate while minimizing damage to normal tissues and organs. Delineation of the target region and organs at risk in the radiotherapy image is therefore a key step in radiotherapy planning.
The traditional target region delineation method is based on CT images of the radiotherapy site: features in a given image slice are extracted as prior knowledge, and the target region is delineated manually, layer by layer, over the slice sequence. On the one hand, delineation in current clinical practice is mostly done by hand by radiotherapists; the workload is heavy, efficiency is low, and it is difficult for different physicians to reach consistency in delineation skill, capability and standards. On the other hand, existing automatic auxiliary systems use template matching, so they cannot adapt the delineation result to the image, require parameters such as the patient's height and weight to be set manually in advance, and thus neither reduce workload effectively nor work efficiently; when a new case is encountered, the accuracy of delineation is hard to guarantee, and a large amount of manual re-delineation and modification is still needed.
Disclosure of Invention
In view of this, the present invention provides a device, system, method and medium for deep learning-based cloud detection of a tumor target area, which achieve automatic delineation of the tumor target area and improve the accuracy and efficiency of tumor target area detection. The specific scheme is as follows:
A first aspect of the application provides a deep learning-based tumor target area cloud detection device, comprising a data acquisition module and a detection module, wherein:
the data acquisition module is used for acquiring the multi-modal medical image to be detected and corresponding radioactive source parameters;
the detection module comprises a delineation submodule, and the delineation submodule is used for delineating a tumor target area in the multi-modal medical image to be detected by using a trained target area delineation model; the target region delineation model is obtained by training a model constructed based on a deep learning algorithm by using a training set, wherein the training set comprises a multi-modal medical sample image and a corresponding sample label, and the sample label comprises tumor target region information in the multi-modal medical sample image and a corresponding radioactive source parameter.
Optionally, the detection module further includes a dose calculation submodule, and the dose calculation submodule is configured to calculate a distribution of absorbed doses of the radiation by using an in-vivo dose absorption distribution calculation model.
Optionally, the dose calculation sub-module is further configured to calculate a distribution of absorbed doses of the radiation using a single homogeneous dosimetry model.
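The claims name an in-vivo dose absorption distribution model and a single homogeneous dosimetry model but do not disclose their internals. Purely as an illustration of what a single-homogeneous-medium calculation can look like, the sketch below combines inverse-square geometric falloff with exponential attenuation; the function name, the default attenuation coefficient and the gamma constant are all assumptions, not values from the patent.

```python
import math

def point_source_dose_rate(activity_bq, distance_cm, mu_per_cm=0.096, gamma_constant=1.0):
    """Illustrative point-source dose-rate model in a single homogeneous
    medium: inverse-square geometric falloff times exponential attenuation.
    All parameter values here are placeholders, not clinical data."""
    if distance_cm <= 0:
        raise ValueError("distance must be positive")
    geometric = activity_bq / (4 * math.pi * distance_cm ** 2)
    attenuation = math.exp(-mu_per_cm * distance_cm)
    return gamma_constant * geometric * attenuation
```

A real dose engine would integrate such point contributions over the radioactive source geometry and tissue map rather than use a single point.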
Optionally, the delineation sub-module includes:
a preprocessing unit for preprocessing the multi-modality medical image;
the image fusion unit is used for obtaining the coordinate corresponding relation between the preprocessed multi-modal medical images by utilizing a multi-modal medical image fusion technology and carrying out coordinate matching operation on the multi-modal medical images based on the coordinate corresponding relation;
the recognition unit is used for recognizing all target parts in the multi-modal medical image after the coordinate matching by using a target detection algorithm based on a convolutional neural network and marking the tumor position in the multi-modal medical image;
and the target area delineating unit is used for delineating the tumor target area in the multi-modal medical image by utilizing a three-dimensional convolutional neural network based on all target part information and tumor position information.
Optionally, the preprocessing unit includes a denoising subunit and a normalizing subunit, where:
the denoising subunit is specifically configured to perform denoising processing on the multimodal medical image through a gaussian filtering method;
the normalization subunit is specifically configured to perform normalization processing on the denoised multi-modal medical image.
Optionally, the identification unit includes an extraction subunit, a classification subunit, a correction subunit, and a marking subunit, where:
the extraction subunit is specifically configured to extract a preset number of target candidate regions from the multi-modal medical image after each coordinate matching, and perform feature extraction on the target candidate regions by using a convolutional neural network;
the classification subunit is specifically configured to input the extracted features into a support vector machine classifier to identify all target portions in the multi-modal medical image after coordinate matching;
the correction subunit is specifically configured to correct the position of the identified candidate frame of the target portion by using a regressor;
the marking subunit is specifically configured to mark a tumor position in the multimodal medical image by a standard uptake value method.
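The patent does not give a formula for the standard uptake value. In general PET practice, the body-weight SUV is tissue activity concentration normalized by injected dose per unit body weight; the sketch below assumes that convention, and the fixed 2.5 cutoff in `mark_tumor_voxels` is a commonly cited but not universal threshold — both are assumptions, not disclosures from the patent.

```python
def suv_body_weight(activity_bq_per_ml, injected_dose_bq, body_weight_kg):
    """Body-weight-normalized standard uptake value (SUVbw):
    tissue concentration (Bq/mL) / (injected dose (Bq) / body weight (g)).
    With tissue density ~1 g/mL, SUV is dimensionless."""
    body_weight_g = body_weight_kg * 1000.0
    return activity_bq_per_ml * body_weight_g / injected_dose_bq

def mark_tumor_voxels(suv_map, threshold=2.5):
    """Flag voxels whose SUV exceeds a fixed cutoff (threshold value is
    an assumption; clinical thresholds vary by tracer and site)."""
    return [[v > threshold for v in row] for row in suv_map]
```

For example, 10 kBq/mL of uptake in a 70 kg patient injected with 370 MBq gives an SUV of about 1.9, below the illustrative cutoff.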
A second aspect of the application provides a deep learning-based tumor target area cloud detection system, comprising a plurality of tumor target area detection clients and at least one aforementioned deep learning-based tumor target area cloud detection device, wherein:
the tumor target area detection client is used for sending the multi-modal medical image to be detected and the corresponding radioactive source parameters to the deep learning-based tumor target area cloud detection device, and for receiving the delineation result and the absorbed dose distribution calculation result returned by the device and detecting the tumor target area in the multi-modal medical image to be detected based on them.
Optionally, the secure data transmission communication framework between the tumor target area detection client and the deep learning-based tumor target area cloud detection device is constructed based on the Transport Layer Security (TLS) protocol.
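The patent only names TLS as the basis of the client-device channel. As a minimal, hedged sketch of that idea, the Python standard-library `ssl` module can build a verifying client context with a modern protocol floor; server hostnames, certificates and the surrounding socket code are deployment-specific and omitted here.

```python
import ssl

def make_client_context():
    """Build a TLS client context with certificate verification and a
    modern minimum protocol version, as one plausible basis for the
    secure client-to-cloud-device channel described above."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject legacy SSL/TLS versions
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx
```

A client would then wrap its TCP socket with `ctx.wrap_socket(sock, server_hostname=...)` before transmitting image payloads and parameters.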
The third aspect of the present application provides a deep learning-based tumor target area cloud detection method, which is applied to the aforementioned deep learning-based tumor target area cloud detection apparatus, and includes:
receiving a multi-modal medical image to be detected and corresponding radioactive source parameters sent by a tumor target area detection client;
delineating, by the delineation submodule on the deep learning-based tumor target area cloud detection device, the tumor target area in the multi-modal medical image to be detected;
and sending the delineation result to the tumor target area detection client through a communication network so that the tumor target area detection client can detect the tumor target area in the multi-modal medical image to be detected based on the delineation result.
A fourth aspect of the present application provides a deep learning-based tumor target area cloud detection apparatus, which includes a processor and a memory; wherein the memory is used for storing a computer program which is loaded and executed by the processor to implement the aforementioned deep learning based tumor target area cloud detection method.
Optionally, the processor includes a plurality of central processing units and a plurality of heterogeneous acceleration units, and the memory includes a plurality of storage units.
A fifth aspect of the present application provides a computer-readable storage medium, having stored therein computer-executable instructions, which when loaded and executed by a processor, implement the aforementioned deep learning based tumor target area cloud detection method.
In the application, the deep learning-based tumor target area cloud detection device comprises a data acquisition module and a detection module. The data acquisition module is used for acquiring a multi-modal medical image to be detected and corresponding radioactive source parameters; the detection module comprises a delineation submodule, which delineates the tumor target area in the multi-modal medical image to be detected using a trained target area delineation model. The target area delineation model is obtained by training a model constructed based on a deep learning algorithm with a training set, wherein the training set comprises multi-modal medical sample images and corresponding sample labels, and each sample label comprises the tumor target area information in the sample image and the corresponding radioactive source parameters. By providing, on the tumor target area cloud detection device, a data acquisition module for acquiring the image to be detected and the corresponding radioactive source parameters and a detection module containing the delineation submodule, the application realizes automatic delineation of the tumor target area and improves the accuracy and efficiency of tumor target area detection.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only embodiments of the present invention; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a schematic structural diagram of a tumor target area cloud detection device based on deep learning according to the present application;
FIG. 2 is a flow chart of a target delineation model training process provided herein;
fig. 3 is a flowchart illustrating a target region delineation model provided in the present application;
fig. 4 is a schematic structural diagram of a specific tumor target area cloud detection device based on deep learning according to the present application;
fig. 5 is a schematic diagram of a deep learning-based tumor target cloud detection system provided in the present application;
fig. 6 is a flowchart of a deep learning-based tumor target region cloud detection method provided by the present application;
fig. 7 is a structural diagram of a tumor target area cloud detection device based on deep learning according to the present application;
fig. 8 is a hardware architecture diagram of a specific tumor target cloud detection device based on deep learning according to the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The existing traditional target region delineation method is based on CT images of the radiotherapy site: features in a given image slice are extracted as prior knowledge, and the target region is delineated manually, layer by layer, over the slice sequence. On the one hand, delineation in current clinical practice is mostly done by hand by radiotherapists; the workload is heavy, efficiency is low, and it is difficult for different physicians to reach consistency in delineation skill, capability and standards. On the other hand, existing automatic auxiliary systems use template matching, so they cannot adapt the delineation result to the image, require parameters such as the patient's height and weight to be set manually in advance, cannot effectively reduce workload, work inefficiently, and struggle to guarantee delineation accuracy on new cases, which still require a large amount of manual re-delineation and modification. In view of these technical defects, the application provides a deep learning-based tumor target area cloud detection device: a data acquisition module for acquiring the multi-modal medical image to be detected and the corresponding radioactive source parameters, and a detection module containing a delineation submodule, are provided on the cloud detection device; through these modules, automatic delineation of the tumor target area is realized while the accuracy and efficiency of tumor target area detection are improved.
Fig. 1 is a schematic structural diagram of a tumor target area cloud detection device based on deep learning according to an embodiment of the present application. Referring to fig. 1, the apparatus for detecting a tumor target cloud based on deep learning includes a data acquisition module 01 and a detection module 02, wherein:
the data acquisition module 01 is used for acquiring the multi-modal medical image to be detected and corresponding radioactive source parameters.
In this embodiment, for the same radiotherapy site, when the target region is delineated on images of different modalities, a complex relationship exists between the target regions in those images, which may be caused by factors such as the imaging principle and resolution of each modality. For this reason, detecting multi-modal medical images of the same target yields higher accuracy than detecting a single medical image. In this embodiment the multi-modal medical images are Positron Emission Tomography (PET) images and Computed Tomography (CT) images; the modalities are not limited to PET and CT and may also include ultrasound images and the like, since the steps for delineating the tumor target region are essentially the same across modalities — only during model training must the sample images cover all the modalities used. Because a PET image clearly displays the functional structure of the target site while a CT image clearly displays its physical structure, medical image fusion can combine functional and physical visualization so that the two complement each other; PET and CT images are therefore taken as the example in this embodiment. The radioactive source parameters are parameters of the treatment machine that emits the radioactive rays for radiotherapy, such as the treatment machine type. Since the radiotherapy effect is closely related to the treatment device and its parameters, taking them into account greatly reduces the detection error.
The detection module 02 comprises a delineation submodule 03, and the delineation submodule 03 is used for delineating a target area of a tumor in the multi-modal medical image to be detected by using a trained target area delineation model; the target region delineation model is obtained by training a model constructed based on a deep learning algorithm by using a training set, wherein the training set comprises a multi-modal medical sample image and a corresponding sample label, and the sample label comprises tumor target region information in the multi-modal medical sample image and a corresponding radioactive source parameter.
In this embodiment, the detection module 02 mainly performs delineation of the tumor target region in the multi-modal medical sample image, that is, the PET image to be detected and the CT image to be detected, acquired by the data acquisition module 01, so as to detect the tumor target region. Specifically, the detection module 02 is provided with a delineation submodule 03, and the delineation submodule 03 utilizes a trained target delineation model to delineate the target areas of the tumors in the to-be-detected PET image and the to-be-detected CT image. The target area delineation model is a deep learning algorithm model, can comprehensively consider various factors, and can accurately reflect the mapping relation between the tumor target areas of different modality medical images, so that the tumor target areas of different modality medical images can be accurately positioned.
In this embodiment, the training process of the target delineation model is shown in fig. 2. First, multi-modality medical sample images, i.e., original target-region PET sample images and original target-region CT sample images, are obtained. The tumor target region in each original PET and CT sample image is then delineated manually to obtain the corresponding tumor target region information, and a treatment machine and its parameters are selected; the tumor target region information together with the corresponding treatment machine and parameters serve as the sample label, yielding a training set comprising the original target-region PET sample images, the original target-region CT sample images and the corresponding sample labels. Training the model constructed on the deep learning algorithm with this training set specifically comprises: taking the original target-region PET and CT sample images and the treatment machine and its parameters as input data of the target region delineation model to be trained, training the model, and obtaining the trained model when the training termination condition is reached. The trained deep learning model has learned how to delineate the tumor target region in a PET image to be detected (the PET target region) and in a CT image to be detected (the CT target region) under different treatment machines and parameters, as well as the complex mapping between the two, so that the CT and PET target regions output by the model have high accuracy and accurately correspond to the same radiotherapy site.
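The patent does not disclose the network architecture, loss function or stopping condition. Purely to illustrate the supervised loop described above — inputs paired with manually produced labels, iterated until a termination condition — here is a toy gradient-descent classifier in plain Python; every name, hyperparameter and the choice of logistic regression as a stand-in are assumptions.

```python
import math

def train_classifier(samples, labels, lr=0.5, epochs=200):
    """Toy stand-in for the supervised training loop: logistic regression
    fit by gradient descent on (feature vector, label) pairs, where labels
    play the role of the manually delineated target regions."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):                          # fixed epoch count as the stop condition
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))           # predicted probability
            g = p - y                                # gradient of the log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Binary decision from the learned linear score."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0
```

A production system would replace this with a (3-D) convolutional network trained on the PET/CT volumes, as the patent describes, but the data flow — features in, labels as supervision, iterate to convergence — is the same.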
The delineation submodule 03 comprises a preprocessing unit, an image fusion unit, an identification unit and a target delineation unit. Wherein the preprocessing unit is used for preprocessing the multi-modal medical image; the image fusion unit is used for obtaining a coordinate corresponding relation between the preprocessed multi-modal medical images by utilizing a multi-modal medical image fusion technology and carrying out coordinate matching operation on the multi-modal medical images based on the coordinate corresponding relation; the recognition unit is used for recognizing all target parts in the multi-modal medical image after coordinate matching by using a target detection algorithm based on a convolutional neural network and marking the tumor position in the multi-modal medical image; and the target area delineating unit is used for delineating the tumor target area in the multi-modal medical image by utilizing a three-dimensional convolution neural network based on all target part information and tumor position information. The preprocessing unit, the image fusion unit, the identification unit and the target area delineation unit jointly realize the function of delineating the target areas of the tumors in the to-be-detected PET image and the to-be-detected CT image acquired by the data acquisition module 01, and the specific process is shown in fig. 3.
First, the denoising subunit and the normalization subunit in the preprocessing unit preprocess the PET image to be detected and the CT image to be detected: the denoising subunit denoises the multimodal medical images by a Gaussian filtering method, and the normalization subunit normalizes the denoised multimodal medical images. That is, preprocessing comprises two steps: the denoising subunit denoises the PET and CT images to be detected with a Gaussian filter, and the normalization subunit then normalizes the denoised PET and CT images. Gaussian filtering is widely used for noise reduction in image processing; it is a weighted-averaging process over the whole image in which the value of each pixel is obtained as a weighted average of that pixel and the other pixels in its neighborhood. When a Gaussian filter smooths a neighborhood, pixels at different positions are given different weights, which preserves the overall gray-scale distribution characteristics of the image while smoothing it. Besides Gaussian filtering, common filtering algorithms such as mean filtering and median filtering can also be used to denoise the PET and CT images; however, mean filtering is susceptible to noise interference and can only attenuate rather than eliminate noise, while median filtering, being less sensitive to noise, removes salt-and-pepper noise well but easily causes discontinuity in the image. A suitable filtering method may be selected according to the service requirements, which is not limited in this embodiment.
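The two preprocessing steps above can be sketched in plain Python for a 1-D signal (a real system would apply a 2-D/3-D kernel via an image library; kernel radius and sigma defaults here are illustrative choices, not values from the patent):

```python
import math

def gaussian_kernel(sigma=1.0, radius=2):
    """Discrete, normalized 1-D Gaussian kernel."""
    raw = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(raw)
    return [v / s for v in raw]

def gaussian_smooth(signal, sigma=1.0, radius=2):
    """Weighted neighborhood average with Gaussian weights (edges clamped),
    i.e. the per-pixel weighted averaging described in the text."""
    k = gaussian_kernel(sigma, radius)
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(k):
            idx = min(max(i + j - radius, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def min_max_normalize(signal):
    """Rescale intensities to [0, 1] after denoising."""
    lo, hi = min(signal), max(signal)
    if hi == lo:
        return [0.0 for _ in signal]
    return [(v - lo) / (hi - lo) for v in signal]
```

Note that smoothing a constant region leaves it unchanged (the kernel weights sum to one), which is exactly the gray-scale-preserving property the text attributes to Gaussian filtering.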
Although the PET image clearly displays the functional structure of the target portion and the CT image clearly displays the physical structure of the target portion, the two display different structures, and the coordinates of the same portion may deviate from each other. Therefore, a multi-modal medical image fusion technique is used to correct the coordinate correspondence between the PET image to be detected and the CT image to be detected output by the preprocessing unit, so as to obtain a coordinate-matched PET image to be detected and CT image to be detected, and to reduce position deviations that may occur in the subsequent delineation process. A specific implementation of the multi-modal medical image fusion technology can be feature-based BP neural network multi-modal image fusion.
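Once the fusion step has estimated a coordinate correspondence, applying it reduces to an affine mapping between voxel coordinate systems. The sketch below (NumPy, with a hypothetical rigid scale-and-offset transform; the application leaves the learned mapping itself unspecified) shows the coordinate matching operation:

```python
import numpy as np

def map_pet_to_ct(pet_coords: np.ndarray, scale: np.ndarray, offset: np.ndarray) -> np.ndarray:
    """Apply a per-axis affine coordinate correspondence: ct = pet * scale + offset.

    A real fusion step would first estimate this mapping (e.g. with the
    feature-based BP neural network mentioned above); here it is given.
    """
    return pet_coords * scale + offset

# Hypothetical mapping: PET voxels twice as coarse as CT in-plane,
# with a (5, 5, 0) translation between the two volumes.
pet_points = np.array([[10.0, 20.0, 3.0], [15.0, 25.0, 3.0]])
ct_points = map_pet_to_ct(pet_points,
                          scale=np.array([2.0, 2.0, 1.0]),
                          offset=np.array([5.0, 5.0, 0.0]))
```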
The identification unit is mainly used for feature identification and comprises an extraction subunit, a classification subunit, a correction subunit and a marking subunit. The extraction subunit is specifically configured to extract a preset number of target candidate regions from each coordinate-matched multi-modal medical image and perform feature extraction on the target candidate regions by using a convolutional neural network; the classification subunit is specifically configured to input the extracted features into support vector machine (SVM) classifiers so as to identify all target portions in the coordinate-matched multi-modal medical images; the correction subunit is specifically configured to correct the position of each identified target-portion candidate frame by using a regressor; the marking subunit is specifically configured to mark the tumor position in the multi-modal medical images by a standardized uptake value (SUV) method. Specifically, the extraction subunit automatically extracts a preset number of target candidate regions from the PET image to be detected and the CT image to be detected, and extracts features of each target candidate region with a convolutional neural network; the extracted features are fed to one SVM classifier per class, each of which judges whether the target portion in the candidate region belongs to its class, so that all target portions in the PET image to be detected and the CT image to be detected are identified. The regressor, i.e. a regression algorithm, is a class of algorithms that explores the relationship between variables using a measure of error; common regression algorithms include the least squares method (linear regression), logistic regression, stepwise regression, multivariate adaptive regression splines, and the like, which is not limited in this embodiment.
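The correction subunit's regressor step is commonly realized with the R-CNN bounding-box regression parameterization, in which predicted deltas (dx, dy, dw, dh) shift the box centre proportionally to its size and rescale its width and height exponentially. The application does not fix this parameterization, so the following is a hedged sketch of one standard choice:

```python
import numpy as np

def refine_box(box, deltas):
    """Refine an (x1, y1, x2, y2) candidate box with regression deltas.

    Standard R-CNN parameterization: (dx, dy) shift the centre in units of
    the box size; (dw, dh) rescale width and height exponentially.
    """
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    cx, cy = x1 + 0.5 * w, y1 + 0.5 * h
    dx, dy, dw, dh = deltas
    cx, cy = cx + dx * w, cy + dy * h
    w, h = w * np.exp(dw), h * np.exp(dh)
    return (cx - 0.5 * w, cy - 0.5 * h, cx + 0.5 * w, cy + 0.5 * h)

# A predicted rightward shift of 10% of the box width, no rescaling.
refined = refine_box((10.0, 10.0, 30.0, 50.0), (0.1, 0.0, 0.0, 0.0))
```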
Meanwhile, the tumor positions in the PET image to be detected and the CT image to be detected are marked by an SUV ≥ 2.5 delineation method (thresholding the standardized uptake value at 2.5), so as to obtain the tumor position information. Finally, the target area delineation unit, based on all target part information and the tumor position information of the PET image to be detected and the CT image to be detected, completes the delineation of the tumor target area by utilizing a three-dimensional convolutional neural network (3D CNN), since the PET image and the CT image are three-dimensional images; the three-dimensional convolutional neural network can better capture sequential information in the images. It should be noted that, because the coordinates in the PET image to be detected and the CT image to be detected have a mapping relationship, in this embodiment the tumor position may be marked and delineated by the three-dimensional convolutional neural network in only one of the two images; on this basis, the tumor target area on the delineated image merely needs to be mapped to the other image.
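The SUV ≥ 2.5 rule can be made concrete as follows. The body-weight-normalized standardized uptake value is SUV = tissue activity concentration × body weight / injected dose, and voxels at or above the 2.5 threshold are marked as tumor. The numbers below (370 MBq injected dose, 70 kg patient) are illustrative only:

```python
import numpy as np

def suv(activity_bq_per_ml: np.ndarray, injected_dose_bq: float, body_weight_g: float) -> np.ndarray:
    """Body-weight SUV: tissue activity concentration * body weight / injected dose."""
    return activity_bq_per_ml * body_weight_g / injected_dose_bq

def tumor_mask(pet_activity, injected_dose_bq, body_weight_g, threshold=2.5):
    """Boolean tumor mask from the SUV >= 2.5 delineation rule."""
    return suv(pet_activity, injected_dose_bq, body_weight_g) >= threshold

# Three example voxels (Bq/mL); dose and weight are illustrative values.
activity = np.array([5_000.0, 20_000.0, 40_000.0])
mask = tumor_mask(activity, injected_dose_bq=370e6, body_weight_g=70e3)
```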
Therefore, the tumor target area cloud detection device based on deep learning in the embodiment of the application comprises a data acquisition module and a detection module, wherein the data acquisition module is used for acquiring multi-modal medical images to be detected and corresponding radioactive source parameters; the detection module comprises a delineation submodule, and the delineation submodule is used for delineating a tumor target area in the multi-modal medical image to be detected by using a trained target area delineation model; the target region delineation model is obtained by training a model constructed based on a deep learning algorithm by using a training set, wherein the training set comprises multi-modal medical sample images and corresponding sample labels, and each sample label comprises tumor target region information in the multi-modal medical sample image and a corresponding radioactive source parameter. By arranging, on the deep learning-based tumor target area cloud detection device, the data acquisition module for acquiring the multi-modal medical images to be detected and the corresponding radioactive source parameters and the detection module comprising the delineation submodule, the application realizes automatic delineation of the tumor target area through the above modules, thereby improving the detection accuracy and efficiency for the tumor target area.
Fig. 4 is a schematic structural diagram of a specific tumor target area cloud detection device based on deep learning according to an embodiment of the present application. Referring to fig. 4, this deep learning-based tumor target area cloud detection device adds a dose calculation submodule on the basis of the above embodiment.
In this embodiment, the dose calculation submodule is configured to calculate the distribution of the absorbed dose of the radiation using an in-vivo dose absorption distribution calculation model. In the course of radiotherapy, under the condition that the shape of the radiation field conforms to that of the target region, the dose at various points in the radiation field can be adjusted as required so that the irradiation dose distribution conforms to the target region. When the high-dose distribution of radiotherapy conforms to the tumor target region, the dose distribution can be controlled to the greatest extent, raising the tumor control rate while reducing damage to normal tissues. Therefore, on the basis of delineating the tumor target area of the multi-modal medical sample image with the target area delineation model, combining the in-vivo absorbed dose distribution allows the target area to be positioned accurately, thereby realizing accurate detection of the tumor target area. Besides the dose calculation submodule, the deep learning-based tumor target area cloud detection device also comprises a model training data interaction module, a network receiving module for images and their radioactive source parameters, and a network sending module for images and in-vivo absorbed dose distribution data. Through the cooperation among these modules, while the tumor target areas in the PET image to be detected and the CT image to be detected are delineated, the in-vivo absorbed dose distribution can further be calculated, so that the radiotherapy effect can be known in advance.
In order to resolve the position correspondence between different images of the multi-modal image and the in-vivo absorbed dose distribution when different parameters of different treatment devices are adopted for treatment, the PET image, the CT image, and the treatment device and its parameters are combined: on the one hand, richer information is obtained to understand the radiotherapy site; on the other hand, the in-vivo absorbed dose distribution can be calculated during treatment. Meanwhile, the in-vivo absorbed dose distribution data output by the in-vivo absorbed dose calculation model has high accuracy, which helps a radiotherapy operator make a radiotherapy plan. More specifically, the dose calculation submodule may be further configured to calculate the distribution of the absorbed dose of the radiation using a single uniform dosimetry model. The single uniform dosimetry model is built on isodose distributions: points with the same absorbed dose in the irradiated area of a body phantom are connected to draw a curve, namely an isodose distribution curve; isodose curves are usually obtained by direct measurement in a uniform water phantom and constitute basic dosimetry data. The isodose distribution curve may be used to characterize the distribution of the absorbed dose of the radiation. Factors that affect the isodose curve include, but are not limited to, the type of radiation, the quality of the radiation, the source-skin distance, and the penumbra size.
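To illustrate how an isodose distribution characterizes absorbed dose, the sketch below builds a dose grid from a toy inverse-square point-source model (which is not the application's single uniform dosimetry model) and extracts the regions enclosed by isodose curves at given percentages of the maximum dose:

```python
import numpy as np

def point_source_dose(shape, source, dose_at_1cm=100.0, voxel_cm=0.5):
    """Dose grid for a single point source with 1/r^2 fall-off (toy model)."""
    ys, xs = np.indices(shape)
    r_cm = np.hypot(ys - source[0], xs - source[1]) * voxel_cm
    r_cm = np.maximum(r_cm, voxel_cm)  # clamp to avoid the singularity at r = 0
    return dose_at_1cm / r_cm ** 2

def isodose_region(dose, level_pct):
    """Region enclosed by the isodose curve at level_pct of the maximum dose."""
    return dose >= dose.max() * level_pct / 100.0

dose = point_source_dose((21, 21), source=(10, 10))
inner = isodose_region(dose, 50)  # area inside the 50% isodose curve
outer = isodose_region(dose, 20)  # area inside the 20% isodose curve
```

Real isodose data are measured in a water phantom rather than computed from a point-source formula; the nesting property (higher-level regions lie inside lower-level ones) holds in either case.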
In this embodiment, a specific implementation process of the tumor target area cloud detection device based on deep learning is as follows. Firstly, the PET image to be detected, the CT image to be detected, and the treatment device and its parameters are received through the network receiving module. Then, the delineation submodule denoises the input PET image to be detected and CT image to be detected by a Gaussian filtering method and performs normalization to obtain images with pixel values in [0, 255]. Next, multi-modal image fusion is completed by a feature-based BP neural network algorithm to obtain the coordinate correspondence between the PET image to be detected and the CT image to be detected, and human organ identification is completed with an R-CNN target detection algorithm: 3000 candidate regions are automatically extracted in each image, features are extracted with the R-CNN network, the extracted features are sent to one SVM classifier per class to judge the target parts, and the positions of the candidate frames are refined with a regressor. Then, the tumor position is marked by the SUV ≥ 2.5 delineation method, and on this basis, the tumor image features automatically extracted by the R-CNN are used to complete the delineation of the tumor target area and the organs at risk with a three-dimensional convolutional neural network. Furthermore, the dose calculation submodule calculates the in-vivo absorbed dose distribution according to the calculation method of the single uniform dosimetry model. Finally, the network sending module outputs the delineated PET image to be detected, the delineated CT image to be detected, and the in-vivo absorbed dose distribution data.
It can be seen that, in the embodiment of the present application, the deep learning-based tumor target area cloud detection device automatically delineates the tumor target areas in the PET image to be detected and the CT image to be detected through the deep learning-based target area delineation model, and simultaneously outputs the in-vivo absorbed dose distribution under the specified treatment device and parameter settings, which is conducive to clearer and more accurate observation of the radiotherapy site and the in-vivo absorbed dose distribution, improving tumor detection efficiency and the consistency of target area delineation.
Fig. 5 is a schematic view of a deep learning-based tumor target area cloud detection system provided by the present application. Referring to fig. 5, the deep learning based tumor target area cloud detection system includes at least one aforementioned tumor target area cloud detection device 11 and several tumor target area detection clients 12.
In this embodiment, the deep learning-based tumor target area cloud detection system is composed of one deep learning-based tumor target area cloud detection device 11 and N tumor target area detection clients 12. Each tumor target area detection client 12 is a workstation computer used by a radiotherapist and is connected to the deep learning-based tumor target area cloud detection device 11 through an Ethernet, 4G/5G or WiFi network. The tumor target area detection client 12 is configured to send the multi-modal medical image to be detected and the corresponding radioactive source parameters to the deep learning-based tumor target area cloud detection device 11, to receive the delineation result and the absorbed dose distribution calculation result returned by the device 11, and to detect the tumor target area in the multi-modal medical image to be detected based on them. Specifically, the tumor target area detection client 12 sends the PET image to be detected, the CT image to be detected, and the corresponding treatment device and its parameters to the deep learning-based tumor target area cloud detection device 11. After receiving the data sent by the client 12, the device 11 delineates the tumor target areas in the PET image to be detected and the CT image to be detected through the delineation submodule, calculates the distribution of the absorbed dose of the radiation through the dose calculation submodule, and returns the tumor target area delineation result and the absorbed dose distribution calculation result to the tumor target area detection client 12.
The tumor target area detection client 12 receives the tumor target area delineation result and the absorbed dose distribution calculation result, and detects the tumor target area in the multi-modal medical image to be detected based on these results. In addition, a secure data transmission communication framework between the tumor target area detection client 12 and the deep learning-based tumor target area cloud detection device 11 is constructed based on the Transport Layer Security (TLS) protocol. That is, the deep learning-based tumor target area cloud detection device 11 and the tumor target area detection client 12 perform data transmission using the TLS protocol, thereby ensuring the confidentiality and integrity of the communication data.
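A minimal sketch of the client-side TLS configuration with Python's standard ssl module follows; the commented-out connection code uses placeholder host/port/payload names, since the application does not specify them:

```python
import ssl

# Default client-side context: certificate verification and hostname checking
# are enabled, giving the confidentiality and integrity guarantees that the
# TLS-based communication framework relies on.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy TLS/SSL versions

# The client would then wrap its socket before sending image data
# (host, port and payload are placeholders):
#
#   import socket
#   with socket.create_connection((host, port)) as sock:
#       with context.wrap_socket(sock, server_hostname=host) as tls_sock:
#           tls_sock.sendall(payload)
```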
Therefore, the deep learning-based tumor target area cloud detection system in the embodiment of the application realizes the functions of a traditional target area delineation system and a radiotherapy planning system. The system receives the PET image, the CT image and the treatment device model for the same radiotherapy site, takes them as input to the trained deep learning model, and automatically outputs the two kinds of delineated target area image data and the calculated in-vivo absorbed dose distribution of the patient; the delineation results and dose distribution are provided for the radiotherapy plan maker to use. The system introduces the PET image, the CT image, the radioactive source model and its parameters into the deep learning model, which improves the delineation accuracy of the deep learning system and the accuracy of the in-vivo absorbed dose distribution calculation result, systematically improves the formulation of radiotherapy plans, and improves the working efficiency of radiotherapy. Meanwhile, the deep learning-based tumor target area cloud detection system adopts a private cloud computing technology: on the one hand, computing power and storage capacity are greatly enhanced; on the other hand, data communication adopts the TLS (Transport Layer Security) protocol, ensuring the security of private data during transmission.
Fig. 6 is a flowchart of a tumor target area cloud detection method based on deep learning according to an embodiment of the present application. Referring to fig. 6, the deep learning-based tumor target area cloud detection method is applied to the aforementioned deep learning-based tumor target area cloud detection device and includes:
s11: and receiving the multi-modal medical image to be detected and corresponding radioactive source parameters sent by the tumor target area detection client.
S12: and delineating the tumor target area in the multi-modal medical image to be detected by utilizing the delineation submodule on the deep learning-based tumor target area cloud detection device.
S13: and sending the delineation result to the tumor target area detection client through a communication network so that the tumor target area detection client can detect the tumor target area in the multi-modal medical image to be detected based on the delineation result.
In this embodiment, first, the multi-modal medical image to be detected and the corresponding radiation source parameters sent by the tumor target area detection client are received. The radiotherapy operator uses the computer software of the tumor target area detection client to send the PET image, the CT image, and the treatment device and its parameters for the same site of the target to be detected to the deep learning-based tumor target area cloud detection device through Ethernet, a mobile network or WiFi. Then, the delineation submodule on the deep learning-based tumor target area cloud detection device delineates the tumor target area in the multi-modal medical image to be detected: the device receives the PET image, the CT image, and the treatment device and its parameters sent by the tumor target area detection client, feeds them to the deep learning-based target area delineation model, and outputs the delineated PET image and CT image after model inference. Finally, the delineation result is sent to the tumor target area detection client through a communication network, so that the tumor target area detection client can detect the tumor target area in the multi-modal medical image to be detected based on the delineation result. The deep learning-based tumor target area cloud detection device returns the delineated PET image and CT image to the tumor target area detection client through Ethernet, the mobile network or WiFi. The radiotherapy operator acquires the delineated PET image and CT image in the computer software of the tumor target area detection client, confirms the result, makes any necessary corrections, and on this basis carries out radiotherapy plan design, verification, treatment scheduling and other work.
With the continuous development of genomics, metabolomics and radiomics, big data support is provided for the precision medicine of tumor patients, but this brings a huge challenge to the data analysis of oncologists. Artificial intelligence, especially deep learning, can process high-dimensional data at scale; in image recognition it can automatically identify and dynamically monitor the target focus, assist clinicians in obtaining more accurate imaging evaluation, improve working efficiency, and reduce workload, and it has important value in tumor diagnosis, recurrence detection and individualized diagnosis and treatment. In addition, in the embodiment of the application, while the delineation submodule of the deep learning-based tumor target area cloud detection device delineates the received PET image and CT image sent by the tumor target area detection client, the dose calculation submodule on the device can also calculate the in-vivo absorbed dose distribution data of the target to be detected under the treatment device and its parameter settings, and the calculation result and the delineation result are returned together through the communication network to the computer software of the tumor target area detection client for the radiotherapy operator to confirm and correct as necessary.
It can be seen that the multi-modal medical image to be detected and the corresponding radioactive source parameters sent by the tumor target area detection client are first received; then the delineation submodule on the deep learning-based tumor target area cloud detection device delineates the tumor target area in the multi-modal medical image to be detected; finally, the delineation result is sent to the tumor target area detection client through the communication network, so that the client can detect the tumor target area in the multi-modal medical image to be detected based on it. The embodiment of the application thereby provides the two kinds of automatically delineated target area image data and the in-vivo absorbed dose distribution for the radiotherapy plan maker to use, improving the detection efficiency and accuracy of the tumor target area.
Further, the embodiment of the application also provides a tumor target area cloud detection device based on deep learning. Fig. 7 is a block diagram illustrating a deep learning based tumor target area cloud detection apparatus 20 according to an exemplary embodiment, which should not be construed as limiting the scope of the application in any way.
Fig. 7 is a schematic structural diagram of a tumor target area cloud detection apparatus 20 based on deep learning according to an embodiment of the present application. The deep learning-based tumor target area cloud detection apparatus 20 may specifically include: at least one processor 21, at least one memory 22, a power supply 23, a communication interface 24, an input/output interface 25, and a communication bus 26. The memory 22 is used for storing a computer program, which is loaded and executed by the processor 21 to implement the relevant steps of the deep learning-based tumor target area cloud detection method disclosed in any of the foregoing embodiments. The processor 21 comprises a plurality of central processing units and a plurality of heterogeneous acceleration units, the memory 22 comprises a plurality of storage units, and the deep learning-based tumor target area cloud detection apparatus may further comprise a plurality of network transmission units, as shown in fig. 8. The numbers of central processing units, heterogeneous acceleration units and storage units range from 1 to n (n > 1), and one or more network transmission units can likewise be configured as required. The deep learning-based tumor target area cloud detection apparatus thus has flexible configuration and good architectural expansibility: the number of corresponding units can be increased or reduced as needed, the large computing power and storage capacity required by artificial intelligence can be supported, and the model computing speed can be improved.
In this embodiment, the power supply 23 is configured to provide a working voltage for each hardware device on the deep learning-based tumor target area cloud detection apparatus 20; the communication interface 24 can create a data transmission channel between the deep learning-based tumor target area cloud detection apparatus 20 and an external device, and the communication protocol it follows may be any communication protocol applicable to the technical scheme of the application, which is not specifically limited herein; the input/output interface 25 is configured to obtain external input data or output data to the outside, and its specific interface type may be selected according to specific application requirements, which is not specifically limited herein.
In addition, the memory 22, as a carrier for resource storage, may be a read-only memory, a random access memory, a magnetic disk or an optical disk, etc.; the resources stored thereon may include an operating system 221, a computer program 222, image data 223, etc., and the storage manner may be transient or permanent.
The operating system 221 is configured to manage and control the hardware devices and the computer program 222 on the deep learning-based tumor target area cloud detection apparatus 20, so as to enable the processor 21 to operate on and process the mass image data 223 in the memory 22; the operating system may be Windows Server, Netware, Unix, Linux, and the like. The computer program 222 may further include, in addition to the computer program for performing the deep learning-based tumor target area cloud detection method disclosed in any of the foregoing embodiments, computer programs for performing other specific tasks. The data 223 may include image data collected by the deep learning-based tumor target area cloud detection apparatus 20.
Further, an embodiment of the present application also discloses a storage medium, in which a computer program is stored, and when the computer program is loaded and executed by a processor, the steps of the tumor target area cloud detection method based on deep learning disclosed in any of the foregoing embodiments are implemented.
The embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The tumor target area cloud detection device, system, method and storage medium based on deep learning provided by the invention are described in detail above, and a specific example is applied in the text to explain the principle and implementation of the invention, and the description of the above embodiment is only used to help understanding the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.
Claims (12)
1. A deep learning-based tumor target area cloud detection device, characterized in that it comprises a data acquisition module and a detection module, wherein:
the data acquisition module is used for acquiring the multi-modal medical image to be detected and corresponding radioactive source parameters;
the detection module comprises a delineation submodule, and the delineation submodule is used for delineating a tumor target area in the multi-modal medical image to be detected by using a trained target area delineation model; the target region delineation model is obtained by training a model constructed based on a deep learning algorithm by using a training set, wherein the training set comprises a multi-modal medical sample image and a corresponding sample label, and the sample label comprises tumor target region information in the multi-modal medical sample image and a corresponding radioactive source parameter.
2. The deep learning based tumor target cloud detection apparatus according to claim 1, wherein the detection module further comprises a dose calculation submodule for calculating a distribution of absorbed dose of the radiation using an in vivo dose absorption distribution calculation model.
3. The deep learning based tumor target cloud detection apparatus according to claim 2, wherein the dose calculation sub-module is further configured to calculate a distribution of absorbed doses of the radiation using a single uniform dosimetry model.
4. The deep learning based tumor target cloud detection apparatus according to any one of claims 1 to 3, wherein the delineation sub-module comprises:
a preprocessing unit for preprocessing the multi-modality medical image;
the image fusion unit is used for obtaining the coordinate corresponding relation between the preprocessed multi-modal medical images by utilizing a multi-modal medical image fusion technology and carrying out coordinate matching operation on the multi-modal medical images based on the coordinate corresponding relation;
the recognition unit is used for recognizing all target parts in the multi-modal medical image after the coordinate matching by using a target detection algorithm based on a convolutional neural network and marking the tumor position in the multi-modal medical image;
and the target area delineating unit is used for delineating the tumor target area in the multi-modal medical image by utilizing a three-dimensional convolution neural network based on all target part information and tumor position information.
5. The deep learning based tumor target cloud detection apparatus according to claim 4, wherein the preprocessing unit comprises a denoising subunit and a normalizing subunit, wherein:
the denoising subunit is specifically configured to perform denoising processing on the multimodal medical image through a gaussian filtering method;
the normalization subunit is specifically configured to perform normalization processing on the denoised multi-modal medical image.
6. The deep learning based tumor target cloud detection apparatus according to claim 4, wherein the identification unit comprises an extraction subunit, a classification subunit, a correction subunit and a marking subunit, wherein:
the extraction subunit is specifically configured to extract a preset number of target candidate regions from the multi-modal medical image after each coordinate matching, and perform feature extraction on the target candidate regions by using a convolutional neural network;
the classification subunit is specifically configured to input the extracted features into a support vector machine classifier to identify all target portions in the multi-modal medical image after coordinate matching;
the correction subunit is specifically configured to correct the position of the identified candidate frame of the target portion by using a regressor;
the marking subunit is specifically configured to mark a tumor position in the multimodal medical image by a standardized uptake value method.
7. A deep learning based tumor target cloud detection system, comprising a plurality of tumor target detection clients and at least one deep learning based tumor target cloud detection apparatus according to any one of claims 1 to 6, wherein:
the tumor target area detection client is used for sending the multi-modal medical image to be detected and the corresponding radioactive source parameters to the deep learning based tumor target area cloud detection device, and for receiving the delineation result and the absorbed dose distribution calculation result returned by the device and detecting, based on them, the tumor target area in the multi-modal medical image to be detected.
8. The deep learning based tumor target area cloud detection system according to claim 7, wherein the secure data transmission communication framework between the tumor target area detection client and the deep learning based tumor target area cloud detection device is constructed based on the Transport Layer Security (TLS) protocol.
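The client side of this secure channel can be sketched with Python's standard `ssl` module. The minimum-version pin and the `ca_bundle` path are hardening assumptions for illustration, not settings specified by the patent:

```python
import ssl

def build_client_context(ca_bundle=None):
    """TLS context for the client's upload channel to the cloud device.

    ca_bundle is a hypothetical path to the cloud service's CA
    certificate; left as None, the platform's default trust store is used.
    """
    ctx = ssl.create_default_context(cafile=ca_bundle)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy versions
    ctx.check_hostname = True                     # defaults, restated
    ctx.verify_mode = ssl.CERT_REQUIRED           # for clarity
    return ctx
```

Wrapping the client socket with this context before transmitting images and radioactive source parameters gives the encrypted, authenticated channel the claim describes.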
9. A deep learning based tumor target cloud detection method applied to the deep learning based tumor target cloud detection device according to any one of claims 1 to 6, comprising:
receiving a multi-modal medical image to be detected and corresponding radioactive source parameters sent by a tumor target area detection client;
delineating the tumor target area in the multi-modal medical image to be detected by using the delineation submodule on the deep learning based tumor target area cloud detection device;
and sending the delineation result to the tumor target area detection client through a communication network so that the tumor target area detection client can detect the tumor target area in the multi-modal medical image to be detected based on the delineation result.
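The receive/delineate/send flow of claim 9 can be sketched as a server-side handler. The JSON schema and its field names (`image`, `radioactive_source`, `delineation`) are a hypothetical wire format, since the patent does not define one:

```python
import json

def handle_detection_request(payload, delineate):
    """Parse the client's request, run the delineation submodule, and
    build the result message to return over the communication network.

    `delineate` stands in for the cloud device's delineation submodule;
    the JSON message layout is an illustrative assumption.
    """
    request = json.loads(payload)
    image = request["image"]                     # multi-modal image data
    source_params = request["radioactive_source"]
    contour = delineate(image, source_params)    # delineation submodule
    return json.dumps({"status": "ok", "delineation": contour})
```

The client then parses the reply and performs its local detection step against the returned delineation.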
10. A deep learning based tumor target area cloud detection device, characterized by comprising a processor and a memory; wherein the memory is configured to store a computer program that is loaded and executed by the processor to implement the deep learning based tumor target area cloud detection method of claim 9.
11. The deep learning based tumor target area cloud detection device of claim 10, wherein the processor comprises a plurality of central processing units and a plurality of heterogeneous acceleration units, and the memory comprises a plurality of memory units.
12. A computer-readable storage medium storing computer-executable instructions which, when loaded and executed by a processor, implement the deep learning based tumor target cloud detection method of claim 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110342538.0A CN113077433B (en) | 2021-03-30 | 2021-03-30 | Deep learning-based tumor target area cloud detection device, system, method and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113077433A true CN113077433A (en) | 2021-07-06 |
CN113077433B CN113077433B (en) | 2023-04-07 |
Family
ID=76611801
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110342538.0A Active CN113077433B (en) | 2021-03-30 | 2021-03-30 | Deep learning-based tumor target area cloud detection device, system, method and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113077433B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114202524A (en) * | 2021-12-10 | 2022-03-18 | 中国人民解放军陆军特色医学中心 | Performance evaluation method and system of multi-modal medical image |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140180065A1 (en) * | 2011-05-11 | 2014-06-26 | The Regents Of The University Of California | Fiduciary markers and methods of placement |
CN104027128A (en) * | 2014-06-23 | 2014-09-10 | 中国科学院合肥物质科学研究院 | Offline dose verification method based on improved CBCT (cone beam computed tomography) images |
CN104036109A (en) * | 2014-03-14 | 2014-09-10 | 上海大图医疗科技有限公司 | Image based system and method for case retrieving, sketching and treatment planning |
CN105893772A (en) * | 2016-04-20 | 2016-08-24 | 上海联影医疗科技有限公司 | Data acquiring method and data acquiring device for radiotherapy plan |
CN107403201A (en) * | 2017-08-11 | 2017-11-28 | 强深智能医疗科技(昆山)有限公司 | Tumour radiotherapy target area and jeopardize that organ is intelligent, automation delineation method |
CN107456278A (en) * | 2016-06-06 | 2017-12-12 | 北京理工大学 | A kind of ESS air navigation aid and system |
CN108171738A (en) * | 2018-01-25 | 2018-06-15 | 北京雅森科技发展有限公司 | Multimodal medical image registration method based on brain function template |
CN109031440A (en) * | 2018-06-04 | 2018-12-18 | 南京航空航天大学 | A kind of gamma radiation imaging method based on deep learning |
CN109949352A (en) * | 2019-03-22 | 2019-06-28 | 邃蓝智能科技(上海)有限公司 | A kind of radiotherapy image Target delineations method based on deep learning and delineate system |
CN110465004A (en) * | 2019-08-02 | 2019-11-19 | 北京全域医疗技术集团有限公司 | A kind of generation method of cloud radiotherapy treatment planning system and radiotherapy treatment planning |
CN112336996A (en) * | 2020-09-30 | 2021-02-09 | 四川大学 | Radiotherapy target area automatic delineation system based on deep neural network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11386557B2 (en) | Systems and methods for segmentation of intra-patient medical images | |
JP7030050B2 (en) | Pseudo-CT generation from MR data using tissue parameter estimation | |
JP6567179B2 (en) | Pseudo CT generation from MR data using feature regression model | |
US11715203B2 (en) | Image processing method and apparatus, server, and storage medium | |
CN106157320B (en) | A kind of image blood vessel segmentation method and device | |
US10149987B2 (en) | Method and system for generating synthetic electron density information for dose calculations based on MRI | |
US9684961B2 (en) | Scan region determining apparatus | |
EP4131160A1 (en) | Image obtaining method and system, image quality determination method and system, and medical image acquisition method and system | |
AU2015312327A1 (en) | Systems and methods for segmenting medical images based on anatomical landmark-based features | |
CN107596578A (en) | The identification and location determining method of alignment mark, imaging device and storage medium | |
US10275895B2 (en) | Mechanism for advanced structure generation and editing | |
US11854232B2 (en) | Systems and methods for patient positioning | |
EP2854946A1 (en) | Elasticity imaging-based methods for improved gating efficiency and dynamic margin adjustment in radiation therapy | |
US11672496B2 (en) | Imaging systems and methods | |
WO2021136505A1 (en) | Imaging systems and methods | |
CN115485019A (en) | Automatically planned radiation-based treatment | |
US11406844B2 (en) | Method and apparatus to derive and utilize virtual volumetric structures for predicting potential collisions when administering therapeutic radiation | |
CN113077433B (en) | Deep learning-based tumor target area cloud detection device, system, method and medium | |
US20230169668A1 (en) | Systems and methods for image registration | |
US20210125330A1 (en) | Systems and methods for imaging | |
CN116168097A (en) | Method, device, equipment and medium for constructing CBCT sketching model and sketching CBCT image | |
US20230342974A1 (en) | Imaging systems and methods |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||