CN113223028A - Multi-modal liver tumor segmentation method based on MR and CT - Google Patents
Multi-modal liver tumor segmentation method based on MR and CT
- Publication number
- CN113223028A (application CN202110493725.9A)
- Authority
- CN
- China
- Prior art keywords
- image sequence
- image
- liver
- modal
- original
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10088—Magnetic resonance imaging [MRI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30056—Liver; Hepatic
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
Abstract
The invention discloses a multi-modal liver tumor segmentation method based on MR and CT. The method first acquires an original abdominal MR image sequence and an original abdominal CT image sequence and preprocesses them to obtain a multi-modal image sequence. A convolutional neural network model and a data set are then constructed, and the model is trained on the training set within the data set to obtain a trained convolutional neural network model. Finally, the test set is input into the trained model to obtain a segmentation result comprising the liver and liver tumors. By segmenting the liver and liver tumors from multi-modal images, the method fuses multi-modal information and improves the accuracy of the final segmentation result; by performing the liver and liver tumor segmentation tasks simultaneously, it eliminates the mutual coupling between the liver segmentation result and the tumor segmentation result.
Description
Technical Field
The invention belongs to the field of medical image processing, and particularly relates to a multi-modal liver tumor segmentation method based on MR and CT.
Background
In recent years, medical imaging technology has developed rapidly and now plays an important role in clinical examination, diagnosis, and surgical planning as a routine examination means. Medical imaging techniques such as X-ray imaging and computed tomography (CT), which are based on X-ray principles, ultrasound imaging (US), which is based on ultrasound reflection, and magnetic resonance imaging (MRI), which is based on magnetic resonance, provide clinicians with richer and more accurate information about the anatomy, structure, and condition of a lesion. However, most doctors still rely on manual reading in clinical diagnosis, which involves a large amount of information, a cumbersome diagnostic process, and low efficiency; computer-based processing and analysis of medical images has therefore become a research hotspot at home and abroad. In liver surgery in particular, the complex anatomical structure of the liver makes surgical plans complicated, difficult, and risky.
In the field of liver tumor surgery in particular, the entire region containing the liver tumor must be excised, which requires the doctor to accurately locate the liver and the tumor from the patient's medical image data beforehand and to formulate a surgical plan according to their characteristics. Current medical image segmentation methods are largely semi-automatic. For example, the active contour method requires manually placing contour points along the edge of the liver tumor in advance to form an initial contour, which the algorithm then fits to the tumor boundary; this depends heavily on the subjective experience and knowledge of the operating doctor, and the actual segmentation effect is not ideal. In addition, traditional machine-learning segmentation methods require manually designing and selecting liver tumor features, which demands highly specialized knowledge of mathematics and pathology and complicates model development. It is worth noting that, to improve tumor segmentation accuracy, the common practice in the field is to segment the liver contour first and then segment the tumor within it, so the tumor segmentation accuracy is severely affected by the liver segmentation accuracy. Moreover, traditional medical image segmentation methods take a single MRI or CT modality as the image source, so the accuracy of the segmentation result may be limited.
Disclosure of Invention
In order to solve the above problems in the prior art, the present invention provides a multi-modal liver tumor segmentation method based on MR and CT, which fuses multi-modal MRI and CT information, performs liver and liver tumor segmentation simultaneously, and improves the accuracy of the result. The technical scheme of the present invention is as follows:
a method of multi-modal liver lesion segmentation based on MR and CT, the method comprising:
s1, acquiring an original abdomen MR image sequence and an original abdomen CT image sequence, and preprocessing the original abdomen MR image sequence and the original abdomen CT image sequence to obtain a multi-modal image sequence;
the preprocessing operation comprises the steps of carrying out bias field correction on the original abdomen MR image sequence to obtain a first MR image sequence, and carrying out contrast adjustment on the original abdomen CT image sequence to obtain a first CT image sequence; resampling and registering the first MR image sequence and the first CT image sequence and connecting channels to obtain a multi-modal image sequence;
s2, constructing a convolutional neural network model and a data set, and training the convolutional neural network model by using a training set in the data set to obtain a trained convolutional neural network model;
the convolutional neural network is a 3D U-Net network structure and comprises an encoding part and a decoding part, wherein the decoding part comprises a first convolution branch and a second convolution branch, the first convolution branch is used for restoring a liver contour, and the second convolution branch is used for restoring a liver tumor contour;
the constructing of the data set specifically comprises: manually labeling the preprocessed multi-modal image sequence, and randomly cutting to obtain a multi-modal image sequence image block set, wherein the multi-modal image sequence image block set comprises a training set and a testing set;
and S3, inputting the test set into the trained neural network model to obtain a liver segmentation result and a liver tumor segmentation result.
Further, the performing bias field correction on the original abdomen MR image sequence to obtain the first MR image sequence includes performing bias field correction on the original abdomen MR image sequence by using a non-parameter non-uniform intensity normalization algorithm.
Further, the contrast adjustment of the original abdomen CT image sequence to obtain the first CT image sequence includes truncating pixel values in the images of the original abdomen CT image sequence.
Further, resampling the first MR image sequence and the first CT image sequence comprises: resampling the first MR image sequence and the first CT image sequence to a spacing of [1, 1, 1] by bilinear interpolation, wherein the bilinear function is specifically:

f(x, y) = [f(Q₁₁)(x₂ − x)(y₂ − y) + f(Q₁₂)(x − x₁)(y₂ − y) + f(Q₂₁)(x₂ − x)(y − y₁) + f(Q₂₂)(x − x₁)(y − y₁)] / [(x₂ − x₁)(y₂ − y₁)]

wherein (x, y) is the image coordinate after resampling, (x₁, y₁) and (x₂, y₂) are the bounding image coordinates before resampling, and f(Q₁₁), f(Q₁₂), f(Q₂₁), f(Q₂₂) are the pixel values at the upper-left, upper-right, lower-left, and lower-right corners of the surrounding cell in the image coordinate system before resampling.
Further, the manually labeling the preprocessed multi-modal image sequence and randomly cropping to obtain a multi-modal image sequence image block set comprises: randomly selecting a center point in the manually labeled multi-modal image sequence, and then cropping out image blocks of the same size centered on that point.
The invention has the beneficial effects that:
(1) the liver and the liver tumor are segmented by using the multi-mode images, multi-mode information is fused, and the accuracy of the final segmentation result is improved;
(2) and meanwhile, the liver and liver tumor segmentation task is carried out, and the mutual coupling relationship between the liver segmentation result and the tumor segmentation is eliminated.
Drawings
FIG. 1 is a schematic flow chart of the multi-modal liver tumor segmentation method based on MR and CT according to the invention;
FIG. 2 is a diagram of a 3D convolutional neural network model architecture of the present invention.
Detailed Description
The technical scheme of the invention is further described by combining the drawings and the embodiment:
the embodiment provides a multi-modal liver tumor segmentation method based on MR and CT, which comprises the following steps:
step 1, acquiring an original abdomen MR image sequence and an original abdomen CT image sequence, and preprocessing the original abdomen MR image sequence and the original abdomen CT image sequence to obtain a multi-modal image sequence.
In an embodiment of the present application, the preprocessing operation includes performing bias field correction on the original abdominal MR image sequence to obtain a first MR image sequence. When an abdominal MRI image sequence is acquired, the magnetic resonance field is inhomogeneous; this field inhomogeneity is an unavoidable effect and causes the intensity range within the same tissue to be non-uniform. To improve the accuracy of subsequent results, bias field correction must therefore be applied to the original abdominal MRI image sequence; the embodiment of the present application adopts the non-parametric non-uniform intensity normalization (N4) algorithm for bias field correction.
Meanwhile, contrast adjustment is performed on the original abdominal CT image sequence to obtain a first CT image sequence. Specifically, in this embodiment the contrast of all abdominal CT image sequences is adjusted by truncating the pixel values of the original abdominal CT image sequence to the target range [-150, 250]: pixels with values greater than 250 are assigned 250, and pixels with values less than -150 are assigned -150. Meanwhile, to prevent gradient explosion during neural network training, the image pixel values are normalized to the range 0-1 according to formula (1):

x' = (x − min) / (max − min)    (1)

wherein x is the original pixel value, x' is the normalized pixel value, min is the minimum image pixel value, and max is the maximum image pixel value.
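As a concrete illustration, the truncation and normalization steps described above can be sketched in a few lines of NumPy. This is a minimal sketch, not the patent's actual implementation; the function name `preprocess_ct` is illustrative.

```python
import numpy as np

def preprocess_ct(ct: np.ndarray) -> np.ndarray:
    """Truncate CT pixel values to [-150, 250], then min-max normalize to [0, 1]."""
    clipped = np.clip(ct, -150, 250)      # values > 250 become 250; values < -150 become -150
    lo, hi = clipped.min(), clipped.max()
    return (clipped - lo) / (hi - lo)     # formula (1): x' = (x - min) / (max - min)

# A tiny 2x2 "slice" with out-of-range values at both ends.
out = preprocess_ct(np.array([[-500.0, 0.0], [100.0, 1000.0]]))
print(out)
```

After clipping, the example values become [-150, 0, 100, 250], so the normalized result spans exactly [0, 1].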
Finally, the first MR image sequence and the first CT image sequence are resampled, registered, and channel-concatenated to obtain a multi-modal image sequence. In the embodiment of the present application, because the abdominal MRI image sequence and the CT image come from different scanners, the spatial resolutions of different image sequences of the same patient differ, so images of different modalities need to be resampled to the same resolution. For the convenience of subsequent image processing, the abdominal MRI image sequence and the CT image are resampled to a spacing of [1, 1, 1] to make the images isotropic, using bilinear interpolation as the sampling function, namely:

f(x, y) = [f(Q₁₁)(x₂ − x)(y₂ − y) + f(Q₁₂)(x − x₁)(y₂ − y) + f(Q₂₁)(x₂ − x)(y − y₁) + f(Q₂₂)(x − x₁)(y − y₁)] / [(x₂ − x₁)(y₂ − y₁)]

wherein (x, y) is the image coordinate after resampling, (x₁, y₁) and (x₂, y₂) are the bounding image coordinates before resampling, and f(Q₁₁), f(Q₁₂), f(Q₂₁), f(Q₂₂) are the pixel values at the upper-left, upper-right, lower-left, and lower-right corners of the surrounding cell in the image coordinate system before resampling.
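The interpolation formula can be checked with a small self-contained sketch in pure Python. The corner naming here follows the description above (q11 upper-left at (x₁, y₁), q12 upper-right, q21 lower-left, q22 lower-right); this is an illustration of standard bilinear interpolation, not code from the patent.

```python
def bilinear(x, y, x1, y1, x2, y2, q11, q12, q21, q22):
    """Bilinear interpolation at (x, y) from the four surrounding pixel values.

    q11: value at upper-left (x1, y1),  q12: upper-right (x2, y1),
    q21: value at lower-left (x1, y2),  q22: lower-right (x2, y2).
    """
    denom = (x2 - x1) * (y2 - y1)
    return (q11 * (x2 - x) * (y2 - y)
          + q12 * (x - x1) * (y2 - y)
          + q21 * (x2 - x) * (y - y1)
          + q22 * (x - x1) * (y - y1)) / denom

# At the center of the unit cell each corner gets weight 0.25,
# so the result is the mean of the four corner values.
val = bilinear(0.5, 0.5, 0, 0, 1, 1, 10.0, 20.0, 30.0, 40.0)
print(val)
```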
Because the abdominal MRI image sequence and the CT image come from different scanners, corresponding points in space are not aligned between the two images, which would confuse the subsequent image segmentation algorithm, so the two images need to be registered. In this embodiment, the 'imregister' programming interface of MATLAB is used: the abdominal MRI image sequence is chosen as the reference image, the CT image as the floating image, similarity transformation as the type of image transformation, and mutual information as the similarity metric. Finally, the registered CT image and the abdominal MRI image are concatenated along the 0th dimension of the image to obtain the abdominal multi-modal image sequence.
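The channel-concatenation step amounts to stacking the two registered, equally sized volumes along a new 0th (modality) dimension. A minimal NumPy sketch, with hypothetical volume shapes:

```python
import numpy as np

# Hypothetical registered MR and CT volumes of identical shape (depth, height, width).
mr = np.zeros((64, 196, 196), dtype=np.float32)
ct = np.ones((64, 196, 196), dtype=np.float32)

# Stack along a new 0th dimension so each voxel carries both modalities.
multimodal = np.stack([mr, ct], axis=0)
print(multimodal.shape)  # (2, 64, 196, 196)
```

The resulting 2-channel volume is what the network's first convolution layer consumes.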
Step 2, constructing a convolutional neural network model and a data set, and training the convolutional neural network model by using a training set in the data set to obtain a trained convolutional neural network model;
referring to fig. 2, the convolutional neural network 3D U-Net network structure comprises an encoding part and a decoding part, wherein the decoding part comprises a first convolution branch and a second convolution branch, the first convolution branch is used for restoring the outline of the liver, and the second convolution branch is used for restoring the outline of the liver tumor.
The network is 4 layers deep, with feature channel numbers of [32, 64, 128, 256]. The numbers of convolution kernels in the decoder are [32, 16, 8, 2], and each of the two prediction branches finally outputs 2 channels, one representing the background and the other the target (tumor or liver). Every convolution module in this embodiment applies a convolution kernel of size 3x3 with stride 1 to its input, followed by batch normalization and a rectified linear (ReLU) activation function.
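The convolution module and the two prediction heads can be sketched in PyTorch. This is a hedged illustration under the description above (3D convolution, stride 1, batch norm, ReLU; two 2-channel heads), not the patent's exact architecture; names such as `conv_block`, `liver_head`, and `tumor_head` are illustrative.

```python
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """One convolution module: 3D conv (stride 1) -> batch norm -> ReLU."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )

# Two decoding branches, each ending in a 2-channel prediction
# (channel 0: background, channel 1: target).
liver_head = nn.Conv3d(32, 2, kernel_size=1)   # first branch: liver contour
tumor_head = nn.Conv3d(32, 2, kernel_size=1)   # second branch: liver tumor contour

x = torch.randn(1, 2, 16, 32, 32)              # (batch, modalities, depth, H, W)
feat = conv_block(2, 32)(x)                    # lift the 2-channel input to 32 features
print(liver_head(feat).shape, tumor_head(feat).shape)
```

Padding of 1 keeps the spatial size unchanged, so both heads emit 2-channel volumes of the same shape as the input grid.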
In the embodiment of the present application, the preprocessed multi-modal image sequence is manually labeled to obtain a labeled multi-modal image sequence in which the tumor region is annotated; the unlabeled multi-modal image sequence and the corresponding labeled sequence are then cropped to obtain paired data sets for model training.
The cropping proceeds as follows: a center point is randomly selected in the multi-modal image sequence, the random selection following a uniform distribution; then, centered on that point, an image block of size [196, 196, 128] and the corresponding labeled image block are cropped out. All image blocks and their corresponding labeled image blocks are divided into a training set and a test set at a ratio of 80%/20%.
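Sampling a uniformly distributed patch position and cropping the image and its label with the same indices can be sketched as follows. A minimal NumPy illustration (sampling the patch start is equivalent to sampling its center); the function name `random_patch` and the tiny shapes are illustrative, not the patent's [196, 196, 128] production size.

```python
import numpy as np

def random_patch(volume, labels, size, seed=None):
    """Crop a patch of `size` at a uniformly random position, identically
    from the image volume and its label volume (paired training data)."""
    rng = np.random.default_rng(seed)
    starts = [int(rng.integers(0, d - s + 1)) for d, s in zip(volume.shape, size)]
    sl = tuple(slice(st, st + s) for st, s in zip(starts, size))
    return volume[sl], labels[sl]

vol = np.arange(8 * 8 * 4, dtype=np.float32).reshape(8, 8, 4)
lab = (vol > 100).astype(np.uint8)
p, q = random_patch(vol, lab, size=(4, 4, 2), seed=0)
print(p.shape, q.shape)
```

Because both volumes are sliced with the same indices, each image patch stays aligned with its label patch.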
During training, a rectified linear (ReLU) function is chosen as the activation function of the neural network, and to stabilize the distribution of activation values and accelerate model convergence, 'kaiming' parameter initialization is used. Adam is selected as the optimizer, the initial learning rate is set to 0.001, and the Dice coefficient is selected as the loss function of the model. The training set is fed into the convolutional neural network model to obtain a liver contour prediction and a tumor prediction; the Dice loss of each prediction against its labeled image block is computed, the two losses are added, the gradient is computed from the total loss by backpropagation, and the weights of the convolutional neural network model are updated from that gradient by the Adam optimizer. The model is trained continuously for 500 epochs.
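The Dice-based loss mentioned above can be written in a few lines. A minimal NumPy sketch of the soft Dice coefficient and its loss (1 − Dice), assuming predictions are probability maps and labels are binary masks; the small epsilon guarding against empty masks is a common convention, not specified in the patent.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-6):
    """Soft Dice coefficient between a probability map and a binary mask."""
    inter = np.sum(pred * target)
    return (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def dice_loss(pred, target):
    """Loss to minimize: 0 at perfect overlap, approaching 1 at no overlap."""
    return 1.0 - dice_coefficient(pred, target)

mask = np.array([1.0, 1.0, 0.0, 0.0])
print(dice_loss(mask, mask))          # perfect overlap -> ~0
print(dice_loss(mask, 1.0 - mask))    # disjoint masks  -> ~1
```

In the two-branch setting described above, the liver-branch loss and the tumor-branch loss would each be computed this way and summed before backpropagation.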
And 3, inputting the test set into the trained neural network model to obtain a liver segmentation result and a liver tumor segmentation result.
In the embodiment of the present application, the test set is input into the neural network model trained in step 2 to obtain the outputs of its two branches, namely the liver contour result and the liver tumor contour result. These are activated with a softmax function to obtain two prediction results, which are then stitched back together in reverse of the cropping performed in step 2, finally yielding an image segmentation result containing the liver and the liver tumor.
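The softmax activation over the 2-channel branch output, followed by a per-voxel argmax to form a binary mask, can be sketched as below. A NumPy illustration with a hypothetical 2x2 patch of logits; the channel convention (0: background, 1: target) follows the description above.

```python
import numpy as np

def softmax(logits, axis=0):
    """Numerically stable softmax along the channel axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical 2-channel logits (channel 0: background, channel 1: target)
# for a tiny 2x2 patch.
logits = np.array([[[0.0, 2.0], [1.0, -1.0]],
                   [[1.0, 0.0], [3.0, 0.0]]])
probs = softmax(logits, axis=0)   # per-voxel class probabilities, summing to 1
mask = probs.argmax(axis=0)       # 1 where the target channel wins
print(mask)
```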
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.
Claims (5)
1. A method of multi-modal liver lesion segmentation based on MR and CT, the method comprising:
s1, acquiring an original abdomen MR image sequence and an original abdomen CT image sequence, and preprocessing the original abdomen MR image sequence and the original abdomen CT image sequence to obtain a multi-modal image sequence;
the preprocessing operation comprises the steps of carrying out bias field correction on the original abdomen MR image sequence to obtain a first MR image sequence, and carrying out contrast adjustment on the original abdomen CT image sequence to obtain a first CT image sequence; resampling and registering the first MR image sequence and the first CT image sequence and connecting channels to obtain a multi-modal image sequence;
s2, constructing a convolutional neural network model and a data set, and training the convolutional neural network model by using a training set in the data set to obtain a trained convolutional neural network model;
the convolutional neural network is a 3D U-Net network structure and comprises an encoding part and a decoding part, wherein the decoding part comprises a first convolution branch and a second convolution branch, the first convolution branch is used for restoring a liver contour, and the second convolution branch is used for restoring a liver tumor contour;
the constructing of the data set specifically comprises: manually labeling the preprocessed multi-modal image sequence, and randomly cutting to obtain a multi-modal image sequence image block set, wherein the multi-modal image sequence image block set comprises a training set and a testing set;
and S3, inputting the test set into the trained neural network model to obtain a liver segmentation result and a liver tumor segmentation result.
2. The method of claim 1, wherein the bias field correcting the original abdominal MR image sequence to obtain the first MR image sequence comprises bias field correcting the original abdominal MR image sequence using a non-parametric non-uniform intensity normalization algorithm.
3. The method of claim 1, wherein the contrast adjustment of the original abdominal CT image sequence to obtain the first CT image sequence comprises truncating pixel values in images of the original abdominal CT image sequence.
4. The method of claim 1, wherein resampling the first MR image sequence and the first CT image sequence comprises: resampling the first MR image sequence and the first CT image sequence to a spacing of [1, 1, 1] by bilinear interpolation, wherein the bilinear function is specifically:

f(x, y) = [f(Q₁₁)(x₂ − x)(y₂ − y) + f(Q₁₂)(x − x₁)(y₂ − y) + f(Q₂₁)(x₂ − x)(y − y₁) + f(Q₂₂)(x − x₁)(y − y₁)] / [(x₂ − x₁)(y₂ − y₁)]

wherein (x, y) is the image coordinate after resampling, (x₁, y₁) and (x₂, y₂) are the bounding image coordinates before resampling, and f(Q₁₁), f(Q₁₂), f(Q₂₁), f(Q₂₂) are the pixel values at the upper-left, upper-right, lower-left, and lower-right corners of the surrounding cell in the image coordinate system before resampling.
5. The method according to claim 1, wherein the manually labeling the preprocessed multi-modal image sequence and performing random cropping to obtain a multi-modal image sequence image block set comprises: randomly selecting a center point in the manually labeled multi-modal image sequence, and then cropping out image blocks of the same size centered on that point.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110493725.9A CN113223028A (en) | 2021-05-07 | 2021-05-07 | Multi-modal liver tumor segmentation method based on MR and CT |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110493725.9A CN113223028A (en) | 2021-05-07 | 2021-05-07 | Multi-modal liver tumor segmentation method based on MR and CT |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113223028A true CN113223028A (en) | 2021-08-06 |
Family
ID=77091209
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110493725.9A Pending CN113223028A (en) | 2021-05-07 | 2021-05-07 | Multi-modal liver tumor segmentation method based on MR and CT |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113223028A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113657503A (en) * | 2021-08-18 | 2021-11-16 | 上海交通大学 | Malignant liver tumor classification method based on multi-modal data fusion |
CN114881848A (en) * | 2022-07-01 | 2022-08-09 | 浙江柏视医疗科技有限公司 | Method for converting multi-sequence MR into CT |
CN116934754A (en) * | 2023-09-18 | 2023-10-24 | 四川大学华西第二医院 | Liver image identification method and device based on graph neural network |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109903292A (en) * | 2019-01-24 | 2019-06-18 | 西安交通大学 | A kind of three-dimensional image segmentation method and system based on full convolutional neural networks |
CN109949309A (en) * | 2019-03-18 | 2019-06-28 | 安徽紫薇帝星数字科技有限公司 | A kind of CT image for liver dividing method based on deep learning |
US20200085382A1 (en) * | 2017-05-30 | 2020-03-19 | Arterys Inc. | Automated lesion detection, segmentation, and longitudinal identification |
US20200126236A1 (en) * | 2018-10-22 | 2020-04-23 | The Board Of Trustees Of The Leland Stanford Junior University | Systems and Methods for Image Segmentation using IOU Loss Functions |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200085382A1 (en) * | 2017-05-30 | 2020-03-19 | Arterys Inc. | Automated lesion detection, segmentation, and longitudinal identification |
US20200126236A1 (en) * | 2018-10-22 | 2020-04-23 | The Board Of Trustees Of The Leland Stanford Junior University | Systems and Methods for Image Segmentation using IOU Loss Functions |
CN109903292A (en) * | 2019-01-24 | 2019-06-18 | 西安交通大学 | A kind of three-dimensional image segmentation method and system based on full convolutional neural networks |
CN109949309A (en) * | 2019-03-18 | 2019-06-28 | 安徽紫薇帝星数字科技有限公司 | A kind of CT image for liver dividing method based on deep learning |
Non-Patent Citations (1)
Title |
---|
Liu Yunpeng et al., "Deep learning combined with radiomics for CT segmentation of liver tumors", Journal of Image and Graphics (中国图象图形学报), vol. 25, no. 10, 31 October 2020 (2020-10-31) * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113657503A (en) * | 2021-08-18 | 2021-11-16 | 上海交通大学 | Malignant liver tumor classification method based on multi-modal data fusion |
CN114881848A (en) * | 2022-07-01 | 2022-08-09 | 浙江柏视医疗科技有限公司 | Method for converting multi-sequence MR into CT |
CN116934754A (en) * | 2023-09-18 | 2023-10-24 | 四川大学华西第二医院 | Liver image identification method and device based on graph neural network |
CN116934754B (en) * | 2023-09-18 | 2023-12-01 | 四川大学华西第二医院 | Liver image identification method and device based on graph neural network |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Torosdagli et al. | Deep geodesic learning for segmentation and anatomical landmarking | |
US20240144495A1 (en) | Method and system for processing multi-modality image | |
US8698795B2 (en) | Interactive image segmentation | |
CN111008984B (en) | Automatic contour line drawing method for normal organ in medical image | |
CN113223028A (en) | Multi-modal liver tumor segmentation method based on MR and CT | |
US8787648B2 (en) | CT surrogate by auto-segmentation of magnetic resonance images | |
US20070109299A1 (en) | Surface-based characteristic path generation | |
CN112150524B (en) | Two-dimensional and three-dimensional medical image registration method and system based on deep learning | |
CN112885453A (en) | Method and system for identifying pathological changes in subsequent medical images | |
Fajar et al. | Reconstructing and resizing 3D images from DICOM files | |
CN112785632B (en) | Cross-modal automatic registration method for DR and DRR images in image-guided radiotherapy based on EPID | |
CN106709920B (en) | Blood vessel extraction method and device | |
CN111754553A (en) | Multi-modal scanning image registration method and device, computer equipment and storage medium | |
CN115830016B (en) | Medical image registration model training method and equipment | |
JP5296981B2 (en) | Automatic registration of medical volume images in modalities using affine transformation | |
CN111127487B (en) | Real-time multi-tissue medical image segmentation method | |
CN114881914A (en) | System and method for determining three-dimensional functional liver segment based on medical image | |
CN110858412B (en) | Heart coronary artery CTA model building method based on image registration | |
CN116091466A (en) | Image analysis method, computer device, and storage medium | |
US20210256741A1 (en) | Region correction apparatus, region correction method, and region correction program | |
Ifty et al. | Implementation of liver segmentation from computed tomography (CT) images using deep learning | |
EP4254350A1 (en) | Determination of illumination parameters in medical image rendering | |
Fontanella et al. | Challenges of building medical image datasets for development of deep learning software in stroke | |
CN112767299B (en) | Multi-mode three-dimensional image registration and fusion method | |
Eichner et al. | MuSIC: Multi-Sequential Interactive Co-Registration for Cancer Imaging Data based on Segmentation Masks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | ||
Effective date of registration: 2021-09-22
Address after: Room 401, Building 1, 38 Yongda Road, Daxing Biomedical Industrial Base, Zhongguancun Science and Technology Park, Daxing District, Beijing 102629
Applicant after: Beijing Precision Diagnosis Medical Technology Co., Ltd.
Address before: Room 102, Block B2, Phase II, Software New Town, Tianguba Road, Yuhua Street Office, High-tech Zone, Xi'an, Shaanxi 710000
Applicant before: Xi'an Zhizhen Intelligent Technology Co., Ltd.