CN110942464A - PET image segmentation method fusing 2-dimensional and 3-dimensional models


Info

Publication number
CN110942464A
Authority
CN
China
Prior art keywords
dimensional
dimensional model
segmentation
pet image
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911085031.0A
Other languages
Chinese (zh)
Inventor
胡海根
沈雷钊
苏一平
管秋
肖杰
周乾伟
陈胜勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT
Priority to CN201911085031.0A
Publication of CN110942464A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007 Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10104 Positron emission tomography [PET]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30096 Tumor; Lesion

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

A PET image segmentation method fusing 2-dimensional and 3-dimensional models, based on pixel-by-pixel classification with fully convolutional neural networks. Because a 3-dimensional model exploits the volumetric nature of 3-dimensional images, it produces a more accurate segmentation result; fusing the 2-dimensional model's segmentation results onto the 3-dimensional segmentation supplements the detail information captured by the 2-dimensional model and further improves segmentation accuracy. The specific steps are: step 1, design or modify 2-dimensional and 3-dimensional network structures for the imaging characteristics of lymphoma; step 2, preprocess the raw data to a size and dimensionality suitable for network learning; step 3, train the 2-dimensional model and the 3-dimensional model separately to obtain their respective prediction results; and step 4, average the two outputs to obtain the final segmentation result. The method improves the accuracy of PET image segmentation and achieves a higher Dice similarity coefficient.

Description

PET image segmentation method fusing 2-dimensional and 3-dimensional models
Technical Field
The invention relates to a PET image segmentation method based on ResUnet that fuses a 2-dimensional model and a 3-dimensional model, and in particular to obtaining a better, more accurate segmentation result by fusing the respective lymphoma prediction results of the 2-dimensional and 3-dimensional models.
Background
Lymphoma is a large and highly heterogeneous group of tumors. Although it most often arises in lymph nodes, the distribution of the lymphatic system makes it a systemic disease that can invade almost any tissue or organ in the body. Automated segmentation of lymphoma from PET images therefore presents a significant challenge.
PET (positron emission tomography) is a relatively advanced clinical imaging technique in the field of nuclear medicine. The general method is to inject into the human body a substance essential to metabolism, such as fluorodeoxyglucose, and then infer metabolic activity from where the substance accumulates, thereby supporting diagnosis. The mechanism is that different tissues of the human body have different metabolic states: highly metabolic malignant tumor tissue metabolizes glucose vigorously and accumulates more of it, and this characteristic is reflected in the image, so PET image segmentation can be realized through computer image recognition.
With the wide application of convolutional neural networks in computer vision, traditional image segmentation methods have gradually fallen behind deep learning methods in both segmentation quality and adaptability. New concepts and methods such as semantic segmentation, instance segmentation and panoptic segmentation have brought a brand-new perspective to image segmentation. These methods are all based on deep convolutional neural networks and segment specific objects in an image by classifying it at the pixel level. Starting from the fully convolutional network (FCN), a variety of architectures such as U-Net, Deeplab, PSPNet and Mask-RCNN have been developed, each with advantages in different fields. U-Net is widely used in medical image segmentation owing to its strong segmentation performance and simple network structure. ResUnet replaces the feature extractor of U-Net with ResNet, which excels at feature extraction, and greatly improves segmentation accuracy. As hardware has developed, GPU memory is no longer the binding constraint on neural network training, which permits training 3-dimensional network models; a 3-dimensional network uses the information in 3-dimensional data more efficiently, but it lacks a pre-trained model, which makes its training process unstable.
Disclosure of Invention
The invention aims to provide an automatic PET image segmentation method that fuses the advantages of a 2-dimensional network and a 3-dimensional network to obtain a better prediction result. Specifically, the 3-dimensional PET data are sliced into 3 sets of 2-dimensional data along the length, width and height directions, and each set is input into a convolutional neural network for training, yielding 3 corresponding models; in addition, the 3-dimensional PET data are input directly into a 3-dimensional convolutional neural network for training, yielding a trained 3-dimensional model. At test time, the test data are sliced along the length, width and height directions and the resulting 3 data sets are input into the corresponding network models, giving 3 prediction results; the 3-dimensional test data are also input directly into the 3-dimensional model, giving its prediction result. The final segmentation result is obtained by mean summation, and by fusing the advantages of the 2-dimensional and 3-dimensional models a more accurate image segmentation prediction is obtained.
The technical scheme adopted by the invention for solving the technical problems is as follows:
a PET image segmentation method fusing 2-dimensional and 3-dimensional models comprises the following steps:
step 1, designing or modifying a 2-dimensional and 3-dimensional network structure according to the characteristics of a PET image;
step 2, preprocessing the original data to a size and a dimension suitable for network learning;
step 3, training and learning by using the 2-dimensional model and the 3-dimensional model respectively to obtain different prediction results;
and step 4, averaging the prediction result of the 2-dimensional model and the prediction result of the 3-dimensional model to obtain a final segmentation result.
Further, in step 1, the configuration of the ResUnet and ResUnet-3D neural network prediction models is as follows:
(1) optimizer: an Adam optimizer is used, which adapts the learning rate to each parameter, updating frequently changing parameters with smaller steps and sparse parameters with larger steps;
(2) activation function: ReLU is selected as the activation function in ResUnet; ReLU maps negative inputs to zero and passes positive inputs through unchanged;
(3) normalization: the 2-dimensional model is normalized with Batch Normalization, with the batch size determined by the image resolution (ranging from 32 to 128); the 3-dimensional model has far more parameters than the 2-dimensional one and only 1 3-dimensional PET image can be input at a time, so the Batch Normalization of the 2-dimensional model is replaced by Group Normalization;
(4) evaluation criterion: the segmentation effect is finally evaluated with the Dice similarity coefficient; the higher the Dice similarity coefficient, the higher the overlap between the prediction result and the real label;
(5) loss function: the Dice similarity coefficient is selected as the loss function. The cross-entropy loss function is severely limited when positive and negative samples are imbalanced; the Dice similarity coefficient as a loss function overcomes this difficulty because it focuses on the intersection and union of the predicted foreground (lymphoma) and the foreground in the real label, avoiding the problem of the loss becoming too small due to the overwhelming number of background pixels (see the sketch after this list);
Dice = 2|P ∩ G| / (|P| + |G|), Loss_Dice = 1 - Dice, where P is the set of predicted foreground pixels and G is the set of foreground pixels in the real label;
(6) other parameters: the model adopts ResNet as the encoder and bilinear interpolation (as in U-Net) for upsampling in the decoder; the number of training rounds is 500, training ends early when the Dice similarity coefficient on the validation set has not improved for 20 rounds, and the learning rate is 0.001.
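As a concrete illustration, a minimal PyTorch sketch of the Dice loss in item (5) follows; the per-sample flattening, the smoothing constant eps and the batch-mean reduction are implementation assumptions, since the text only gives the formula.

```python
import torch

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2|P ∩ G| / (|P| + |G|).
    pred holds sigmoid probabilities, target holds binary labels."""
    pred = pred.reshape(pred.size(0), -1)        # flatten each sample
    target = target.reshape(target.size(0), -1)
    inter = (pred * target).sum(dim=1)           # |P ∩ G|
    sums = pred.sum(dim=1) + target.sum(dim=1)   # |P| + |G|
    dice = (2 * inter + eps) / (sums + eps)      # eps avoids division by zero
    return 1 - dice.mean()
```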
Still further, in step 2, the data processing process includes:
(2.1) cropping the 3-dimensional PET data to a 48 × 96 × 480 shape, so that all data have a uniform size;
(2.2) for the 2-dimensional model: slicing the 3-dimensional data along the width direction gives 2-dimensional slices whose number equals the width and whose size is length × height; slicing along the height direction gives slices whose number equals the height and whose size is length × width; slicing along the length direction gives slices whose number equals the length and whose size is width × height;
and (2.3) normalizing all the 2-dimensional slices to obtain data finally used for training.
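This preprocessing can be sketched in NumPy as follows, assuming the volume is stored as a (length, width, height) array; the naive corner crop and the small epsilon in the normalization are assumptions, since the text does not specify how the 48 × 96 × 480 region is located.

```python
import numpy as np

def preprocess(volume):
    """Steps (2.1)-(2.3): crop, slice along the three axes, normalize."""
    vol = volume[:48, :96, :480]                     # (2.1) naive corner crop
    x = [vol[i, :, :] for i in range(vol.shape[0])]  # 48 slices of 96 x 480
    y = [vol[:, j, :] for j in range(vol.shape[1])]  # 96 slices of 48 x 480
    z = [vol[:, :, k] for k in range(vol.shape[2])]  # 480 slices of 48 x 96
    # (2.3) min-max normalization: X = (X - X.min) / (X.max - X.min)
    norm = lambda s: (s - s.min()) / (s.max() - s.min() + 1e-8)
    return [norm(s) for s in x], [norm(s) for s in y], [norm(s) for s in z]
```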
In step 3, the process is as follows: the preprocessed 2-dimensional PET images and the preprocessed 3-dimensional PET images are input into the 2-dimensional networks and the 3-dimensional network respectively for training, yielding 4 prediction results for each 3-dimensional PET image.
In step 4, the final segmentation result is obtained by averaging the 4 segmentation results (X, Y, Z, 3D) of the 2-dimensional and 3-dimensional models obtained in step 3: (X + Y + Z + 3D)/4.
The invention has the following beneficial effects: predicting along 3 view directions greatly improves the utilization of the 3-dimensional PET image and captures more spatial information. The 3-dimensional model fits and segments 3-dimensional data better than the 2-dimensional model does, while the 2-dimensional model captures fine details that compensate for the 3-dimensional model's shortcomings. Averaging the prediction results of the 3 view directions with the prediction result of the 3-dimensional model combines the strengths of the per-view segmentation models and the 3-dimensional model, yielding higher segmentation accuracy.
Drawings
FIG. 1 is a flow chart of the PET image segmentation method fusing 2-dimensional and 3-dimensional models according to the present invention;
FIG. 2 is a network structure diagram of the ResUnet used in the present invention;
FIG. 3 is a network structure diagram of the ResUnet-3D used in the present invention;
FIG. 4 is a sample view (top view direction) in the example;
FIG. 5 is a sample view (front view direction) in the example;
FIG. 6 is a sample view (left view direction) in the example;
FIG. 7 is a diagram of the prediction effect of the present invention (top view direction);
FIG. 8 is a diagram of the prediction effect of the present invention (front view direction);
FIG. 9 is a diagram of the prediction effect of the present invention (left view direction).
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to fig. 1 to 9, a PET image segmentation method fusing a 2-dimensional model and a 3-dimensional model, the method comprising the steps of:
step 1, designing or modifying a 2-dimensional and a 3-dimensional network structure for the characteristics of PET images, so that they suit the segmentation of lymphoma;
with reference to FIGS. 2 and 3, the ResUnet network and the ResUnet-3D network are modified to adapt them to lymphoma segmentation:
(1.1) when slicing along the height direction, the resulting 2-dimensional slices have low resolution, and too many down-sampling layers would cause information loss; the number of down-sampling layers of the ResUnet and ResUnet-3D networks is therefore reduced to 3 (see the sketch after this list);
(1.2) because positive samples (pixels considered lymphoma) and negative samples (background) are extremely imbalanced in PET images, the loss function should counter this imbalance as much as possible, and a Dice loss function can be used in place of the cross-entropy loss function;
Loss_Dice = 1 - 2|P ∩ G| / (|P| + |G|), where P is the set of predicted lymphoma pixels and G is the set of lymphoma pixels in the real label;
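To illustrate modification (1.1), here is a hedged PyTorch sketch of a ResNet encoder truncated to 3 down-sampling stages (total stride 8); the choice of resnet34, the single-channel stem and the returned feature scales are assumptions rather than the patent's exact architecture.

```python
import torch.nn as nn
from torchvision.models import resnet34

class TruncatedEncoder(nn.Module):
    """ResNet encoder cut down to 3 down-sampling stages so that small
    slices such as 48 x 96 are not over-down-sampled."""
    def __init__(self):
        super().__init__()
        r = resnet34()  # pretrained weights could be loaded here if available
        # PET slices are single-channel, so the stem conv is replaced
        # (an assumption; the text does not state the input channel count)
        self.stem = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False),
            r.bn1, r.relu)               # down-sampling 1: stride /2
        self.pool = r.maxpool            # down-sampling 2: stride /4
        self.layer1 = r.layer1           # keeps stride /4
        self.layer2 = r.layer2           # down-sampling 3: stride /8
        # layer3 and layer4 are dropped to limit information loss

    def forward(self, x):
        f1 = self.stem(x)                # 1/2-resolution skip feature
        f2 = self.layer1(self.pool(f1))  # 1/4
        f3 = self.layer2(f2)             # 1/8 bottleneck
        return f1, f2, f3
```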
step 2, preprocessing the original data to a size and a dimension suitable for network learning;
referring to fig. 4, 5, 6. PET slices in 3 view directions respectively corresponding to the same 3-dimensional PET image;
(2.1) since the 168 × 168-resolution PET images contain many pixels that are irrelevant to segmentation and have value 0, the 3-dimensional PET data are cropped to remove the irrelevant pixels; the size of the cropped 3-dimensional PET image is 48 × 96 × 480;
(2.2) slicing in the height direction cuts the 3-dimensional PET image into 480 2-dimensional pictures of 48 × 96; slicing in the length direction cuts it into 96 2-dimensional pictures of 48 × 480; slicing in the width direction cuts it into 48 2-dimensional pictures of 96 × 480;
(2.3) all the obtained 2-dimensional pictures are normalized; let X be an input picture, then
X = (X - X.min)/(X.max - X.min)
The resulting PET slices are shown in FIG. 4, FIG. 5 and FIG. 6, corresponding to the top, front and side view directions of a slice respectively.
Step 3, the 2-dimensional data of the 3 view directions are each input into a network for training; for the 3-dimensional network, the 3-dimensional PET data are input directly to obtain its prediction result; the 4 prediction results are then mean-summed to obtain the final segmentation result;
referring to fig. 7, fig. 8, fig. 9 correspond to the prediction results of fig. 4, fig. 5, fig. 6, respectively, wherein the red color represents the pixels in the real label that are lymphoma, but the network predicts normal pixels. Green represents the pixels in the network predicted dimensional lymphoma, but normal in the true label. Yellow represents pixels where the true label is lymphoma and the network also predicts fibroid.
The 2-dimensional PET slices of the 3 view directions are input into the modified ResUnet for training, and the 3-dimensional PET data are input into the modified ResUnet-3D for training; the data of 80 patients are randomly selected as the training set and the data of 29 patients as the test set. After passing through the network's encoder and decoder, the output prediction is squashed by a sigmoid function to continuous values between 0 and 1; the Dice loss is computed between the prediction and the label, and the loss is then back-propagated to update the parameters. Training stops when the Dice coefficient on the validation set has not improved for 20 epochs.
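A minimal sketch of this training procedure, assuming a PyTorch-style model and data loaders; the validation helper, the 0.5 binarization threshold inside it and the exact learning rate are assumptions (the description quotes 0.001 in one place and 0.0001 in another), and the criterion is the Dice loss sketched earlier.

```python
import torch

@torch.no_grad()
def validation_dice(model, loader, eps=1e-6):
    """Mean Dice over a validation loader (0.5 threshold is an assumption)."""
    model.eval()
    scores = []
    for img, label in loader:
        p = (torch.sigmoid(model(img)) > 0.5).float()
        inter = (p * label).sum()
        scores.append(((2 * inter + eps) / (p.sum() + label.sum() + eps)).item())
    return sum(scores) / len(scores)

def train(model, train_loader, val_loader, criterion, max_epochs=500, patience=20):
    opt = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=0)
    best, stale = 0.0, 0
    for epoch in range(max_epochs):
        model.train()
        for img, label in train_loader:
            pred = torch.sigmoid(model(img))  # continuous values in (0, 1)
            loss = criterion(pred, label)     # Dice loss between pred and label
            opt.zero_grad()
            loss.backward()                   # back-propagate to update params
            opt.step()
        d = validation_dice(model, val_loader)
        if d > best:
            best, stale = d, 0
        else:
            stale += 1
            if stale >= patience:             # no improvement for 20 epochs
                break
```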
The test data, processed in the same way, are input into the corresponding trained models, yielding 3 prediction results (X, Y, Z) for the same 3-dimensional PET volume plus the prediction (3D) of the 3-dimensional model. These 4 predictions combine the spatial information of the 2-dimensional model in each direction with the 3-dimensional information of the 3-dimensional model; they are fused by taking (X + Y + Z + 3D)/4 as the final prediction output.
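A minimal sketch of this fusion step, assuming the three per-view prediction stacks have already been restacked into probability volumes of the same shape as the 3-dimensional prediction; the final 0.5 binarization threshold is an assumption, as the text only specifies the mean summation.

```python
import numpy as np

def fuse(pred_x, pred_y, pred_z, pred_3d, thresh=0.5):
    """Mean-fuse the three 2-D view predictions with the 3-D prediction:
    (X + Y + Z + 3D) / 4, then binarize into the final mask."""
    fused = (pred_x + pred_y + pred_z + pred_3d) / 4.0
    return (fused > thresh).astype(np.uint8)
```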
In step 3, the ResUnet neural network is trained on the data set to realize lymphoma prediction. Both ResUnet and ResUnet-3D are adopted, and the preprocessed data set with its corresponding labels is used as their input to obtain the final prediction models after training. The network parameters of the ResUnet and ResUnet-3D prediction models are as follows:
(1) optimizer: the Adam optimizer of the PyTorch framework is used, with an initial learning rate of 0.0001 and a weight decay of 0;
(2) activation function: ReLU is selected as the activation function; its gradient is 1 for positive inputs during back-propagation, which mitigates gradient explosion and gradient vanishing;
(3) evaluation criterion: accuracy is generally not used as an evaluation index for medical images, because negative samples are so numerous and positive samples so few that accuracy says little about the quality of the positive-sample segmentation. The Dice similarity coefficient is a common evaluation standard in medicine; it concentrates on the intersection and union of the predicted and real positive samples and cares little about the negative-sample segmentation, so it is used as the evaluation index;
(4) loss function: common segmentation methods adopt cross-entropy as the loss function of the segmentation task, but in this task the positive and negative samples are extremely imbalanced, so cross-entropy yields a very small loss and hence very small updates during back-propagation, while the Dice similarity coefficient stays low. The Dice similarity coefficient as a loss function better captures the overlap between prediction and reality and is easier to optimize. The Dice loss formula is as follows:
Loss_Dice = 1 - 2|P ∩ G| / (|P| + |G|), where P is the set of predicted positive pixels and G is the set of positive pixels in the real label;
(5) normalization: BN (Batch Normalization) is used in ResUnet as the normalization method; it transforms each layer's output toward the same distribution with mean 0 and variance 1, which helps avoid gradient explosion and gradient vanishing. The specific formula of BN is as follows:
Y = a × (X - E[X]) / √(Var[X] + ε) + b
wherein X is the input of the BN layer, E[X] is the expectation of the input feature, Var[X] is its variance, and ε is a small constant added for numerical stability; so as not to reduce the expressive power of the neural network, adjustable parameters a and b are introduced for each neuron and tuned automatically by the network, and Y is the output of the BN layer.
ResUnet-3D instead uses GN (Group Normalization), which normalizes each group of channels to zero mean and unit variance within a sample; its main purpose is to cope with the small batch sizes forced by the large GPU memory footprint of 3-dimensional data.
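In PyTorch terms, the normalization swap amounts to the following; the channel count of 64 and the group count of 8 are illustrative assumptions.

```python
import torch.nn as nn

# 2-D ResUnet: many slices fit in one batch, so BatchNorm statistics are stable.
norm_2d = nn.BatchNorm2d(64)

# 3-D ResUnet-3D: only 1 PET volume fits per batch, so per-batch statistics
# degenerate; GroupNorm normalizes groups of channels within each sample and
# is independent of the batch size.
norm_3d = nn.GroupNorm(num_groups=8, num_channels=64)
```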

Claims (5)

1. A PET image segmentation method fusing 2-dimensional and 3-dimensional models, characterized in that the method comprises the following steps:
step 1, designing or modifying 2-dimensional and 3-dimensional network structures aiming at the characteristics of 3-dimensional PET images and lymphomas;
step 2, preprocessing original 3-dimensional PET data to a size and a dimension suitable for network learning;
step 3, training and learning by using the 2-dimensional model and the 3-dimensional model respectively to obtain different prediction results;
and step 4, averaging the prediction result of the 2-dimensional model and the prediction result of the 3-dimensional model to obtain a final segmentation result.
2. The PET image segmentation method fusing 2-dimensional and 3-dimensional models according to claim 1, characterized in that: in step 1, the network structures of the ResUnet and ResUnet-3D neural network prediction models are as follows:
(1) optimizer: an Adam optimizer is used, which adapts the learning rate to each parameter, updating frequently changing parameters with smaller steps and sparse parameters with larger steps;
(2) activation function: ReLU is selected as the activation function in ResUnet; ReLU maps negative inputs to zero and passes positive inputs through unchanged;
(3) normalization: the 2-dimensional model is normalized with Batch Normalization, with the batch size determined by the image resolution; the 3-dimensional model has far more parameters than the two-dimensional one and only 1 3-dimensional PET image can be input at a time, so the Batch Normalization of the 2-dimensional model is replaced by Group Normalization;
(4) evaluation criterion: the segmentation effect is finally evaluated with the Dice similarity coefficient; the higher the Dice similarity coefficient, the higher the overlap between the prediction result and the real label;
(5) loss function: the Dice similarity coefficient is selected as the loss function; the cross-entropy loss function is severely limited when positive and negative samples are imbalanced, whereas the Dice similarity coefficient as a loss function overcomes this difficulty by focusing on the intersection and union of the predicted lymphoma foreground and the lymphoma foreground in the real label, avoiding the problem of the loss becoming too small due to the overwhelming number of background pixels;
(6) other parameters: the model adopts ResNet as the encoder and bilinear interpolation (as in U-Net) for upsampling in the decoder; the number of training rounds is 500, training ends early when the Dice similarity coefficient on the validation set has not improved for 20 rounds, and the learning rate is 0.001.
3. A PET image segmentation method fusing 2-and 3-dimensional models according to claim 1 or 2, characterized in that: in step 2, the data processing process includes:
(2.1) cropping the 3-dimensional PET data to a 48 × 96 × 480 shape, so that all data have a uniform size;
(2.2) for the 2-dimensional model: slicing the 3-dimensional data along the width direction gives 2-dimensional slices whose number equals the width and whose size is length × height; slicing along the height direction gives slices whose number equals the height and whose size is length × width; slicing along the length direction gives slices whose number equals the length and whose size is width × height;
and (2.3) normalizing all the 2-dimensional slices to obtain data finally used for training.
4. The PET image segmentation method fusing 2-dimensional and 3-dimensional models according to claim 1 or 2, wherein in step 3 the process is as follows: the preprocessed 2-dimensional PET images and the preprocessed 3-dimensional PET images are input into the 2-dimensional networks and the 3-dimensional network respectively for training, yielding 4 prediction results for each 3-dimensional PET image.
5. The PET image segmentation method fusing 2-dimensional and 3-dimensional models according to claim 4, characterized in that: the final segmentation result is obtained by mean summation ((X + Y + Z + 3D)/4) of the segmentation results (X, Y, Z, 3D) of the 2-dimensional and 3-dimensional models obtained in step 3.
CN201911085031.0A 2019-11-08 2019-11-08 PET image segmentation method fusing 2-dimensional and 3-dimensional models Pending CN110942464A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911085031.0A CN110942464A (en) 2019-11-08 2019-11-08 PET image segmentation method fusing 2-dimensional and 3-dimensional models

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911085031.0A CN110942464A (en) 2019-11-08 2019-11-08 PET image segmentation method fusing 2-dimensional and 3-dimensional models

Publications (1)

Publication Number Publication Date
CN110942464A (en) 2020-03-31

Family

ID=69907351

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911085031.0A Pending CN110942464A (en) 2019-11-08 2019-11-08 PET image segmentation method fusing 2-dimensional and 3-dimensional models

Country Status (1)

Country Link
CN (1) CN110942464A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111461130A (en) * 2020-04-10 2020-07-28 视研智能科技(广州)有限公司 High-precision image semantic segmentation algorithm model and segmentation method
CN112465754A (en) * 2020-11-17 2021-03-09 云润大数据服务有限公司 3D medical image segmentation method and device based on layered perception fusion and storage medium
CN113066081A (en) * 2021-04-15 2021-07-02 哈尔滨理工大学 Breast tumor molecular subtype detection method based on three-dimensional MRI (magnetic resonance imaging) image
CN113160208A (en) * 2021-05-07 2021-07-23 西安智诊智能科技有限公司 Liver lesion image segmentation method based on cascade hybrid network
CN117351215A (en) * 2023-12-06 2024-01-05 上海交通大学宁波人工智能研究院 Artificial shoulder joint prosthesis design system and method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109035355A (en) * 2017-07-10 2018-12-18 上海联影医疗科技有限公司 System and method for PET image reconstruction
CN109035255A (en) * 2018-06-27 2018-12-18 东南大学 A kind of sandwich aorta segmentation method in the CT image based on convolutional neural networks
CN109118491A (en) * 2018-07-30 2019-01-01 深圳先进技术研究院 A kind of image partition method based on deep learning, system and electronic equipment
CN109376611A (en) * 2018-09-27 2019-02-22 方玉明 A kind of saliency detection method based on 3D convolutional neural networks
US20190130562A1 (en) * 2017-11-02 2019-05-02 Siemens Healthcare Gmbh 3D Anisotropic Hybrid Network: Transferring Convolutional Features from 2D Images to 3D Anisotropic Volumes
CN110047080A (en) * 2019-03-12 2019-07-23 天津大学 A method of the multi-modal brain tumor image fine segmentation based on V-Net

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109035355A (en) * 2017-07-10 2018-12-18 上海联影医疗科技有限公司 System and method for PET image reconstruction
US20190130562A1 (en) * 2017-11-02 2019-05-02 Siemens Healthcare Gmbh 3D Anisotropic Hybrid Network: Transferring Convolutional Features from 2D Images to 3D Anisotropic Volumes
CN109035255A (en) * 2018-06-27 2018-12-18 东南大学 A kind of sandwich aorta segmentation method in the CT image based on convolutional neural networks
CN109118491A (en) * 2018-07-30 2019-01-01 深圳先进技术研究院 A kind of image partition method based on deep learning, system and electronic equipment
CN109376611A (en) * 2018-09-27 2019-02-22 方玉明 A kind of saliency detection method based on 3D convolutional neural networks
CN110047080A (en) * 2019-03-12 2019-07-23 天津大学 A method of the multi-modal brain tumor image fine segmentation based on V-Net

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WENDONG XU等: "Liver Segmentation in CT based on ResUNet with 3D Probabilistic and Geometric Post Process", 《2019 IEEE 4TH INTERNATIONAL CONFERENCE ON SIGNAL AND IMAGE PROCESSING (ICSIP)》 *
王继伟 et al.: "Clinical application research of liver CT image segmentation based on 3D_ResUnet", China Digital Medicine *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111461130A (en) * 2020-04-10 2020-07-28 视研智能科技(广州)有限公司 High-precision image semantic segmentation algorithm model and segmentation method
CN111461130B (en) * 2020-04-10 2021-02-09 视研智能科技(广州)有限公司 High-precision image semantic segmentation algorithm model and segmentation method
CN112465754A (en) * 2020-11-17 2021-03-09 云润大数据服务有限公司 3D medical image segmentation method and device based on layered perception fusion and storage medium
CN113066081A (en) * 2021-04-15 2021-07-02 哈尔滨理工大学 Breast tumor molecular subtype detection method based on three-dimensional MRI (magnetic resonance imaging) image
CN113160208A (en) * 2021-05-07 2021-07-23 西安智诊智能科技有限公司 Liver lesion image segmentation method based on cascade hybrid network
CN117351215A (en) * 2023-12-06 2024-01-05 上海交通大学宁波人工智能研究院 Artificial shoulder joint prosthesis design system and method
CN117351215B (en) * 2023-12-06 2024-02-23 上海交通大学宁波人工智能研究院 Artificial shoulder joint prosthesis design system and method

Similar Documents

Publication Publication Date Title
CN110942464A (en) PET image segmentation method fusing 2-dimensional and 3-dimensional models
CN111311592B (en) Three-dimensional medical image automatic segmentation method based on deep learning
CN108776969B (en) Breast ultrasound image tumor segmentation method based on full convolution network
CN111798462B (en) Automatic delineation method of nasopharyngeal carcinoma radiotherapy target area based on CT image
CN109584244B (en) Hippocampus segmentation method based on sequence learning
CN111488921B (en) Intelligent analysis system and method for panoramic digital pathological image
CN110889852B (en) Liver segmentation method based on residual error-attention deep neural network
CN111462206B (en) Monocular structure light depth imaging method based on convolutional neural network
US11562491B2 (en) Automatic pancreas CT segmentation method based on a saliency-aware densely connected dilated convolutional neural network
CN111179269B (en) PET image segmentation method based on multi-view and three-dimensional convolution fusion strategy
CN110689543A (en) Improved convolutional neural network brain tumor image segmentation method based on attention mechanism
CN109493346A (en) It is a kind of based on the gastric cancer pathology sectioning image dividing method more lost and device
CN112116605A (en) Pancreas CT image segmentation method based on integrated depth convolution neural network
CN111932529B (en) Image classification and segmentation method, device and system
CN109886929B (en) MRI tumor voxel detection method based on convolutional neural network
CN110942465A (en) ResUnet-based 3-view PET image segmentation method
CN109919915A (en) Retinal fundus images abnormal area detection method and equipment based on deep learning
CN114119525A (en) Method and system for segmenting cell medical image
CN114332098A (en) Carotid artery unstable plaque segmentation method based on multi-sequence magnetic resonance image
CN116664590B (en) Automatic segmentation method and device based on dynamic contrast enhancement magnetic resonance image
CN116091524B (en) Detection and segmentation method for target in complex background
CN117197454A (en) Liver and liver tumor data segmentation method and system
CN117422871A (en) Lightweight brain tumor segmentation method and system based on V-Net
CN116721253A (en) Abdominal CT image multi-organ segmentation method based on deep learning
CN116030043A (en) Multi-mode medical image segmentation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20200331