WO2023124888A1 - Automatic brain region segmentation method and device based on a PET/MR imaging system - Google Patents

Automatic brain region segmentation method and device based on a PET/MR imaging system Download PDF

Info

Publication number
WO2023124888A1
WO2023124888A1 PCT/CN2022/137720 CN2022137720W WO2023124888A1 WO 2023124888 A1 WO2023124888 A1 WO 2023124888A1 CN 2022137720 W CN2022137720 W CN 2022137720W WO 2023124888 A1 WO2023124888 A1 WO 2023124888A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
module
pet
layer
mri
Prior art date
Application number
PCT/CN2022/137720
Other languages
English (en)
French (fr)
Inventor
胡战利
黄振兴
刘涵
郑海荣
梁栋
刘新
Original Assignee
深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳先进技术研究院 filed Critical 深圳先进技术研究院
Publication of WO2023124888A1 publication Critical patent/WO2023124888A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/003Reconstruction from projections, e.g. tomography
    • G06T11/005Specific pre-processing for tomographic reconstruction, e.g. calibration, source positioning, rebinning, scatter correction, retrospective gating
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10104Positron emission tomography [PET]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30016Brain

Definitions

  • the present disclosure relates to medical image segmentation, in particular to an automatic brain region segmentation method and device based on a PET/MR imaging system.
  • Integrated positron emission tomography (PET)/magnetic resonance imaging (MRI) is a new multimodal imaging system that combines PET and MRI into one. It realizes simultaneous acquisition by two different devices in the same space, combining the high-resolution soft-tissue and multi-parameter multifunctional imaging characteristics of the MRI system with the high radiotracer metabolic sensitivity and quantitative data characteristics of the PET system.
  • the accuracy of brain segmentation has a great impact on clinical diagnosis, and is of great value for the diagnosis of cerebrovascular diseases, Alzheimer's disease, epilepsy, Parkinson's disease and neurodegenerative diseases, as well as for neuropsychiatric drug research and brain function research.
  • the main purpose of the present invention is to propose a method or device for automatic brain region segmentation based on a PET/MRI imaging system that can preserve individual specificity, combines the respective advantages of PET images and MRI images, and helps improve the overall segmentation accuracy.
  • the automatic brain region segmentation method based on a PET/MRI imaging system proposed by the present invention comprises the following steps:
  • the Unet model is a 7-layer Unet model
  • the 7-layer Unet model includes 14 convolution modules; in the first 6 convolution modules, a downsampling module is placed after each convolution module; from the 8th convolution module to the 13th convolution module, an upsampling module is placed before each convolution module;
  • the down-sampling module uses maximum pooling to compress the input image
  • the upsampling module uses deconvolution to enlarge the input image
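A minimal numpy sketch of the max-pooling compression described above (the function name is illustrative, and the stride-2 behaviour is inferred from the size progression quoted later, 256×192 → 128×96 → …, rather than stated here):

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling with stride 2 on an (H, W) array: each output
    pixel is the maximum of one non-overlapping 2x2 input block."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
pooled = max_pool_2x2(img)      # shape (2, 2): [[5, 7], [13, 15]]
```

On the decoder side, the deconvolution (transposed convolution) plays the inverse role, doubling the spatial size instead of halving it.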
  • in the first 13 convolution modules, each convolution module has two convolutional layers for feature extraction, and a normalization layer and an activation layer are added in turn after each convolutional layer;
  • the 14th convolutional module has only one convolutional layer, which is used to output various brain region segmentation results.
  • the normalization method used by the normalization layer is instance normalization, and the activation layer uses the Leaky ReLU activation function.
  • there are 43 brain region labels, including 42 brain region segmentation labels and 1 background label.
  • the 7-layer Unet model uses the following loss function to measure the effect of each round of model training: the smaller the loss function, the closer the current model's brain region segmentation result for the PET image is to the values given by the MRI image labels;
  • α and β are the weights of the cross-entropy loss function and the Dice loss function respectively, both of which are set to 1; M is the number of brain segmentation categories; N is the total number of pixels on each slice;
  • p_{i,j} is the predicted value of the i-th pixel of the output image on the j-th brain region segmentation category;
  • g_{i,j} is the true value of the i-th pixel of the MRI slice image on the j-th brain region segmentation category.
  • the loss function is optimized using an Adam optimizer.
  • the present invention proposes an automatic brain region segmentation device based on a PET/MR imaging system, the device comprising a preprocessing module and a Unet module;
  • the preprocessing module includes a registration unit, a slice unit, a data normalization unit and a one-hot encoding unit;
  • the registration unit takes the MRI image and PET image of the same person as one sample; after skull stripping and label drawing are performed on the MRI image in each sample, it is used as a template to register the PET image, so that the PET image shares labels with the MRI image;
  • the slicing unit slices the registered images along the transverse plane;
  • the data normalization unit normalizes the MRI slice images and PET slice images;
  • the one-hot encoding unit sets the position of the channel corresponding to the label class to 1 and all other positions to 0;
  • the Unet module uses the slices of the MRI image and the PET image as the input of the Unet model and obtains the brain region segmentation result after feature fusion.
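The one-hot encoding step above can be sketched as follows (the function name and the use of numpy are illustrative): each pixel's integer label selects one of the 43 channels (42 brain regions plus background), which is set to 1 while every other channel stays 0.

```python
import numpy as np

def one_hot_encode(label_map, num_classes=43):
    """Turn an (H, W) integer label map into a (num_classes, H, W)
    one-hot volume: channel k is 1 exactly where the label equals k."""
    h, w = label_map.shape
    encoded = np.zeros((num_classes, h, w), dtype=np.float32)
    # For each pixel (i, j), set channel label_map[i, j] to 1
    encoded[label_map, np.arange(h)[:, None], np.arange(w)[None, :]] = 1.0
    return encoded

labels = np.array([[0, 1], [42, 0]])   # tiny 2x2 label map
onehot = one_hot_encode(labels)        # shape (43, 2, 2)
```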
  • the Unet module is a 7-layer Unet model
  • the 7-layer Unet model includes 14 convolution modules; in the first 6 convolution modules, a downsampling module is placed after each convolution module; from the 8th convolution module to the 13th convolution module, an upsampling module is placed before each convolution module;
  • the down-sampling module uses maximum pooling to compress the input image
  • the upsampling module uses deconvolution to enlarge the input image
  • in the first 13 convolution modules, each convolution module has two convolutional layers for feature extraction, and a normalization layer and an activation layer are added in turn after each convolutional layer;
  • the 14th convolution module has only one convolutional layer, which is used to output the brain region segmentation results;
  • the normalization method used by the normalization layer is instance normalization, and the activation layer uses the Leaky ReLU activation function.
  • in the device, there are 43 brain region labels, including 42 brain region segmentation labels and 1 background label.
  • the 7-layer Unet model uses the following loss function to measure the effect of each round of model training: the smaller the loss function, the closer the current model's brain region segmentation result for the PET image is to the values given by the MRI image labels;
  • α and β are the weights of the cross-entropy loss function and the Dice loss function respectively, both of which are set to 1; M is the number of brain segmentation categories; N is the total number of pixels on each slice;
  • p_{i,j} is the predicted value of the i-th pixel of the output image on the j-th brain region segmentation category;
  • g_{i,j} is the true value of the i-th pixel of the MRI slice image on the j-th brain region segmentation category.
  • the loss function is optimized using an Adam optimizer.
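The formula images for this loss did not survive extraction, so the sketch below assumes the standard multi-class Dice term plus pixel-wise cross-entropy, matching the variable definitions above (α = β = 1, N pixels, M classes); the function name is illustrative:

```python
import numpy as np

def combined_loss(p, g, alpha=1.0, beta=1.0, eps=1e-7):
    """alpha * Dice loss + beta * cross-entropy loss.

    p, g: (N, M) arrays over N pixels and M classes; p holds predicted
    probabilities and g holds the one-hot ground truth.
    """
    # Dice loss: 1 minus the mean per-class overlap ratio
    overlap = (p * g).sum(axis=0)
    total = p.sum(axis=0) + g.sum(axis=0)
    l_dice = 1.0 - np.mean(2.0 * overlap / (total + eps))
    # Cross-entropy averaged over pixels
    l_ce = -np.mean((g * np.log(p + eps)).sum(axis=1))
    return alpha * l_dice + beta * l_ce

g = np.eye(2)[[0, 1, 0, 1]]                      # 4 pixels, 2 classes
p_perfect = np.where(g > 0.5, 1.0 - 1e-6, 1e-6)  # near-perfect prediction
loss = combined_loss(p_perfect, g)               # close to 0
```

A perfect prediction drives both terms toward zero, which is the sense in which a smaller loss means the PET segmentation is closer to the MRI-derived labels.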
  • the invention preserves individual specificity by processing MRI images into templates; improves the generalization ability of the algorithm by using labels as model input; and improves the overall segmentation accuracy by fusing PET/MR features.
  • Fig. 1 is a schematic diagram of the Unet model in one embodiment of the present disclosure
  • where ① is the input that undergoes two convolutions; ② is the max pooling layer; ③ is the deconvolution; ④ is the output after one convolution;
  • FIG. 2 is a cross-sectional slice of the coronal plane with all class labels, corresponding to the gold standard, in one embodiment of the present disclosure;
  • FIG. 3 is a cross-sectional slice of the coronal plane with all class labels, corresponding to a single-channel MRI input, in one embodiment of the present disclosure;
  • FIG. 4 is a cross-sectional slice of the coronal plane with all class labels, corresponding to a single-channel PET input, in one embodiment of the present disclosure;
  • Fig. 5 is a cross-sectional slice of the coronal plane with all class labels, corresponding to the PET/MR dual input, in one embodiment of the present disclosure.
  • the novel multimodal imaging system used is integrated positron emission tomography (PET)/magnetic resonance imaging (MRI), which combines PET and MRI into one and realizes simultaneous acquisition by two different devices in the same space; it combines the high-resolution soft-tissue and multi-parameter multifunctional imaging characteristics of the MRI system with the high radiotracer metabolic sensitivity and quantitative data characteristics of the PET system.
  • PET positron emission tomography
  • MRI magnetic resonance imaging
  • Such a system can acquire two kinds of images at the same time.
  • the features of the MRI image and the features of the PET image can be fused, which preserves individual difference characteristics while improving the precision and accuracy of PET brain region segmentation.
  • the specific implementation steps are as follows:
  • the MRI image of the same brain is used as a template, and the PET image of the brain is segmented to preserve individual variability.
  • the registered PET images share the same labels as the MRI images. Specifically, when performing label processing in MRI images, 42 labels and 1 background label are set in the brain area, and the registered PET images share these labels.
  • using PET and MR image slices as the input of the 7-layer Unet model for training improves the generalization ability of the model and the accuracy of brain region segmentation.
  • the 7-layer Unet model learns the features of PET molecular imaging function and high-resolution MRI soft tissue on the one hand, and fuses the features of MRI images and PET images on the other hand to output the results of brain region segmentation.
  • the Unet model is designed as a 7-layer Unet model, as shown in FIG. 1 .
  • the left side of the model is used for feature extraction, and the right side is used for fusion features.
  • the 7-layer Unet model uses 14 convolution modules, 6 down-sampling modules and 6 up-sampling modules. Specifically, in the first 6 convolution modules, a downsampling module is set behind each convolution module; from the 8th convolution module to the 13th convolution module, an upsampling module is set in front of each convolution module.
  • the down-sampling module uses maximum pooling to compress the input image; the up-sampling module uses deconvolution to enlarge the input image.
  • each convolutional module has two convolutional layers for feature extraction.
  • a normalization layer is added after each convolutional layer, and the normalization method is instance normalization.
  • an activation layer is added after the normalization layer, and the function of the activation layer selects the Leaky ReLU activation function.
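A numpy sketch of the convolution-layer tail described above (instance normalization followed by Leaky ReLU); the 0.01 negative slope is an assumption, since the source does not state it:

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Instance normalization: each (sample, channel) plane of an
    (N, C, H, W) tensor is normalized with its own mean and variance."""
    mean = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def leaky_relu(x, negative_slope=0.01):
    """Leaky ReLU: identity for positive inputs, a small slope for negatives."""
    return np.where(x >= 0, x, negative_slope * x)

feat = np.random.randn(1, 32, 8, 6)      # stand-in for a conv output
out = leaky_relu(instance_norm(feat))    # normalize, then activate
```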
  • the number of output channels of the convolution module is set to 43, including 42 types of labels and 1 background label.
  • the parameter settings of the 7-layer Unet model and the final 1-layer convolution module are shown in Table 1. A fragment of those settings (kernel size, stride, padding, input channels, output channels):
  • 5th downsampling module: 2×2, 1×1, -, 480, 480; 6th convolution module: 3×3, 1×1, 1×1, 480, 480
  • 6th downsampling module: 2×2, 1×1, -, 480, 480; 7th convolution module: 3×3, 1×1, 1×1, 480, 480
  • 1st upsampling module: 2×1, 2×1, -, 480, 480; 8th convolution module: 3×3, 1×1, 1×1, 960, 480
  • 2nd upsampling module: 2×2, 1×1, -, 480, 480; 9th convolution module: 3×3, 1×1, 1×1, 960, 480
  • 3rd upsampling module: 2×2, 1×1, -, 480, 256; 10th convolution module: 3×3, 1×1, 1×1, 512, 256
  • 4th upsampling module: 2×2, 1×1, -, 256, 128; 11th convolution module: 3×3, 1×1, 1×1, 256, 128
  • the left and right convolution modules of each layer of the 7-layer Unet model each include a 3×3 convolution kernel; the left side is used to extract the two kinds of image features for learning, and the right side is used to fuse the two kinds of image features.
  • the right-side convolution module of the first layer also includes a 1×1 convolution kernel. Since the corresponding labels have been one-hot encoded, the Dice value of each label class can be computed conveniently and quickly, which improves the brain region segmentation speed of the 7-layer Unet model.
  • in Fig. 1, ① is the input that undergoes two convolutions; ② is the max pooling layer; ③ is the deconvolution; ④ is the output after one convolution. On the left side of the 7-layer Unet model, the input of each layer first passes through the two convolution operations of the convolution module for feature extraction, and then through the downsampling processing of the downsampling module.
  • the MRI image and PET image with a size of 256×192 are used as input; after the first-layer convolution module extracts feature information and the downsampling module compresses it, the input of the second layer is obtained, and the image size becomes 128×96.
  • for the second to sixth layers on the left, the resulting image sizes are 64×48, 32×24, 16×12, 8×6 and 4×6 in turn.
  • after the 4×6 image is convolved by the seventh convolution module, the upsampling module enlarges it; upsampling is used to enlarge the image.
  • the enlarged image features and the feature map obtained by the sixth layer on the left are input into the convolution module on the right, yielding the input image of the fifth layer on the right, whose size is 8×6.
  • then, for the fifth to first layers on the right, the resulting image sizes are 16×12, 32×24, 64×48, 128×96 and 256×192 in turn.
  • the convolution operation is performed by the 14th convolution module, and 43 brain region segmentation results are obtained.
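The size progression above can be checked with a few lines of Python; the 2×1 shape of the sixth pooling step is inferred from the quoted 8×6 → 4×6 transition and the 2×1 first upsampling module in Table 1:

```python
def encoder_sizes(h=256, w=192):
    """Trace the feature-map size through the six downsampling steps:
    five 2x2 poolings, then one 2x1 pooling (halves height only)."""
    sizes = [(h, w)]
    for kh, kw in [(2, 2)] * 5 + [(2, 1)]:
        h, w = h // kh, w // kw
        sizes.append((h, w))
    return sizes

sizes = encoder_sizes()
# [(256,192), (128,96), (64,48), (32,24), (16,12), (8,6), (4,6)]
```

The decoder mirrors this list in reverse, ending back at 256×192 before the final 1×1 convolution outputs the 43 channels.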
  • α and β are the weights of the cross-entropy loss function and the Dice loss function respectively, both of which are set to 1; M is the number of brain region segmentation categories, 43; N is the total number of pixels on each slice;
  • p_{i,j} is the predicted value of the i-th pixel of the output image on the j-th brain region segmentation category;
  • g_{i,j} is the true value of the i-th pixel of the MRI slice image on the j-th brain region segmentation category.
  • the loss function is optimized using Adam optimizer.
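A minimal sketch of one Adam update, using the optimizer's common default hyperparameters (β1 = 0.9, β2 = 0.999, ε = 1e-8 are assumptions, since the source gives none):

```python
import numpy as np

def adam_step(theta, grad, state, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient and
    its square, bias-corrected, then a scaled parameter step."""
    m, v, t = state
    t += 1
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, (m, v, t)

# Minimize f(theta) = theta^2 from theta = 1; the gradient is 2 * theta
theta, state = np.array([1.0]), (np.zeros(1), np.zeros(1), 0)
for _ in range(1000):
    theta, state = adam_step(theta, 2 * theta, state)
```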
  • the MRI image and PET image of a case are acquired at the same time; brain region segmentation is performed on the MRI image and the PET image separately using existing techniques, and a brain region segmentation image is obtained by the method of the present invention.
  • Figures 2 to 5 are cross-sectional slices of the coronal plane with all labels;
  • Figure 2 is a schematic diagram of the gold standard;
  • Figure 3 is a schematic diagram of a slice with only the MRI image as input and the corresponding output;
  • Figure 4 is a schematic diagram of a slice with only the PET image as input and the corresponding output;
  • FIG. 5 is a schematic diagram of a slice with the MRI image and PET image as joint inputs and the corresponding output in the present invention.
  • an automatic brain region segmentation device based on a PET/MR imaging system includes a preprocessing module and a Unet module;
  • the preprocessing module includes a registration unit, a slicing unit, a data normalization unit and a one-hot encoding unit;
  • the registration unit takes the MRI image and PET image of the same person as one sample; after skull stripping and label drawing, the MRI image in each sample is used as a template to register the PET image, so that the PET image and the MRI image share labels;
  • the slicing unit slices the registered images along the transverse plane;
  • the data normalization unit normalizes the MRI slice images, PET slice images, and labels;
  • the one-hot encoding unit sets the position of the channel corresponding to the label class to 1 and all other positions to 0;
  • the Unet module uses the slices of the MRI image and the PET image as the input of the Unet model to obtain the brain region segmentation result after feature fusion.
  • the Unet module adopts a 7-layer Unet model;
  • the 7-layer Unet model includes 14 convolution modules. In the first 6 convolution modules, a downsampling module is placed after each convolution module; from the 8th convolution module to the 13th convolution module, an upsampling module is placed before each convolution module. The downsampling module uses max pooling to shrink the input image; the upsampling module uses deconvolution to enlarge the input image. In the first 13 convolution modules, each convolution module has two convolutional layers for feature extraction, and a normalization layer and an activation layer are added in turn after each convolutional layer. The 14th convolution module has only one convolutional layer, which is used to output the segmentation results for each brain region. The normalization method used by the normalization layer is instance normalization, and the activation layer uses the Leaky ReLU activation function. There are 43 brain region labels, including 42 brain region segmentation labels and 1 background label. Specifically, the parameter settings of the 7-layer Unet model and the final convolution module are the same as in Table 1.
  • the 7-layer Unet model uses the following loss function to measure the effect of each round of model training: the smaller the loss function, the closer the current model's brain region segmentation result for the PET image is to the values given by the MRI image labels;
  • α and β are the weights of the cross-entropy loss function and the Dice loss function respectively, both of which are set to 1; M is the number of brain region segmentation categories; N is the total number of pixels on each slice; p_{i,j} is the predicted value of the i-th pixel of the output image on the j-th brain region segmentation category; g_{i,j} is the true value of the i-th pixel of the MRI slice image on the j-th brain region segmentation category.
  • the loss function is optimized using an Adam optimizer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Image Processing (AREA)
  • Nuclear Medicine (AREA)

Abstract

The present invention relates to an automatic brain region segmentation method and device based on a PET/MR imaging system, which fuses the features of MRI images and PET images to improve the precision and accuracy of brain region segmentation. The method takes the MRI image and PET image of the same person as one sample; after skull stripping and label drawing are performed on the MRI image in each sample, it is used as a template to register the PET image, so that the PET image and the MRI image share labels; the registered images are sliced along the transverse plane, the slice images are data-normalized, and the labels are one-hot encoded; a Unet model with two input channels and one output channel is built; the slices of the MRI image and the PET image are used as the input of the Unet model to obtain the brain region segmentation result after feature fusion. By fusing PET/MR dual-modality features, the present invention preserves individual specificity and improves the precision and accuracy of brain region segmentation.

Description

Automatic Brain Region Segmentation Method and Device Based on a PET/MR Imaging System
Technical Field
The present disclosure relates to medical image segmentation, and in particular to an automatic brain region segmentation method and device based on a PET/MR imaging system.
Background
Integrated positron emission tomography (PET)/magnetic resonance imaging (MRI) is a new multimodal imaging system that combines PET and MRI into one. It realizes simultaneous acquisition by two different devices in the same space, combining the high-resolution soft-tissue and multi-parameter multifunctional imaging characteristics of the MRI system with the high radiotracer metabolic sensitivity and quantitative data characteristics of the PET system. The accuracy of brain region segmentation has a major impact on clinical diagnosis and is of great value for the diagnosis of cerebrovascular diseases, Alzheimer's disease, epilepsy, Parkinson's disease and neurodegenerative diseases, as well as for neuropsychiatric drug research and brain function research.
Traditional methods and segmentation tools adopt a standard brain template and therefore ignore individual differences. Most algorithms are designed for a few brain regions or for brain tumor segmentation, and they are not robust when applied to segmenting the whole brain. Because the resolution of PET is far lower than that of MRI, most research on brain region segmentation algorithms currently focuses on MRI, and few studies implement brain region segmentation on PET. However, PET is functional imaging and is oriented toward the clinical detection of cellular activity, so brain region segmentation on PET has very great practical value.
Summary of the Invention
In view of this, the main purpose of the present invention is to propose a method or device for automatic brain region segmentation based on a PET/MRI imaging system that can preserve individual specificity, combines the respective advantages of PET images and MRI images, and helps improve the overall segmentation accuracy.
In a first aspect, the present invention proposes an automatic brain region segmentation method based on a PET/MRI imaging system, the method comprising the following steps:
S100: taking the MRI image and PET image of the same person as one sample; after performing skull stripping and label drawing on the MRI image in each sample, using it as a template to register the PET image, so that the PET image and the MRI image share labels;
S200: slicing the registered images along the transverse plane, performing data normalization on the slice images, and one-hot encoding the labels;
S300: building a Unet model with two input channels;
S400: using the slices of the MRI image and the PET image as the input of the Unet model to obtain the brain region segmentation result after feature fusion.
Preferably, in the method, the Unet model is a 7-layer Unet model;
the 7-layer Unet model includes 14 convolution modules; in the first 6 convolution modules, a downsampling module is placed after each convolution module; from the 8th convolution module to the 13th convolution module, an upsampling module is placed before each convolution module;
the downsampling module uses max pooling to compress the input image;
the upsampling module uses deconvolution to enlarge the input image;
in the first 13 convolution modules, each convolution module has two convolutional layers for feature extraction, and a normalization layer and an activation layer are added in turn after each convolutional layer;
the 14th convolution module has only one convolutional layer and is used to output the segmentation results for all brain region classes.
The normalization method used by the normalization layer is instance normalization, and the activation layer uses the Leaky ReLU activation function.
Preferably, in the method, there are 43 brain region labels, including 42 brain region segmentation labels and 1 background label.
Preferably, in the method, the 7-layer Unet model uses the following loss function to measure the effect of each round of model training; the smaller the loss function, the closer the current model's brain region segmentation result for the PET image is to the values given by the MRI image labels:
Loss = αL_de + βL_ce
L_de = 1 − (1/M)·Σ_{j=1}^{M} [ 2·Σ_{i=1}^{N} p_{i,j}·g_{i,j} / ( Σ_{i=1}^{N} p_{i,j} + Σ_{i=1}^{N} g_{i,j} ) ]
L_ce = −(1/N)·Σ_{i=1}^{N} Σ_{j=1}^{M} g_{i,j}·log(p_{i,j})
In the above formulas:
α and β are the weights of the cross-entropy loss function and the Dice loss function, respectively, and both are set to 1; M is the number of brain region segmentation classes; N is the total number of pixels in each slice;
p_{i,j} is the predicted value of the i-th pixel of the output image for the j-th brain region segmentation class;
g_{i,j} is the ground-truth value of the i-th pixel of the MRI slice image for the j-th brain region segmentation class.
Preferably, in the method, the loss function is optimized using the Adam optimizer.
In a second aspect, the present invention proposes an automatic brain region segmentation device based on a PET/MR imaging system, the device comprising a preprocessing module and a Unet module;
the preprocessing module includes a registration unit, a slicing unit, a data normalization unit and a one-hot encoding unit;
the registration unit takes the MRI image and PET image of the same person as one sample; after skull stripping and label drawing are performed on the MRI image in each sample, it is used as a template to register the PET image, so that the PET image and the MRI image share labels;
the slicing unit slices the registered images along the transverse plane;
the data normalization unit normalizes the MRI slice images and PET slice images;
the one-hot encoding unit sets the position of the channel corresponding to the label class to 1 and all other positions to 0;
the Unet module uses the slices of the MRI image and the PET image as the input of the Unet model to obtain the brain region segmentation result after feature fusion.
Preferably, in the device, the Unet module is a 7-layer Unet model;
the 7-layer Unet model includes 14 convolution modules; in the first 6 convolution modules, a downsampling module is placed after each convolution module; from the 8th convolution module to the 13th convolution module, an upsampling module is placed before each convolution module;
the downsampling module uses max pooling to compress the input image;
the upsampling module uses deconvolution to enlarge the input image;
in the first 13 convolution modules, each convolution module has two convolutional layers for feature extraction, and a normalization layer and an activation layer are added in turn after each convolutional layer;
the 14th convolution module has only one convolutional layer and is used to output the brain region segmentation results;
the normalization method used by the normalization layer is instance normalization, and the activation layer uses the Leaky ReLU activation function.
Preferably, in the device, there are 43 brain region labels, including 42 brain region segmentation labels and 1 background label.
Preferably, in the device, the 7-layer Unet model uses the following loss function to measure the effect of each round of model training; the smaller the loss function, the closer the current model's brain region segmentation result for the PET image is to the values given by the MRI image labels:
Loss = αL_de + βL_ce
L_de = 1 − (1/M)·Σ_{j=1}^{M} [ 2·Σ_{i=1}^{N} p_{i,j}·g_{i,j} / ( Σ_{i=1}^{N} p_{i,j} + Σ_{i=1}^{N} g_{i,j} ) ]
L_ce = −(1/N)·Σ_{i=1}^{N} Σ_{j=1}^{M} g_{i,j}·log(p_{i,j})
In the above formulas:
α and β are the weights of the cross-entropy loss function and the Dice loss function, respectively, and both are set to 1; M is the number of brain region segmentation classes; N is the total number of pixels in each slice;
p_{i,j} is the predicted value of the i-th pixel of the output image for the j-th brain region segmentation class;
g_{i,j} is the ground-truth value of the i-th pixel of the MRI slice image for the j-th brain region segmentation class.
Preferably, in the device, the loss function is optimized using the Adam optimizer.
Compared with the prior art:
The present invention preserves individual specificity by processing MRI images into templates, improves the generalization ability of the algorithm by using labels as model input, and improves the overall segmentation accuracy by fusing PET/MR features.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of the Unet model in one embodiment of the present disclosure;
where ① is the input that undergoes two convolutions; ② is the max pooling layer; ③ is the deconvolution; ④ is the output after one convolution;
Fig. 2 is a cross-sectional slice of the coronal plane with all class labels, corresponding to the gold standard, in one embodiment of the present disclosure;
Fig. 3 is a cross-sectional slice of the coronal plane with all class labels, corresponding to a single-channel MRI input, in one embodiment of the present disclosure;
Fig. 4 is a cross-sectional slice of the coronal plane with all class labels, corresponding to a single-channel PET input, in one embodiment of the present disclosure;
Fig. 5 is a cross-sectional slice of the coronal plane with all class labels, corresponding to the PET/MR dual input, in one embodiment of the present disclosure.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present application.
The terms "comprising" and "having" in the specification and claims of the present application, and any variations thereof, are intended to cover non-exclusive inclusion. For example, a process, method, system, product or device comprising a series of steps or devices is not necessarily limited to those steps or devices clearly listed, but may include other steps or devices that are not clearly listed or that are inherent to such processes, methods, products or devices.
To make the purpose, technical solutions and advantages of the present application clearer, the technical solutions of the present invention are described in detail below with specific embodiments. The following specific embodiments may be combined with one another, and the same or similar concepts or processes may not be repeated in some embodiments.
In one embodiment, the novel multimodal imaging system used is integrated positron emission tomography (PET)/magnetic resonance imaging (MRI), which combines PET and MRI into one and realizes simultaneous acquisition by two different devices in the same space; it combines the high-resolution soft-tissue and multi-parameter multifunctional imaging characteristics of the MRI system with the high radiotracer metabolic sensitivity and quantitative data characteristics of the PET system.
Such a system can acquire two kinds of images at the same time. By using the following method, the features of the MRI image and the features of the PET image can be fused, preserving individual difference characteristics while improving the precision and accuracy of PET brain region segmentation. The specific implementation steps are as follows:
S100: taking the MRI image and PET image of the same person as one sample; after performing skull stripping and label drawing on the MRI image in each sample, using it as a template to register the PET image, so that the PET image and the MRI image share labels;
S200: slicing the registered images along the transverse plane, performing data normalization on the slice images, and one-hot encoding the labels;
S300: building a Unet model with two input channels;
S400: using the slices of the MRI image and the PET image as the input of the Unet model to obtain the brain region segmentation result after feature fusion.
Because brain structure differs from person to person, using a standard brain template would ignore individual specificity. Therefore, in this embodiment, the MRI image of the same brain is used as a template, and brain region segmentation is performed on the PET image of that brain, preserving individual variability. The registered PET image shares the same labels as the MRI image. Specifically, when label processing is performed on the MRI image, 42 labels are set in the brain regions plus 1 background label, and the registered PET image shares these labels. Using the PET and MR image slices as the input of the 7-layer Unet model for training improves the generalization ability of the model and the accuracy of brain region segmentation.
On the one hand, the 7-layer Unet model learns features such as PET molecular functional imaging and high-resolution MRI soft tissue; on the other hand, it fuses the features of the MRI image and the PET image and outputs the brain region segmentation result.
In a preferred embodiment, the Unet model is designed as a 7-layer Unet model, as shown in Fig. 1. The left side of the model is used for feature extraction, and the right side is used for feature fusion. It should be understood that terms such as "left" and "right" are used only for convenience and simplification of description and should not be construed as limiting the present application. The 7-layer Unet model uses 14 convolution modules, 6 downsampling modules and 6 upsampling modules. Specifically, in the first 6 convolution modules, a downsampling module is placed after each convolution module; from the 8th convolution module to the 13th convolution module, an upsampling module is placed before each convolution module. The downsampling module uses max pooling to compress the input image; the upsampling module uses deconvolution to enlarge the input image. In the first 13 convolution modules, each convolution module has two convolutional layers for feature extraction. To speed up neural network training and improve convergence speed and stability, a normalization layer is added after each convolutional layer, and the normalization method is instance normalization. To accelerate convergence and mitigate the vanishing-gradient problem, an activation layer is added after the normalization layer, and the activation layer uses the Leaky ReLU activation function. To compute the Dice loss value of each label class conveniently and quickly, a convolution module is added after the 13th convolution module; this module has only one convolutional layer and outputs each brain region segmentation label as a corresponding channel, with its number of output channels set to 43, including 42 label classes and 1 background label. The parameter settings of the 7-layer Unet model and this single-convolution module are shown in Table 1.
Table 1
Module Kernel size Stride Padding Input channels Output channels
1st convolution module 3×3 1×1 1×1 2 32
1st downsampling module 2×2 1×1 - 32 32
2nd convolution module 3×3 1×1 1×1 32 64
2nd downsampling module 2×2 1×1 - 64 64
3rd convolution module 3×3 1×1 1×1 64 128
3rd downsampling module 2×2 1×1 - 128 128
4th convolution module 3×3 1×1 1×1 128 256
4th downsampling module 2×2 1×1 - 256 256
5th convolution module 3×3 1×1 1×1 256 480
5th downsampling module 2×2 1×1 - 480 480
6th convolution module 3×3 1×1 1×1 480 480
6th downsampling module 2×2 1×1 - 480 480
7th convolution module 3×3 1×1 1×1 480 480
1st upsampling module 2×1 2×1 - 480 480
8th convolution module 3×3 1×1 1×1 960 480
2nd upsampling module 2×2 1×1 - 480 480
9th convolution module 3×3 1×1 1×1 960 480
3rd upsampling module 2×2 1×1 - 480 256
10th convolution module 3×3 1×1 1×1 512 256
4th upsampling module 2×2 1×1 - 256 128
11th convolution module 3×3 1×1 1×1 256 128
5th upsampling module 2×2 1×1 - 128 64
12th convolution module 3×3 1×1 1×1 128 64
6th upsampling module 2×2 1×1 - 64 32
13th convolution module 3×3 1×1 1×1 64 32
14th convolution module 1×1 1×1 - 32 43
As can be seen from the table above, in this embodiment, the left and right convolution modules of each layer of the 7-layer Unet model each include a 3×3 convolution kernel; the left side is used to extract the two kinds of image features for learning, and the right side is used to fuse the two kinds of image features. The right-side convolution module of the first layer also includes a 1×1 convolution kernel; since the corresponding labels have been one-hot encoded, the Dice value of each label class can be computed conveniently and quickly, which improves the brain region segmentation speed of the 7-layer Unet model.
In Fig. 1, ① is the input that undergoes two convolutions; ② is the max pooling layer; ③ is the deconvolution; ④ is the output after one convolution. On the left side of the 7-layer Unet model, the input of each layer first passes through the two convolution operations of the convolution module for feature extraction, and then through the downsampling processing of the downsampling module. With the parameters configured as in Table 1, the MRI image and PET image with a size of 256×192 are used as input; after the first-layer convolution module extracts feature information and the downsampling module compresses it, the input of the second layer is obtained, and the image size becomes 128×96. For the second to sixth layers on the left, after each layer is processed by the convolution module and the downsampling module, the resulting image sizes are 64×48, 32×24, 16×12, 8×6 and 4×6 in turn.
After the 4×6 image is convolved by the seventh convolution module, the upsampling module performs upsampling, which enlarges the image. The enlarged image features and the feature map obtained by the sixth layer on the left are input into the convolution module on the right, yielding the input image of the fifth layer on the right, whose size is 8×6. Next, for the fifth to first layers on the right, after each layer is enlarged by the upsampling module and convolved together with the feature map of the corresponding layer, the resulting image sizes are 16×12, 32×24, 64×48, 128×96 and 256×192 in turn. Finally, the 14th convolution module performs a convolution operation to obtain the 43 brain region segmentation results.
When the 7-layer Unet model is trained, the loss function used to optimize the model is as follows:
Loss = αL_de + βL_ce
L_de = 1 − (1/M)·Σ_{j=1}^{M} [ 2·Σ_{i=1}^{N} p_{i,j}·g_{i,j} / ( Σ_{i=1}^{N} p_{i,j} + Σ_{i=1}^{N} g_{i,j} ) ]
L_ce = −(1/N)·Σ_{i=1}^{N} Σ_{j=1}^{M} g_{i,j}·log(p_{i,j})
In the above formulas:
α and β are the weights of the cross-entropy loss function and the Dice loss function, respectively, and both are set to 1; M is the number of brain region segmentation classes, 43; N is the total number of pixels in each slice;
p_{i,j} is the predicted value of the i-th pixel of the output image for the j-th brain region segmentation class;
g_{i,j} is the ground-truth value of the i-th pixel of the MRI slice image for the j-th brain region segmentation class.
The smaller the loss function, the closer the current model's brain region segmentation result for the PET image is to the values given by the MRI image labels. Preferably, the loss function is optimized using the Adam optimizer.
In one embodiment, the MRI image and PET image of a case are acquired at the same time; brain region segmentation is performed on the MRI image and the PET image separately using existing techniques, and a brain region segmentation image is obtained by the method of the present invention. Figures 2 to 5 are cross-sectional slices of the coronal plane with all labels: Figure 2 is a schematic diagram of the gold standard; Figure 3 is a schematic diagram of a slice with only the MRI image as input and the corresponding output; Figure 4 is a schematic diagram of a slice with only the PET image as input and the corresponding output; Figure 5 is a schematic diagram of a slice with the MRI image and PET image as joint inputs and the corresponding output in the present invention.
In one embodiment, an automatic brain region segmentation device based on a PET/MR imaging system is implemented according to the method of the present invention. The device comprises a preprocessing module and a Unet module; the preprocessing module includes a registration unit, a slicing unit, a data normalization unit and a one-hot encoding unit. The registration unit takes the MRI image and PET image of the same person as one sample; after skull stripping and label drawing, the MRI image in each sample is used as a template to register the PET image, so that the PET image and the MRI image share labels. The slicing unit slices the registered images along the transverse plane. The data normalization unit normalizes the MRI slice images, PET slice images and labels. The one-hot encoding unit sets the position of the channel corresponding to the label class to 1 and all other positions to 0. The Unet module uses the slices of the MRI image and the PET image as the input of the Unet model to obtain the brain region segmentation result after feature fusion.
Preferably, the Unet module adopts a 7-layer Unet model. The 7-layer Unet model includes 14 convolution modules; in the first 6 convolution modules, a downsampling module is placed after each convolution module; from the 8th convolution module to the 13th convolution module, an upsampling module is placed before each convolution module. The downsampling module uses max pooling to shrink the input image; the upsampling module uses deconvolution to enlarge the input image. In the first 13 convolution modules, each convolution module has two convolutional layers for feature extraction, and a normalization layer and an activation layer are added in turn after each convolutional layer. The 14th convolution module has only one convolutional layer, which is used to output the segmentation results for each brain region. The normalization method used by the normalization layer is instance normalization, and the activation layer uses the Leaky ReLU activation function. There are 43 brain region labels: 42 brain region segmentation labels and 1 background label. Specifically, the parameter settings of the 7-layer Unet model and the single-convolution module are the same as in Table 1.
Preferably, in the device, the 7-layer Unet model uses the following loss function to measure the effect of each round of model training; the smaller the loss function, the closer the current model's brain region segmentation result for the PET image is to the values given by the MRI image labels:
Loss = αL_de + βL_ce
L_de = 1 − (1/M)·Σ_{j=1}^{M} [ 2·Σ_{i=1}^{N} p_{i,j}·g_{i,j} / ( Σ_{i=1}^{N} p_{i,j} + Σ_{i=1}^{N} g_{i,j} ) ]
L_ce = −(1/N)·Σ_{i=1}^{N} Σ_{j=1}^{M} g_{i,j}·log(p_{i,j})
In the above formulas: α and β are the weights of the cross-entropy loss function and the Dice loss function, respectively, and both are set to 1; M is the number of brain region segmentation classes; N is the total number of pixels in each slice; p_{i,j} is the predicted value of the i-th pixel of the output image for the j-th brain region segmentation class; g_{i,j} is the ground-truth value of the i-th pixel of the MRI slice image for the j-th brain region segmentation class.
Preferably, in the device, the loss function is optimized using the Adam optimizer.
From the description of the above embodiments, a person skilled in the art can clearly understand that the present disclosure can be implemented by software plus the necessary general-purpose hardware, and of course also by dedicated hardware including application-specific integrated circuits, dedicated CPUs, dedicated memories, dedicated components and the like. In general, any function completed by a computer program can easily be implemented with corresponding hardware, and the specific hardware structure used to implement the same function can also take many forms, for example analog circuits, digital circuits or dedicated circuits. However, for the present disclosure, a software implementation is in most cases the better embodiment.
Although the embodiments of the present invention have been described above with reference to the drawings, the present invention is not limited to the specific embodiments and application fields described above, which are merely illustrative and instructive rather than restrictive. Under the inspiration of this specification and without departing from the scope protected by the claims of the present invention, a person of ordinary skill in the art may also devise many other forms, all of which fall within the protection of the present invention.

Claims (10)

  1. 一种基于PET/MR成像系统的自动脑区分割方法,其特征在于,所述方法包括下述步骤:
    S100、将同一人的MRI图像和PET图像作为一个样本;在将每个样本中的MRI图像进行去颅骨、绘制标签处理后,将其作为模板对PET图像进行配准处理,使得PET图像与MRI图像共用标签;
    S200、对配准后的图像,按照横断面进行切片处理,并对切片图像进行数据归一化处理以及对标签进行独热编码;
    S300、建立一个具有两个输入通道的Unet模型;
    S400、将MRI图像与PET图像的切片作为Unet模型的输入,得到特征融合后的脑区分割结果。
  2. 根据权利要求1所述的方法,其特征在于,所述Unet模型为7层Unet模型;
    所述7层Unet模型包括14个卷积模块;前6个卷积模块中,每个卷积模块后面设置一个下采样模块;从第8个卷积模块到第13个卷积模块,每个卷积模块前面设置一个上采样模块;
    所述下采样模块采用最大池化对输入图像进行缩小处理;
    所述上采样模块采用逆卷积对输入图像进行放大处理;
    前13个卷积模块中,每个卷积模块有两个卷积层用于特征提取,在每个卷积层依次增加归一化层和激活层;
    第14个卷积模块只有一个卷积层,用于输出脑区分割标签;
    所述归一化层采用的归一化方法为实例归一化,所述激活层的函数选择Leaky ReLU激活函数。
  3. 根据权利要求2所述的方法,其特征在于,所述脑区标签为43个,包括42个脑区分割标签和1个背景。
  4. 根据权利要求2所述的方法,其特征在于,所述7层Unet模型采用下述损失函数来衡量每次模型训练的效果,损失函数越小,表明当前模型对PET 图像的脑区分割结果越接近MRI图像标签对应的值;
    Loss=αL de+βL ce
    Figure PCTCN2022137720-appb-100001
    Figure PCTCN2022137720-appb-100002
    In the above formulas:
    α and β are the weights of the Dice loss L_de and the cross-entropy loss L_ce, respectively, and both are set to 1; M is the number of brain region segmentation classes; N is the total number of pixels in each slice;
    p_{i,j} is the predicted value of the i-th pixel of the output image for the j-th brain region segmentation class;
    g_{i,j} is the ground-truth value of the i-th pixel of the MRI slice image for the j-th brain region segmentation class.
  5. The method according to claim 4, characterized in that the loss function is optimized using the Adam optimizer.
  6. An automatic brain region segmentation apparatus based on a PET/MR imaging system, characterized in that the apparatus comprises a preprocessing module and a Unet module;
    the preprocessing module comprises a registration unit, a slicing unit, a data normalization unit, and a one-hot encoding unit;
    the registration unit takes the MRI image and the PET image of the same person as one sample; after skull stripping and label drawing are performed on the MRI image in each sample, it is used as a template to register the PET image, so that the PET image shares its labels with the MRI image;
    the slicing unit slices the registered images along the transverse plane;
    the data normalization unit normalizes the MRI slice images and the PET slice images;
    the one-hot encoding unit sets the position of the channel corresponding to each label class to 1 and all other positions to 0;
    the Unet module takes the MRI and PET slices as the input of the Unet model to obtain a feature-fused brain region segmentation result.
  7. The apparatus according to claim 6, characterized in that the Unet module is a 7-layer Unet model;
    the 7-layer Unet model comprises 14 convolution modules; each of the first 6 convolution modules is followed by a down-sampling module, and each of the 8th through 13th convolution modules is preceded by an up-sampling module;
    the down-sampling module shrinks the input image by max pooling;
    the up-sampling module enlarges the input image by transposed convolution;
    each of the first 13 convolution modules has two convolutional layers for feature extraction, with a normalization layer and an activation layer appended in turn after each convolutional layer;
    the 14th convolution module has only one convolutional layer, which outputs the segmentation result for each brain region class;
    the normalization layer uses instance normalization, and the activation layer uses the Leaky ReLU activation function.
  8. The apparatus according to claim 6, characterized in that there are 43 brain region labels, comprising 42 brain region segmentation labels and 1 background.
  9. The apparatus according to claim 7, characterized in that the 7-layer Unet model uses the following loss function to measure the effect of each round of model training; the smaller the loss function, the closer the current model's brain region segmentation of the PET image is to the ground-truth values given by the corresponding MRI image labels:
    Loss = αL_de + βL_ce
    L_de = 1 − (2/M) Σ_{j=1..M} [ Σ_{i=1..N} p_{i,j} g_{i,j} / ( Σ_{i=1..N} p_{i,j} + Σ_{i=1..N} g_{i,j} ) ]
    L_ce = −(1/N) Σ_{i=1..N} Σ_{j=1..M} g_{i,j} log(p_{i,j})
    In the above formulas:
    α and β are the weights of the Dice loss L_de and the cross-entropy loss L_ce, respectively, and both are set to 1; M is the number of brain region segmentation classes; N is the total number of pixels in each slice;
    p_{i,j} is the predicted value of the i-th pixel of the output image for the j-th brain region segmentation class;
    g_{i,j} is the ground-truth value of the i-th pixel of the MRI slice image for the j-th brain region segmentation class.
  10. The apparatus according to claim 6, characterized in that the loss function is optimized using the Adam optimizer.
PCT/CN2022/137720 2021-12-31 2022-12-08 Automatic brain region segmentation method and apparatus based on a PET/MR imaging system WO2023124888A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111683008.9A CN114463456A (zh) 2021-12-31 2021-12-31 Automatic brain region segmentation method and apparatus based on a PET/MR imaging system
CN202111683008.9 2021-12-31

Publications (1)

Publication Number Publication Date
WO2023124888A1 true WO2023124888A1 (zh) 2023-07-06

Family

ID=81407794

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/137720 WO2023124888A1 (zh) 2021-12-31 2022-12-08 基于pet/mr成像系统的自动脑区分割方法及装置

Country Status (2)

Country Link
CN (1) CN114463456A (zh)
WO (1) WO2023124888A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114463456A (zh) * 2021-12-31 2022-05-10 深圳先进技术研究院 Automatic brain region segmentation method and apparatus based on a PET/MR imaging system
CN115018836A (zh) * 2022-08-08 2022-09-06 四川大学 Automatic epileptic focus segmentation and prediction method, system, and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109949318A (zh) * 2019-03-07 2019-06-28 西安电子科技大学 Epileptic focus segmentation method using a fully convolutional neural network based on multimodal images
AU2020103905A4 (en) * 2020-12-04 2021-02-11 Chongqing Normal University Unsupervised cross-domain self-adaptive medical image segmentation method based on deep adversarial learning
CN112508775A (zh) * 2020-12-10 2021-03-16 深圳先进技术研究院 MRI-PET image modality conversion method and system based on a cycle generative adversarial network
CN113096166A (zh) * 2019-12-17 2021-07-09 上海美杰医疗科技有限公司 Medical image registration method and apparatus
CN113538495A (zh) * 2020-04-17 2021-10-22 成都连心医疗科技有限责任公司 Temporal lobe delineation method, delineation system, computing device, and storage medium based on multimodal images
CN114463456A (zh) * 2021-12-31 2022-05-10 深圳先进技术研究院 Automatic brain region segmentation method and apparatus based on a PET/MR imaging system


Also Published As

Publication number Publication date
CN114463456A (zh) 2022-05-10

Similar Documents

Publication Publication Date Title
WO2023124888A1 (zh) Automatic brain region segmentation method and apparatus based on a PET/MR imaging system
Kohl et al. Adversarial networks for the detection of aggressive prostate cancer
CN111488914B (zh) Alzheimer's disease classification and prediction system based on multi-task learning
Quan et al. An effective convolutional neural network for classifying red blood cells in malaria diseases
Ceritoglu et al. Large deformation diffeomorphic metric mapping registration of reconstructed 3D histological section images and in vivo MR images
Woo et al. Fully automatic segmentation of acute ischemic lesions on diffusion-weighted imaging using convolutional neural networks: comparison with conventional algorithms
CN111429474B (zh) Breast DCE-MRI image lesion segmentation model construction and segmentation method based on hybrid convolution
Wang et al. RP-Net: a 3D convolutional neural network for brain segmentation from magnetic resonance imaging
CN111951288B (zh) Deep learning-based skin cancer lesion segmentation method
WO2023168912A1 (zh) Disease prediction system and apparatus based on multi-relational functional connectivity matrices
CN112164082A (zh) Method for segmenting multimodal MR brain images based on a 3D convolutional neural network
WO2020248898A1 (zh) Image processing method, apparatus, device, and storage medium
CN112348785B (zh) Epileptic focus localization method and system
Rueckert et al. Learning clinically useful information from images: Past, present and future
Yang et al. Large-scale brain functional network integration for discrimination of autism using a 3-D deep learning model
WO2021212715A1 (zh) Schizophrenia classification and recognition method, operation control apparatus, and medical equipment
CN112950644B (zh) Deep learning-based neonatal brain image segmentation method and model construction method
Kang et al. Fusion of brain PET and MRI images using tissue-aware conditional generative adversarial network with joint loss
CN110674773A (zh) Dementia recognition system, apparatus, and storage medium
Todoroki et al. Automatic detection of focal liver lesions in multi-phase CT images using a multi-channel & multi-scale CNN
Zhou Modality-level cross-connection and attentional feature fusion based deep neural network for multi-modal brain tumor segmentation
Basnet et al. A deep dense residual network with reduced parameters for volumetric brain tissue segmentation from MR images
Zhuang et al. APRNet: A 3D anisotropic pyramidal reversible network with multi-modal cross-dimension attention for brain tissue segmentation in MR images
CN111199801B (zh) Construction method and application of a model for recognizing disease types in medical records
Hasegawa et al. Automatic segmentation of liver tumor in multiphase CT images by mask R-CNN

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22914143

Country of ref document: EP

Kind code of ref document: A1