WO2020001217A1 - Convolutional-neural-network-based method for segmenting the dissected aorta in CT images - Google Patents

Convolutional-neural-network-based method for segmenting the dissected aorta in CT images

Info

Publication number
WO2020001217A1
WO2020001217A1 · PCT/CN2019/088835 · CN2019088835W
Authority
WO
WIPO (PCT)
Prior art keywords
image
dimensional
neural network
convolutional neural
aorta
Prior art date
Application number
PCT/CN2019/088835
Other languages
English (en)
French (fr)
Inventor
陈阳
吕天翎
杨冠羽
罗立民
Original Assignee
东南大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 东南大学 filed Critical 东南大学
Publication of WO2020001217A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular

Definitions

  • The invention relates to a method for segmenting the dissected aorta in CT images, and in particular to a convolutional-neural-network-based method for segmenting the dissected aorta in CT images, belonging to the technical fields of computer vision and image segmentation.
  • Aortic dissection refers to a pathological phenomenon in which damage to the aortic intimal layer allows blood to flow between the aortic intima and the aortic wall, forcing the two to separate.
  • The cause is often related to high blood pressure, or to reduced vessel-wall strength caused by injury, heart surgery, or certain other conditions.
  • Although the incidence of aortic dissection is low, its mortality rate is extremely high, and the interval between onset and death is extremely short. Without treatment, half of patients with acute type A aortic dissection die within three days, and more than 10% of patients with type B aortic dissection die within 30 days.
  • The diagnosis of aortic dissection is based mainly on computed tomography angiography (CTA).
  • The morphological characteristics of the dissected aorta, such as the size and location of the primary entry, the diameters of the true and false lumens, and the curvature of the aorta, have important implications for diagnosis, customized treatment planning, and risk assessment. At present, computing these morphological features is still a very challenging problem, and segmenting the dissected aorta in CT images is the first step toward solving it.
  • Existing vessel segmentation algorithms can be divided into four categories: algorithms based on vessel-enhancement filtering, algorithms based on centerline tracking, algorithms based on geometric vessel models, and algorithms based on machine learning.
  • Algorithms based on vessel-enhancement filtering mainly use filters built on the eigenvalues of the Hessian matrix, such as the Frangi filter, to enhance vessel regions, and then apply basic image segmentation algorithms such as thresholding or region growing to obtain the target vessel.
  • Most of these methods can be fully automatic, but because they lack information about the vascular topology, the segmentation results often contain a large number of misclassifications; moreover, vascular lesions such as soft plaque and calcification seriously affect the results.
  • The main feature of centerline-tracking algorithms is that the vessel centerline is extracted before the vessel is segmented, and the vessel region is then expanded outward from the centerline.
  • This type of algorithm expresses the topological structure of vessels well, but usually requires at least centerline points to be marked manually, so it cannot be fully automatic.
  • Methods based on geometric vessel models use geometric primitives such as three-dimensional cylinders to model the vessel, and then optimize the parameters of the geometric model to obtain an accurate segmentation result.
  • The computations of these algorithms are mostly complex, and segmentation is time-consuming.
  • In addition, such algorithms are often sensitive to the initial model and usually need a manually marked initial model to obtain good results.
  • Machine-learning-based methods segment vessels by training statistical learning models such as support vector machines and neural networks. Such methods often have the advantages of fast segmentation and high accuracy.
  • Their disadvantage is that training statistical models requires a large amount of training data, and manually labeling the vessel regions in the training set takes considerable manpower.
  • Algorithms based on the Convolutional Neural Network (CNN) can be classified among the machine-learning-based algorithms above. In recent years, such algorithms have attracted widespread attention in all areas of medical imaging and have achieved remarkable results in image classification, image segmentation, and image registration. Convolutional neural networks developed from ordinary neural networks; the main difference between the two is that convolutional neural networks use convolutional layers as feature extractors, whereas the feature extractors of ordinary neural networks are composed of fully connected layers. In 2014, Long et al. at the University of California, Berkeley proposed the Fully Convolutional Network (FCN), a type of convolutional neural network model now widely used in the field of image segmentation.
  • FCN replaces the fully connected layers in a CNN with convolutional and deconvolutional layers. This change preserves two-dimensional spatial information and enables dense two-dimensional prediction.
  • This structure lifts the restriction on input image size, so images of any size can be used.
  • Compared with patch-classification methods (which contain fully connected layers holding most of the parameters), FCN greatly reduces the number of network parameters, lowers the risk of overfitting, and clearly improves processing speed, so almost all recent semantic segmentation networks have adopted this structure.
  • There are two basic approaches to segmenting 3D CT data with convolutional neural networks.
  • The first is to process the three-dimensional data directly with a three-dimensional fully convolutional network model.
  • This approach makes full use of the 3D information in the data, but 3D CT volumes are usually large, and existing GPU memory is insufficient to build a network and complete training directly on volume data at its original size.
  • One solution is to downsample the original data first, but this inevitably brings another problem: the lower resolution of the input image reduces segmentation accuracy.
  • The second approach is to treat the three-dimensional volume as a stack of two-dimensional images and train a two-dimensional fully convolutional network to segment each two-dimensional image separately.
  • The advantage of this approach is that the resolution of the input image is retained; the disadvantage is that the three-dimensional information of the image is lost.
  • In experiments, this two-dimensional approach proved extremely unstable in certain specific regions while performing well elsewhere.
  • The present invention proposes a dissected-aorta CT segmentation algorithm that combines a three-dimensional convolutional neural network with two-dimensional convolutional neural networks.
  • This method uses a three-dimensional convolutional neural network to divide the three-dimensional volume data into two parts, and then uses two two-dimensional convolutional neural networks to segment the two parts separately to obtain the final segmentation result.
  • The present invention proposes a convolutional-neural-network-based method for segmenting the dissected aorta in CT images, including the following steps:
  • Step 1: From the CT image I of the dissected aorta, obtain the corresponding manually labeled image L.
  • Step 2: From the dissected-aorta CT image I and the corresponding manually labeled image L, compute the training set T_3D of the three-dimensional convolutional neural network and the training sets T_2D^1 and T_2D^2 of the two two-dimensional neural networks.
  • Step 3: Use the obtained three-dimensional training set T_3D to train a three-dimensional convolutional neural network N_3D to obtain a three-dimensional model M_3D; at the same time, use the obtained two-dimensional training sets T_2D^1 and T_2D^2 to train the corresponding two-dimensional convolutional neural networks N_2D^1 and N_2D^2, obtaining two-dimensional models M_2D^1 and M_2D^2.
  • Step 4: Pre-process the clinical 3D CT image I_test to be segmented to obtain the pre-processed 3D CT image I_pre.
  • Step 5: Input the pre-processed 3D CT image I_pre into the trained three-dimensional model M_3D to obtain a preliminary block label A_3D.
  • Step 6: Process the preliminary block label A_3D to obtain a fine block label A'_3D.
  • Step 7: According to the fine block label A'_3D, divide the 3D CT image I_test to be segmented into two parts, I_1 and I_2, by axial slice, and input them slice by slice into the corresponding trained two-dimensional models M_2D^1 and M_2D^2 to obtain the corresponding two sets of feature-value images F_1 and F_2.
  • Step 8: Combine the two sets of feature-value images F_1 and F_2 into an overall feature-value image F_3D, and perform threshold segmentation on F_3D to obtain the final segmentation result S_3D.
  • Compared with the prior art, the method of the present invention first uses a three-dimensional convolutional neural network model to divide the three-dimensional CT data into two classes according to the position of each axial slice relative to the aorta; in these two classes of slices the aortic region has different shape characteristics.
  • Two two-dimensional convolutional neural networks are then used to segment the two classes of slices to obtain the aortic segmentation result.
  • The invention can segment the dissected aorta in CT images with high accuracy.
  • The two-dimensional convolutional neural network used in the present invention contains three parts: first two branches, one for extracting a preliminary aortic segmentation result and one for extracting the aortic boundary, and finally a convolutional neural network that fuses the results of the two branches to obtain the final segmentation result. This design greatly improves the segmentation accuracy at vessel boundaries and at the dissection.
  • FIG. 1 shows a three-dimensional volume rendering of dissection CT image data and the corresponding aortic manual marker according to an embodiment of the present invention, where (a) is the CT image and (b) is the aortic manual marker.
  • FIG. 2 is a schematic diagram of the overall process of the present invention.
  • FIG. 3 is a schematic diagram of the criterion for dividing the three-dimensional volume data into two parts according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of the basic structure of the two-dimensional convolutional neural network used in the present invention.
  • FIG. 5 is a flowchart of the post-processing of the three-dimensional model's output according to the present invention.
  • FIG. 6 shows an axial clinical CT image and a locally enlarged image of the aortic region in an embodiment of the present invention, where (a) is the axial CT image and (b) is the locally enlarged image of the aortic region.
  • FIG. 7 shows an axial image of a segmentation result obtained with the method of the present invention and the corresponding locally enlarged image of the aortic region in an embodiment of the present invention, where (a) is the axial CT image and (b) is the locally enlarged image of the aortic region.
  • FIG. 8 is a three-dimensional volume rendering of a segmentation result obtained with the method of the present invention in an embodiment of the present invention.
  • The present invention proposes a convolutional-neural-network-based method for segmenting the dissected aorta in CT images.
  • First, the voxels in the acquired CT image of the dissected aorta are labeled to obtain a manually labeled image.
  • The training set of the 3D convolutional neural network and the training sets of the two 2D neural networks are then computed from the dissected-aorta CT image and the corresponding manually labeled image, and the obtained training sets are used to train the 3D network and the two 2D networks, yielding a trained 3D model and two trained 2D models.
  • The 3D CT image to be segmented is pre-processed to obtain a pre-processed 3D CT image.
  • The pre-processed 3D CT image is input into the trained 3D model to obtain a preliminary block label.
  • The preliminary block label is processed to obtain a fine block label.
  • According to the fine block label, the 3D CT image to be segmented is divided into two parts by axial slice, and the slices are input one by one into the corresponding trained 2D networks to obtain two sets of feature-value images.
  • The two sets of feature-value images are combined, and threshold segmentation yields the final segmentation result of the dissected aorta.
  • Step 1: From the CT image I of the dissected aorta, obtain the corresponding manually labeled image L.
  • The corresponding manually labeled image can be obtained from the dissected-aorta CT image by methods including, but not limited to, purely manual labeling, or manual modification and refinement after a preliminary segmentation by another vessel segmentation method.
  • Step 2: From the dissected-aorta CT image I and the corresponding manually labeled image L, compute the training set T_3D of the three-dimensional convolutional neural network and the training sets T_2D^1 and T_2D^2 of the two two-dimensional neural networks.
  • FIG. 1 shows a three-dimensional volume rendering of dissection CT image data and the corresponding aortic manual marker, where (a) is the CT image and (b) is the aortic manual marker.
  • FIG. 3 is a schematic diagram of the criterion for dividing the three-dimensional volume data into two parts according to an embodiment of the present invention: each axial slice of the dissected-aorta CT image I is labeled according to whether it contains the ascending aorta or the aortic arch, yielding a one-dimensional label array A.
  • Step 3: Use the obtained three-dimensional training set T_3D to train a three-dimensional convolutional neural network N_3D to obtain a three-dimensional model M_3D; at the same time, use the obtained two-dimensional training sets T_2D^1 and T_2D^2 to train the corresponding two-dimensional convolutional neural networks N_2D^1 and N_2D^2, obtaining two-dimensional models M_2D^1 and M_2D^2.
  • The three-dimensional convolutional neural network N_3D is a three-dimensional fully convolutional network consisting of one or more three-dimensional convolutional layers, strided-convolution or pooling layers, activation layers, and batch-normalization layers. Its input is the downsampled three-dimensional volume data I_ds, its target output is the downsampled one-dimensional label array A_ds, and training is supervised by the loss function loss_3D.
  • The two-dimensional convolutional neural networks N_2D^1 and N_2D^2 are two-dimensional fully convolutional networks with the same structure; each should consist of one or more two-dimensional convolutional layers, strided-convolution or pooling layers, transposed-convolution layers, activation layers, and batch-normalization layers. The basic structure is shown in FIG. 4.
  • Each two-dimensional convolutional neural network can be divided into three parts: two branches, N_area and N_edge, extract the preliminary vessel segmentation result and the vessel boundary, respectively.
  • The inputs of the two branches N_area and N_edge are two-dimensional CT slice images; their target outputs are the manually labeled image and the vessel-boundary image, respectively, where the vessel-boundary image is obtained as the difference between the morphologically dilated manually labeled image and the manually labeled image itself. The two branches are supervised during training by the loss functions loss_area and loss_edge.
  • The fusion part N_fusion fuses the results of the first two parts to obtain a more accurate two-dimensional vessel segmentation result.
  • Its input is the output O_area of N_area together with the output O_edge of N_edge; its target output is the manually labeled image, and training is supervised by the loss function loss_fusion. The loss function of the entire network is the weighted sum of the above three loss functions, that is, loss_2D = λ_area · loss_area + λ_edge · loss_edge + λ_fusion · loss_fusion.
  • The above activation layers are non-linear activation layers; usable activation functions include, but are not limited to, the ReLU, sigmoid, LeakyReLU, and PReLU functions.
  • The above loss functions loss_3D, loss_area, loss_edge, and loss_fusion are all loss functions suitable for image segmentation tasks; usable loss functions include, but are not limited to, the L2 loss, the cross-entropy loss, the Dice loss, and the normalized Dice loss.
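  • By way of illustration, a minimal sketch of one loss from this family, a soft Dice loss, follows; the code and all names in it are our own, not part of the patent.

```python
import torch

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for a batch of probability maps.

    pred, target: tensors of shape (N, 1, H, W) with values in [0, 1].
    Returns 1 - mean Dice coefficient, so lower is better.
    """
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = (2 * inter + eps) / (union + eps)
    return 1 - dice.mean()
```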
  • The resulting models M_3D, M_2D^1, and M_2D^2 contain the corresponding network structures and the trained parameters of every layer.
  • Step 4: Pre-process the clinical 3D CT image I_test to be segmented to obtain the pre-processed 3D CT image I_pre.
  • Pre-processing refers to the same three-dimensional interpolation operation as in step 2, applied to the clinical three-dimensional CT image to be segmented.
  • Step 5: Input the pre-processed 3D CT image I_pre into the trained three-dimensional model M_3D to obtain a preliminary block label A_3D.
  • Assuming I_pre has size nx × ny × nz, the output preliminary block label A_3D is a one-dimensional array of length nz.
  • Step 6: Process the preliminary block label A_3D to obtain a fine block label A'_3D.
  • The processing steps include thresholding, one-dimensional morphological dilation, one-dimensional interpolation, and the like.
  • The flowchart of the specific processing steps is shown in FIG. 5.
  • Step 7: According to the fine block label A'_3D, divide the 3D CT image I_test to be segmented into two parts, I_1 and I_2, by axial slice, and input them slice by slice into the corresponding trained two-dimensional models M_2D^1 and M_2D^2 to obtain the corresponding two sets of feature-value images F_1 and F_2.
  • The axial slices corresponding to positions labeled 1 in A'_3D are assigned to I_1, and the axial slices corresponding to positions labeled 0 are assigned to I_2.
  • Step 8: Combine the two sets of feature-value images F_1 and F_2 into an overall feature-value image F_3D, and perform threshold segmentation on F_3D to obtain the final segmentation result S_3D.
  • Threshold segmentation is used to obtain the final segmentation result.
  • The threshold used in the threshold segmentation of the present invention is 0.5; that is, parts of the feature image with a feature value greater than or equal to 0.5 are marked 1 (target), and parts with a value less than 0.5 are marked 0 (background).
  • FIG. 6 shows an axial clinical CT image and a locally enlarged image of the aortic region in an embodiment of the present invention, where (a) is the axial CT image and (b) is the locally enlarged image of the aortic region.
  • FIG. 7 shows an axial image of a segmentation result obtained with the method of the present invention and the corresponding locally enlarged image of the aortic region in an embodiment of the present invention, with the region indicated by R being the segmentation result, where (a) is the axial CT image and (b) is the locally enlarged image of the aortic region.
  • FIG. 8 is a three-dimensional volume rendering of a segmentation result obtained with the method of the present invention in an embodiment of the present invention. The results show that the fully automatic dissected-aorta CT segmentation method proposed by the present invention can automatically segment the aortic region from the CT images of aortic dissection patients, providing a good basis for medical diagnosis, treatment planning, and subsequent research and analysis.
  • Each block in these structural diagrams and/or block diagrams and/or flow diagrams, and combinations of blocks therein, can be implemented by computer program instructions.
  • These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing apparatus create means for implementing the functions specified in the block or blocks of the structural diagrams and/or block diagrams and/or flow diagrams.

Abstract

A convolutional-neural-network-based method for segmenting the dissected aorta in CT images. The method is a dissected-aorta CT segmentation algorithm that combines a three-dimensional convolutional neural network with two-dimensional convolutional neural networks: a three-dimensional convolutional neural network divides the three-dimensional volume data into two parts, and two two-dimensional convolutional neural networks then segment the two parts separately to obtain the final segmentation result. The method can effectively segment the dissected aorta from CT images containing it. It overcomes both the insufficient segmentation accuracy of the traditional purely three-dimensional fully convolutional network, caused by the conflict between input image resolution and GPU memory capacity, and the unstable segmentation of a purely two-dimensional convolutional network, caused by the loss of three-dimensional information, and achieves good segmentation performance.

Description

Convolutional-neural-network-based method for segmenting the dissected aorta in CT images

Technical Field
The invention relates to a method for segmenting the dissected aorta in CT images, and in particular to a convolutional-neural-network-based method for segmenting the dissected aorta in CT images, belonging to the technical fields of computer vision and image segmentation.
Background Art
Aortic dissection (AD) refers to a pathological phenomenon in which damage to the aortic intimal layer allows blood to flow between the aortic intima and the aortic wall, forcing the two to separate. Its cause is often related to high blood pressure, or to reduced vessel-wall strength caused by injury, heart surgery, or certain other conditions. Although the incidence of aortic dissection is low, its mortality rate is extremely high, and the interval between onset and death is extremely short. Without treatment, half of patients with acute type A aortic dissection die within three days, and more than 10% of patients with type B aortic dissection die within 30 days. The diagnosis of aortic dissection is based mainly on computed tomography angiography (CTA). The morphological characteristics of the dissected aorta, such as the size and location of the primary entry, the diameters of the true and false lumens, and the curvature of the aorta, are all important for diagnosis, customized treatment planning, and risk assessment. At present, computing these morphological features is still a very challenging problem, and segmenting the dissected aorta in CT images is the first step toward solving it.
Existing vessel segmentation algorithms can be divided into four main categories: algorithms based on vessel-enhancement filtering, algorithms based on centerline tracking, algorithms based on geometric vessel models, and algorithms based on machine learning.
Algorithms based on vessel-enhancement filtering mainly use filters built on the eigenvalues of the Hessian matrix, such as the Frangi filter, to enhance vessel regions, and then apply basic image segmentation algorithms such as thresholding or region growing to obtain the target vessel. Most of these methods can be fully automatic, but because they lack information about the vascular topology, the segmentation results often contain a large number of misclassifications; moreover, vascular lesions such as soft plaque and calcification seriously affect the results.
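By way of illustration, a minimal sketch of this classical pipeline follows, using the Frangi filter from scikit-image and a naive threshold; the parameter values are illustrative placeholders rather than settings from any cited method:

```python
import numpy as np
from skimage.filters import frangi

def enhance_and_threshold(slice_hu, threshold=0.2):
    """Vessel-enhancement filtering followed by naive thresholding.

    slice_hu: 2D CT slice as a float array.
    Returns a binary vessel mask; as noted above, it is prone to
    misclassification because no topological information is used.
    """
    # Normalize to [0, 1] so the filter response is comparable across slices.
    img = (slice_hu - slice_hu.min()) / (np.ptp(slice_hu) + 1e-8)
    # Contrast-filled vessels are bright tubular structures: black_ridges=False.
    response = frangi(img, sigmas=range(1, 8, 2), black_ridges=False)
    return response > threshold
```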
The main feature of centerline-tracking algorithms is that the vessel centerline is extracted before the vessel is segmented, and the vessel region is then expanded outward from the centerline. Such algorithms express the topological structure of vessels well, but usually require at least centerline points to be marked manually, so they cannot be fully automatic.
Methods based on geometric vessel models use geometric primitives such as three-dimensional cylinders to model the vessel, and then optimize the parameters of the geometric model to obtain an accurate segmentation result. The computations of such algorithms are mostly complex and segmentation is time-consuming; in addition, they are often sensitive to the initial model and usually need a manually marked initial model to obtain good results.
Machine-learning-based methods segment vessels by training statistical learning models such as support vector machines and neural networks. Such methods often have the advantages of fast segmentation and high accuracy; their disadvantage is that training statistical models requires a large amount of training data, and manually labeling the vessel regions in the training set takes considerable manpower.
Algorithms based on the Convolutional Neural Network (CNN) can be classified among the machine-learning-based algorithms above. In recent years, such algorithms have attracted widespread attention in all areas of medical imaging and have achieved remarkable results in image classification, image segmentation, and image registration. Convolutional neural networks developed from ordinary neural networks; the main difference between the two is that convolutional neural networks use convolutional layers as feature extractors, whereas the feature extractors of ordinary neural networks are composed of fully connected layers. In 2014, Long et al. at the University of California, Berkeley proposed the Fully Convolutional Network (FCN), a type of convolutional neural network model widely used in the field of image segmentation. Compared with a traditional CNN, the FCN replaces the fully connected layers with convolutional and deconvolutional layers. This change preserves two-dimensional spatial information and enables dense two-dimensional prediction; it also lifts the restriction on input image size, so images of any size can be used. Moreover, compared with patch-classification methods (which contain fully connected layers holding most of the parameters), the FCN greatly reduces the number of network parameters, lowers the risk of overfitting, and clearly improves processing speed, so almost all recent semantic segmentation networks have adopted this structure.
There are two basic approaches to segmenting 3D CT data with convolutional neural networks. The first is to process the three-dimensional data directly with a three-dimensional fully convolutional network model. This approach makes full use of the 3D information in the data, but 3D CT volumes are usually large, and existing GPU memory is insufficient to build a network and complete training directly on volume data at its original size. One solution is to downsample the original data first, but this inevitably brings another problem: the lower resolution of the input image reduces segmentation accuracy. The second approach is to treat the three-dimensional volume as a stack of two-dimensional images and train a two-dimensional fully convolutional network to segment each two-dimensional image separately. The advantage of this approach is that the resolution of the input image is retained; the disadvantage is that the three-dimensional information of the image is lost. In our experiments, we found that this two-dimensional approach is extremely unstable in certain specific regions while performing well elsewhere.
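The following sketch illustrates the data handling of the two approaches; the array shapes and the zoom factor are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import zoom

volume = np.random.rand(512, 512, 400).astype(np.float32)  # toy CT volume

# Approach 1: downsample the whole volume so a 3D network fits in GPU memory;
# in-plane resolution (and hence boundary accuracy) is sacrificed.
volume_small = zoom(volume, zoom=0.25, order=1)   # -> (128, 128, 100)

# Approach 2: treat the volume as a stack of full-resolution 2D slices for a
# 2D network; resolution is kept but inter-slice (3D) context is lost.
slices = [volume[:, :, z] for z in range(volume.shape[2])]
```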
Summary of the Invention
Technical problem: To overcome the insufficient segmentation accuracy of a traditional purely three-dimensional fully convolutional network, caused by the conflict between input image resolution and GPU memory capacity, and the unstable segmentation of a purely two-dimensional convolutional network, caused by the loss of three-dimensional information, the present invention proposes a dissected-aorta CT segmentation algorithm that combines a three-dimensional convolutional neural network with two-dimensional convolutional neural networks. The method uses a three-dimensional convolutional neural network to divide the three-dimensional volume data into two parts, and then uses two two-dimensional convolutional neural networks to segment the two parts separately to obtain the final segmentation result.
Technical solution: The present invention proposes a convolutional-neural-network-based method for segmenting the dissected aorta in CT images, comprising the following steps:

Step 1. From the CT image I of the dissected aorta, obtain the corresponding manually labeled image L.

Step 2. From the dissected-aorta CT image I and the corresponding manually labeled image L, compute the training set T_3D of the three-dimensional convolutional neural network and the training sets T_2D^1 and T_2D^2 of the two two-dimensional neural networks.

Step 3. Use the obtained three-dimensional training set T_3D to train the three-dimensional convolutional neural network N_3D to obtain a three-dimensional model M_3D; at the same time, use the obtained two-dimensional training sets T_2D^1 and T_2D^2 to train the corresponding two-dimensional convolutional neural networks N_2D^1 and N_2D^2, obtaining two-dimensional models M_2D^1 and M_2D^2.

Step 4. Pre-process the clinical 3D CT image I_test to be segmented to obtain the pre-processed 3D CT image I_pre.

Step 5. Input the pre-processed 3D CT image I_pre into the trained three-dimensional model M_3D to obtain a preliminary block label A_3D.

Step 6. Process the preliminary block label A_3D to obtain a fine block label A'_3D.

Step 7. According to the fine block label A'_3D, divide the 3D CT image I_test to be segmented into two parts, I_1 and I_2, by axial slice, and input them slice by slice into the corresponding trained two-dimensional models M_2D^1 and M_2D^2 to obtain the corresponding two sets of feature-value images F_1 and F_2.

Step 8. Combine the two sets of feature-value images F_1 and F_2 into an overall feature-value image F_3D, and perform threshold segmentation on F_3D to obtain the final segmentation result S_3D.
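Read as a whole, steps 4 to 8 form the following inference pipeline. The sketch below is illustrative only: every function and variable name is our own placeholder for the operations defined in the steps above, not an interface defined by the patent:

```python
import numpy as np

def segment_dissected_aorta(I_test, preprocess, M_3D, refine, M_2D_1, M_2D_2):
    """End-to-end inference sketch for steps 4-8.

    All callables are placeholders for the operations defined above:
    preprocess -- step 4 interpolation, volume -> downsampled volume
    M_3D       -- step 5 model, downsampled volume -> per-slice label scores
    refine     -- step 6 post-processing, scores -> fine 0/1 labels (length nz)
    M_2D_1/2   -- step 7 slice models, 2D slice -> 2D feature-value image
    """
    A_fine = refine(M_3D(preprocess(I_test)))
    F_3D = np.empty(I_test.shape, dtype=np.float32)
    for z in range(I_test.shape[2]):
        model = M_2D_1 if A_fine[z] == 1 else M_2D_2
        F_3D[:, :, z] = model(I_test[:, :, z])
    return (F_3D >= 0.5).astype(np.uint8)   # step 8: threshold at 0.5
```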
Beneficial effects: Compared with the prior art, the method of the present invention first uses a three-dimensional convolutional neural network model to divide the three-dimensional CT data into two classes according to the position of each axial slice relative to the aorta; in these two classes of slices the aortic region has different shape characteristics. Two two-dimensional convolutional neural networks are then used to segment the two classes of slices separately to obtain the aortic segmentation result. The invention can segment the dissected aorta in CT images with high accuracy. In addition, the two-dimensional convolutional neural network used in the present invention contains three parts: first two branches, one for extracting a preliminary aortic segmentation result and one for extracting the aortic boundary, and finally a convolutional neural network that fuses the results of the two branches to obtain the final segmentation result. This design greatly improves the segmentation accuracy at vessel boundaries and at the dissection.
Brief Description of the Drawings
FIG. 1 shows a three-dimensional volume rendering of dissection CT image data and the corresponding aortic manual marker in an embodiment of the present invention, where (a) is the CT image and (b) is the aortic manual marker.
FIG. 2 is a schematic diagram of the overall process of the present invention.
FIG. 3 is a schematic diagram of the criterion for dividing the three-dimensional volume data into two parts in an embodiment of the present invention.
FIG. 4 is a schematic diagram of the basic structure of the two-dimensional convolutional neural network used in the present invention.
FIG. 5 is a flowchart of the post-processing of the three-dimensional model's output according to the present invention.
FIG. 6 shows an axial clinical CT image and a locally enlarged image of the aortic region in an embodiment of the present invention, where (a) is the axial CT image and (b) is the locally enlarged image of the aortic region.
FIG. 7 shows an axial image of a segmentation result obtained with the method of the present invention and the corresponding locally enlarged image of the aortic region in an embodiment of the present invention, where (a) is the axial CT image and (b) is the locally enlarged image of the aortic region.
FIG. 8 is a three-dimensional volume rendering of a segmentation result obtained with the method of the present invention in an embodiment of the present invention.
Detailed Description of the Embodiments
Specific embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Those skilled in the art will understand that, unless otherwise defined, all terms used herein (including technical and scientific terms) have the same meaning as commonly understood by a person of ordinary skill in the art to which the invention belongs. It should also be understood that terms such as those defined in general dictionaries should be understood to have meanings consistent with their meaning in the context of the prior art and, unless defined as herein, are not to be interpreted in an idealized or overly formal sense.
To solve the computational problem addressed by the present invention, a convolutional-neural-network-based method for segmenting the dissected aorta in CT images is proposed. First, the voxels in the acquired dissected-aorta CT image are labeled to obtain a manually labeled image. The training sets of the three-dimensional convolutional neural network and the two two-dimensional neural networks are then computed from the dissected-aorta CT image and the corresponding manually labeled image, and the obtained training sets are used to train the three-dimensional network and the two-dimensional networks, yielding a trained three-dimensional model and two trained two-dimensional models. The 3D CT image to be segmented is pre-processed to obtain a pre-processed 3D CT image. The pre-processed 3D CT image is input into the trained three-dimensional model to obtain a preliminary block label. The preliminary block label is processed to obtain a fine block label. According to the fine block label, the 3D CT image to be segmented is divided into two parts by slice, which are input slice by slice into the corresponding trained two-dimensional networks to obtain two sets of feature-value images. The two sets of feature-value images are combined, and threshold segmentation yields the final segmentation result of the dissected aorta.
As shown in FIG. 2, the proposed convolutional-neural-network-based method for segmenting the dissected aorta in CT images is described in further detail below in terms of its specific implementation steps:
Step 1. From the CT image I of the dissected aorta, obtain the corresponding manually labeled image L.
Specifically, the methods for obtaining the corresponding manually labeled image from the dissected-aorta CT image include, but are not limited to, purely manual labeling, or manual modification and refinement after a preliminary segmentation by another vessel segmentation method.
Step 2. From the dissected-aorta CT image I and the corresponding manually labeled image L, compute the training set T_3D of the three-dimensional convolutional neural network and the training sets T_2D^1 and T_2D^2 of the two two-dimensional neural networks.
As shown in FIG. 1, a three-dimensional volume rendering of dissection CT image data and the corresponding aortic manual marker is presented, where (a) is the CT image and (b) is the aortic manual marker.
Specifically, the dissected-aorta CT image I is interpolated in three dimensions at a specified resolution dx × dy × dz to obtain downsampled three-dimensional volume data I_ds, which is added to the training set T_3D of the three-dimensional convolutional neural network as an input object. FIG. 3 is a schematic diagram of the criterion for dividing the three-dimensional volume data into two parts in an embodiment of the present invention: each axial slice of the dissected-aorta CT image I is labeled according to whether it contains the ascending aorta or the aortic arch, yielding a one-dimensional label array A. Slices containing the ascending aorta or the aortic arch, together with the corresponding slices of the manually labeled image, are added to the training set T_2D^1 of the first two-dimensional network as input objects and target outputs, respectively; slices containing only the descending aorta, together with the corresponding slices of the manually labeled image, are added to the training set T_2D^2 of the second two-dimensional network as input objects and target outputs, respectively. The one-dimensional label array A is interpolated at resolution dz to obtain a downsampled one-dimensional label array A_ds, which is added to the training set T_3D as the target output corresponding to the input I_ds.
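By way of illustration, this training-set construction might be sketched as follows; the per-slice containment test is assumed to be available as an annotation array, and all names and the zoom handling are our own:

```python
import numpy as np
from scipy.ndimage import zoom

def build_training_sets(I, L, contains_asc_or_arch, factors):
    """Split one labeled volume into the 3D and two 2D training sets.

    I, L: CT volume and manual label volume, shape (nx, ny, nz).
    contains_asc_or_arch: length-nz boolean array (per-slice annotation).
    factors: per-axis zoom factors realizing the target dx x dy x dz resolution.
    """
    A = contains_asc_or_arch.astype(np.float32)       # 1D label array
    I_ds = zoom(I, factors, order=1)                  # input object for T_3D
    A_ds = zoom(A, factors[2], order=0)               # target output for T_3D
    T_3D = (I_ds, A_ds)
    T_2D_1 = [(I[:, :, z], L[:, :, z]) for z in range(I.shape[2])
              if contains_asc_or_arch[z]]             # ascending aorta / arch
    T_2D_2 = [(I[:, :, z], L[:, :, z]) for z in range(I.shape[2])
              if not contains_asc_or_arch[z]]         # descending aorta only
    return T_3D, T_2D_1, T_2D_2
```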
Step 3. Use the obtained three-dimensional training set T_3D to train the three-dimensional convolutional neural network N_3D to obtain a three-dimensional model M_3D; at the same time, use the obtained two-dimensional training sets T_2D^1 and T_2D^2 to train the corresponding two-dimensional convolutional neural networks N_2D^1 and N_2D^2, obtaining two-dimensional models M_2D^1 and M_2D^2.
Specifically, the three-dimensional convolutional neural network N_3D is a three-dimensional fully convolutional network; it should consist of one or more three-dimensional convolutional layers, strided-convolution or pooling layers, activation layers, and batch-normalization layers. The input of N_3D is the downsampled three-dimensional volume data I_ds, its target output is the downsampled one-dimensional label array A_ds, and training is supervised by the loss function loss_3D.
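A minimal PyTorch sketch of a network in this family follows; the patent fixes only the layer types, so the depth, channel widths, and the in-plane pooling used to produce the per-slice output are our assumptions:

```python
import torch
import torch.nn as nn

class N3D(nn.Module):
    """3D FCN sketch: downsampled volume (N, 1, D, H, W) -> per-slice score (N, 1, D)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1),   # strided convolution
            nn.BatchNorm3d(32), nn.ReLU(),
            nn.Conv3d(32, 32, 3, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
        )
        self.head = nn.Conv3d(32, 1, 1)

    def forward(self, x):
        y = self.head(self.features(x))            # (N, 1, D/2, H/2, W/2)
        y = nn.functional.interpolate(y, size=x.shape[2:], mode='trilinear',
                                      align_corners=False)
        return torch.sigmoid(y).mean(dim=(3, 4))   # average in-plane -> (N, 1, D)
```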
The two-dimensional convolutional neural networks N_2D^1 and N_2D^2 are two-dimensional fully convolutional networks with the same structure; each should consist of one or more two-dimensional convolutional layers, strided-convolution or pooling layers, transposed-convolution layers, activation layers, and batch-normalization layers. The basic structure is shown in FIG. 4. Each two-dimensional convolutional neural network can be divided into three parts. Two branches, N_area and N_edge, extract the preliminary vessel segmentation result and the vessel boundary, respectively. The inputs of both branches are two-dimensional CT slice images; their target outputs are the manually labeled image and the vessel-boundary image, respectively, where the vessel-boundary image is obtained as the difference between the morphologically dilated manually labeled image and the manually labeled image itself. The two branches are supervised during training by the loss functions loss_area and loss_edge. The fusion part N_fusion fuses the results of the first two parts to obtain a more accurate two-dimensional vessel segmentation result; its input is the output O_area of N_area together with the output O_edge of N_edge, its target output is the manually labeled image, and training is supervised by the loss function loss_fusion. The loss function of the entire network is the weighted sum of the above three loss functions, that is, loss_2D = λ_area · loss_area + λ_edge · loss_edge + λ_fusion · loss_fusion.
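For illustration, the vessel-boundary target described above could be generated as follows; the structuring-element size is an assumption:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def boundary_target(label_2d, width=3):
    """Boundary image = morphologically dilated manual label minus the label."""
    dilated = binary_dilation(label_2d, structure=np.ones((width, width)))
    return dilated.astype(np.uint8) - (label_2d > 0).astype(np.uint8)
```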
The above activation layers are non-linear activation layers; usable activation functions include, but are not limited to, the ReLU, sigmoid, LeakyReLU, and PReLU functions. The above loss functions loss_3D, loss_area, loss_edge, and loss_fusion are all loss functions suitable for image segmentation tasks; usable loss functions include, but are not limited to, the L2 loss, the cross-entropy loss, the Dice loss, and the normalized Dice loss. The resulting models M_3D, M_2D^1, and M_2D^2 contain the corresponding network structures and the trained parameters of every layer.
Step 4. Pre-process the clinical 3D CT image I_test to be segmented to obtain the pre-processed 3D CT image I_pre.
Specifically, pre-processing refers to the same three-dimensional interpolation operation as in step 2: the clinical 3D CT image I_test to be segmented is interpolated in three dimensions at resolution dx × dy × dz to obtain the downsampled three-dimensional volume data I_pre.
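For example, resampling to the uniform resolution dx × dy × dz might look like the following sketch, which assumes the voxel spacing is available from the image metadata; the target values are placeholders:

```python
from scipy.ndimage import zoom

def resample(volume, spacing, target=(2.0, 2.0, 2.0)):
    """Interpolate a CT volume from its native voxel spacing (mm per voxel)
    to the uniform lower resolution dx x dy x dz used by the 3D network."""
    factors = tuple(s / t for s, t in zip(spacing, target))
    return zoom(volume, factors, order=1)   # trilinear-style interpolation
```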
Step 5. Input the pre-processed 3D CT image I_pre into the trained three-dimensional model M_3D to obtain a preliminary block label A_3D.
Specifically, assuming the pre-processed 3D CT image I_pre has size nx × ny × nz, the output preliminary block label A_3D is a one-dimensional array of length nz.
Step 6. Process the preliminary block label A_3D to obtain a fine block label A'_3D.
Specifically, the processing steps include thresholding, one-dimensional morphological dilation, one-dimensional interpolation, and the like; the flowchart of the specific processing steps is shown in FIG. 5.
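One possible realization of the threshold / one-dimensional dilation / one-dimensional interpolation chain of FIG. 5 is sketched below; the threshold, structuring-element size, and interpolation order are illustrative:

```python
import numpy as np
from scipy.ndimage import binary_dilation, zoom

def refine_block_label(A_3D, nz):
    """Preliminary per-slice scores (downsampled length) -> fine 0/1 labels (length nz)."""
    binary = A_3D >= 0.5                                      # threshold
    dilated = binary_dilation(binary, structure=np.ones(3))   # 1D dilation
    fine = zoom(dilated.astype(np.float32), nz / dilated.size, order=1)
    return (fine >= 0.5).astype(np.uint8)                     # back to 0/1 labels
```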
Step 7. According to the fine block label A'_3D, divide the 3D CT image I_test to be segmented into two parts, I_1 and I_2, by axial slice, and input them slice by slice into the corresponding trained two-dimensional models M_2D^1 and M_2D^2 to obtain the corresponding two sets of feature-value images F_1 and F_2.
Specifically, the axial slices corresponding to the positions labeled 1 in the fine block label A'_3D are assigned to I_1, and the axial slices corresponding to the positions labeled 0 are assigned to I_2.
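A sketch of this partitioning follows; keeping the slice indices (our own convention) makes it easy to reassemble the feature-value images afterwards:

```python
import numpy as np

def split_by_block_label(I_test, A_fine):
    """Partition the volume's axial slices into I_1 (label 1) and I_2 (label 0)."""
    idx_1 = np.flatnonzero(A_fine == 1)
    idx_2 = np.flatnonzero(A_fine == 0)
    I_1 = I_test[:, :, idx_1]   # ascending aorta / aortic arch slices
    I_2 = I_test[:, :, idx_2]   # descending-aorta-only slices
    return (I_1, idx_1), (I_2, idx_2)
```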
Step 8. Combine the two sets of feature-value images F_1 and F_2 into an overall feature-value image F_3D, and perform threshold segmentation on F_3D to obtain the final segmentation result S_3D.
Specifically, the two sets of feature-value images F_1 and F_2 are stacked in the z direction, with F_1 above and F_2 below, to obtain the overall feature-value image F_3D.
Finally, threshold segmentation is used to obtain the final segmentation result. The threshold used in the threshold segmentation of the present invention is 0.5; that is, parts of the feature image with a feature value greater than or equal to 0.5 are marked 1 (target), and parts with a value less than 0.5 are marked 0 (background).
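Continuing the index-keeping convention of the partitioning sketch above, the recombination and thresholding of step 8 might be realized as:

```python
import numpy as np

def combine_and_threshold(F_1, idx_1, F_2, idx_2, shape, thr=0.5):
    """Stack the two feature-value image sets along z and binarize at 0.5."""
    F_3D = np.empty(shape, dtype=np.float32)
    F_3D[:, :, idx_1] = F_1          # slices processed by M_2D^1 (upper part)
    F_3D[:, :, idx_2] = F_2          # slices processed by M_2D^2 (lower part)
    S_3D = (F_3D >= thr).astype(np.uint8)   # 1 = aorta (target), 0 = background
    return S_3D
```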
As shown in FIG. 6, an axial clinical CT image and a locally enlarged image of the aortic region are presented for an embodiment of the present invention, where (a) is the axial CT image and (b) is the locally enlarged image of the aortic region. FIG. 7 shows an axial image of a segmentation result obtained with the method of the present invention and the corresponding locally enlarged image of the aortic region, with the region indicated by R being the segmentation result, where (a) is the axial CT image and (b) is the locally enlarged image of the aortic region. FIG. 8 is a three-dimensional volume rendering of a segmentation result obtained with the method of the present invention. The results show that the fully automatic dissected-aorta CT segmentation method proposed by the present invention can automatically segment the aortic region from the CT images of aortic dissection patients, providing a good basis for medical diagnosis, treatment planning, and subsequent research and analysis.
Those skilled in the art will understand that each block in these structural diagrams and/or block diagrams and/or flow diagrams, and combinations of blocks therein, can be implemented by computer program instructions. These computer program instructions can be provided to a processor of a general-purpose computer, a special-purpose computer, or another programmable data processing apparatus to produce a machine, such that the instructions executed by the processor create means for implementing the methods specified in the block or blocks of the structural diagrams and/or block diagrams and/or flow diagrams.
Those skilled in the art will understand that the steps, measures, and schemes in the various operations, methods, and flows discussed in the present invention may be alternated, changed, combined, or deleted. Further, other steps, measures, and schemes in the various operations, methods, and flows discussed in the present invention may also be alternated, changed, rearranged, decomposed, combined, or deleted. Further, steps, measures, and schemes of the prior art corresponding to the various operations, methods, and flows disclosed in the present invention may also be alternated, changed, rearranged, decomposed, combined, or deleted.
The embodiments of the present invention have been described in detail above with reference to the accompanying drawings, but the present invention is not limited to the above embodiments; within the scope of knowledge possessed by a person of ordinary skill in the art, various changes may also be made without departing from the spirit of the present invention.

Claims (10)

  1. A convolutional-neural-network-based method for segmenting the dissected aorta in CT images, characterized by comprising the following steps:
    Step 1. Obtain the CT image I of the dissected aorta and the corresponding manually labeled image L.
    Step 2. From the dissected-aorta CT image I and the corresponding manually labeled image L, compute the training set T_3D of the three-dimensional convolutional neural network and the training sets T_2D^1 and T_2D^2 of the two two-dimensional neural networks.
    Step 3. Use the obtained three-dimensional training set T_3D to train the three-dimensional convolutional neural network N_3D to obtain a three-dimensional model M_3D; at the same time, use the obtained two-dimensional training sets T_2D^1 and T_2D^2 to train the corresponding two-dimensional convolutional neural networks N_2D^1 and N_2D^2, obtaining two-dimensional models M_2D^1 and M_2D^2.
    Step 4. Pre-process the clinical 3D CT image I_test to be segmented to obtain the pre-processed 3D CT image I_pre.
    Step 5. Input the pre-processed 3D CT image I_pre into the trained three-dimensional model M_3D to obtain a preliminary block label A_3D.
    Step 6. Post-process the preliminary block label A_3D to obtain a fine block label A'_3D.
    Step 7. According to the fine block label A'_3D, divide the 3D CT image I_test to be segmented into two parts, I_1 and I_2, by slice, and input them slice by slice into the corresponding trained two-dimensional models M_2D^1 and M_2D^2 to obtain the corresponding two sets of feature-value images F_1 and F_2.
    Step 8. Combine the two sets of feature-value images F_1 and F_2 into an overall feature-value image F_3D, and perform threshold segmentation on F_3D to obtain the final segmentation result S_3D.
  2. The convolutional-neural-network-based method for segmenting the dissected aorta in CT images according to claim 1, characterized in that the dissected-aorta CT image I in step 1 contains the ascending aorta, the aortic arch, and the descending aorta.
  3. The convolutional-neural-network-based method for segmenting the dissected aorta in CT images according to claim 1, characterized in that computing the training set T_3D of the three-dimensional convolutional neural network in step 2 comprises: interpolating the dissected-aorta CT image I and the corresponding manually labeled image L to a uniform lower resolution dx × dy × dz; and computing the training sets T_2D^1 and T_2D^2 of the two two-dimensional neural networks comprises: dividing the axial slices of the dissected-aorta CT image I into two classes according to whether they contain the ascending aorta and the aortic arch, and adding the corresponding CT-image slices and manually labeled slices to the training sets T_2D^1 and T_2D^2 respectively; specifically, slices containing the ascending aorta or the aortic arch and their corresponding manually labeled slices are added to the training set T_2D^1, and slices containing only the descending aorta and their corresponding manually labeled slices are added to the training set T_2D^2.
  4. The convolutional-neural-network-based method for segmenting the dissected aorta in CT images according to claim 1, characterized in that the three-dimensional convolutional neural network N_3D in step 3 is a three-dimensional fully convolutional network whose input is the interpolated three-dimensional data and whose output is a one-dimensional array; the two-dimensional convolutional neural networks N_2D^1 and N_2D^2 are two-dimensional fully convolutional networks with the same structure, whose input is an original-size two-dimensional CT image slice and whose output is a two-dimensional segmentation-result slice of the same size as the input.
  5. The convolutional-neural-network-based method for segmenting the dissected aorta in CT images according to claim 4, characterized in that the two-dimensional fully convolutional network is composed of three fully convolutional networks, N_area, N_edge, and N_fusion, wherein N_area takes the original-size two-dimensional CT image slice as input and produces a preliminary segmentation result, N_edge takes the original-size two-dimensional CT image slice as input and produces a boundary-extraction result, and N_fusion takes the results of the two preceding networks as input and produces a refined segmentation result.
  6. The convolutional-neural-network-based method for segmenting the dissected aorta in CT images according to claim 1, characterized in that the pre-processing in step 4 consists of interpolating the clinical 3D CT image I_test to be segmented to the uniform lower resolution dx × dy × dz to obtain the pre-processed 3D CT image I_pre.
  7. The convolutional-neural-network-based method for segmenting the dissected aorta in CT images according to claim 1, characterized in that the post-processing in step 6 comprises threshold segmentation, one-dimensional morphological dilation, and one-dimensional interpolation steps.
  8. The convolutional-neural-network-based method for segmenting the dissected aorta in CT images according to claim 1, characterized in that dividing the 3D CT image I_test to be segmented into two parts, I_1 and I_2, by slice according to the fine block label A'_3D in step 7 specifically comprises adding the slices labeled 1 in A'_3D to I_1 and the slices labeled 0 to I_2.
  9. The convolutional-neural-network-based method for segmenting the dissected aorta in CT images according to claim 1, characterized in that combining the two sets of feature-value images F_1 and F_2 into the overall feature-value image F_3D in step 8 consists of stacking the two sets of feature-value images F_1 and F_2 in the z direction, with F_1 above and F_2 below.
  10. The convolutional-neural-network-based method for segmenting the dissected aorta in CT images according to claim 1, characterized in that the threshold used in the threshold segmentation in step 8 is 0.5.
PCT/CN2019/088835 2018-06-27 2019-05-28 Convolutional-neural-network-based method for segmenting the dissected aorta in CT images WO2020001217A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810677366.0 2018-06-27
CN201810677366.0A CN109035255B (zh) 2018-06-27 2018-06-27 Convolutional-neural-network-based method for segmenting the dissected aorta in CT images

Publications (1)

Publication Number Publication Date
WO2020001217A1 true WO2020001217A1 (zh) 2020-01-02

Family

ID=64610793

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/088835 WO2020001217A1 (zh) 2018-06-27 2019-05-28 Convolutional-neural-network-based method for segmenting the dissected aorta in CT images

Country Status (2)

Country Link
CN (1) CN109035255B (zh)
WO (1) WO2020001217A1 (zh)


Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109035255B (zh) * 2018-06-27 2021-07-02 东南大学 Convolutional-neural-network-based method for segmenting the dissected aorta in CT images
CN109816661B (zh) * 2019-03-22 2022-07-01 电子科技大学 Deep-learning-based tooth CT image segmentation method
CN110148114A (zh) * 2019-04-02 2019-08-20 成都真实维度科技有限公司 Deep-learning model training method based on 2D tomographic image datasets
CN110135454A (zh) * 2019-04-02 2019-08-16 成都真实维度科技有限公司 Deep-learning model training method based on 3D tomographic image datasets
CN110610458B (zh) * 2019-04-30 2023-10-20 北京联合大学 Ridge-regression-based GAN image enhancement interactive processing method and system
US11475561B2 (en) 2019-06-20 2022-10-18 The Cleveland Clinic Foundation Automated identification of acute aortic syndromes in computed tomography images
CN110349143B (zh) * 2019-07-08 2022-06-14 上海联影医疗科技股份有限公司 Method, apparatus, device and medium for determining a region of interest of tubular tissue
CN110942464A (zh) * 2019-11-08 2020-03-31 浙江工业大学 PET image segmentation method fusing 2D and 3D models
CN111489360A (zh) * 2020-03-18 2020-08-04 上海商汤智能科技有限公司 Image segmentation method and related device
CN115769251A (zh) * 2020-06-29 2023-03-07 苏州润迈德医疗科技有限公司 System for acquiring aortic images based on deep learning
CN114073536A (zh) * 2020-08-12 2022-02-22 通用电气精准医疗有限责任公司 Perfusion imaging system and method
CN112365498B (zh) * 2020-12-10 2024-01-23 南京大学 Automatic detection method for multi-scale, multi-morphology targets in two-dimensional image sequences
CN112446877B (zh) * 2020-12-14 2022-11-11 清华大学 Method for segmenting and labeling multi-branch tubular structures in three-dimensional images
CN113096238B (zh) * 2021-04-02 2022-05-17 杭州柳叶刀机器人有限公司 X-ray image simulation method and apparatus, electronic device and storage medium
CN113160208A (zh) * 2021-05-07 2021-07-23 西安智诊智能科技有限公司 Liver lesion image segmentation method based on a cascaded hybrid network
CN115908920B (zh) * 2022-11-21 2023-10-03 浙江大学 Convolutional-neural-network-based CT image classification method for acute aortic syndrome

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107492097A (zh) * 2017-08-07 2017-12-19 北京深睿博联科技有限责任公司 Method and apparatus for identifying regions of interest in MRI images
CN107563983A (zh) * 2017-09-28 2018-01-09 上海联影医疗科技有限公司 Image processing method and medical imaging device
CN108198184A (zh) * 2018-01-09 2018-06-22 北京理工大学 Method and system for vessel segmentation in angiographic images
CN109035255A (zh) * 2018-06-27 2018-12-18 东南大学 Convolutional-neural-network-based method for segmenting the dissected aorta in CT images

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105976384A (zh) * 2016-05-16 2016-09-28 天津工业大学 Aorta segmentation method for human thoracoabdominal CT images based on the GVF Snake model
CN106023198A (zh) * 2016-05-16 2016-10-12 天津工业大学 Hessian-matrix-based aortic dissection extraction method for human thoracoabdominal CT images
WO2018068153A1 (en) * 2016-10-14 2018-04-19 Di Martino Elena Methods, systems, and computer readable media for evaluating risks associated with vascular pathologies


Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111354005A (zh) * 2020-02-28 2020-06-30 浙江德尚韵兴医疗科技有限公司 Fully automatic three-vessel segmentation method for fetal echocardiographic images based on a convolutional neural network
CN111667488A (zh) * 2020-04-20 2020-09-15 浙江工业大学 Medical image segmentation method based on a multi-angle U-Net
CN111667488B (zh) * 2020-04-20 2023-07-28 浙江工业大学 Medical image segmentation method based on a multi-angle U-Net
CN111915556A (zh) * 2020-06-22 2020-11-10 杭州深睿博联科技有限公司 CT image lesion detection method, system, terminal and storage medium based on a dual-branch network
CN111915556B (zh) * 2020-06-22 2024-05-14 杭州深睿博联科技有限公司 CT image lesion detection method, system, terminal and storage medium based on a dual-branch network
CN112330708A (zh) * 2020-11-24 2021-02-05 沈阳东软智能医疗科技研究院有限公司 Image processing method and apparatus, storage medium and electronic device
CN112330708B (zh) * 2020-11-24 2024-04-23 沈阳东软智能医疗科技研究院有限公司 Image processing method and apparatus, storage medium and electronic device
CN112884775A (zh) * 2021-01-20 2021-06-01 推想医疗科技股份有限公司 Segmentation method, apparatus, device and medium
CN112884775B (zh) * 2021-01-20 2022-02-22 推想医疗科技股份有限公司 Segmentation method, apparatus, device and medium
CN114742917A (zh) * 2022-04-25 2022-07-12 桂林电子科技大学 CT image segmentation method based on a convolutional neural network
CN114742917B (zh) * 2022-04-25 2024-04-26 桂林电子科技大学 CT image segmentation method based on a convolutional neural network
WO2024066711A1 (zh) * 2022-09-26 2024-04-04 中国人民解放军总医院第一医学中心 Focused-learning-based intelligent CT angiography imaging method
CN115631301B (zh) * 2022-10-24 2023-07-28 东华理工大学 Three-dimensional reconstruction method for soil-rock mixture images based on an improved fully convolutional neural network
CN115631301A (zh) * 2022-10-24 2023-01-20 东华理工大学 Three-dimensional reconstruction method for soil-rock mixture images based on an improved fully convolutional neural network
CN116958556B (zh) * 2023-08-01 2024-03-19 东莞理工学院 Dual-channel complementary spine image segmentation method for vertebral body and intervertebral disc segmentation
CN116958556A (zh) * 2023-08-01 2023-10-27 东莞理工学院 Dual-channel complementary spine image segmentation method for vertebral body and intervertebral disc segmentation

Also Published As

Publication number Publication date
CN109035255A (zh) 2018-12-18
CN109035255B (zh) 2021-07-02

Similar Documents

Publication Publication Date Title
WO2020001217A1 (zh) 2020-01-02 Convolutional-neural-network-based method for segmenting the dissected aorta in CT images
CN107563983B (zh) Image processing method and medical imaging device
CN109063710B (zh) 3D CNN nasopharyngeal carcinoma segmentation method based on a multi-scale feature pyramid
CN108198184B (zh) Method and system for vessel segmentation in angiographic images
Tobon-Gomez et al. Benchmark for algorithms segmenting the left atrium from 3D CT and MRI datasets
WO2021244661A1 (zh) Method and system for determining vessel information in an image
CN104992430B (zh) Fully automatic three-dimensional liver segmentation method based on convolutional neural networks
EP3660785A1 (en) Method and system for providing an at least 3-dimensional medical image segmentation of a structure of an internal organ
Enokiya et al. Automatic liver segmentation using U-Net with Wasserstein GANs
CN111091573B (zh) Deep-learning-based method and system for segmenting pulmonary vessels in CT images
CN111612743B (zh) CT-image-based coronary artery centerline extraction method
CN109584244B (zh) Hippocampus segmentation method based on sequence learning
CN111798462A (zh) Automatic delineation method for nasopharyngeal carcinoma radiotherapy target volumes based on CT images
CN109727253A (zh) Auxiliary detection method for automatically segmenting lung nodules based on a deep convolutional neural network
Chen et al. Pathological lung segmentation in chest CT images based on improved random walker
CN110288611A (zh) Coronary vessel segmentation method based on an attention mechanism and a fully convolutional neural network
CN111028248A (zh) CT-image-based vein-artery separation method and apparatus
CN110570394B (zh) Medical image segmentation method, apparatus, device and storage medium
CN112308846B (zh) Vessel segmentation method, apparatus and electronic device
Fan et al. Lung nodule detection based on 3D convolutional neural networks
CN112258514A (zh) Segmentation method for pulmonary vessels in CT images
Ravichandran et al. 3D inception U-Net for aorta segmentation using computed tomography cardiac angiography
Lyu et al. Dissected aorta segmentation using convolutional neural networks
US20220301224A1 (en) Systems and methods for image segmentation
Pang et al. A modified scheme for liver tumor segmentation based on cascaded FCNs

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19826600

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19826600

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01.06.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19826600

Country of ref document: EP

Kind code of ref document: A1