WO2020001217A1 - A convolutional-neural-network-based method for segmenting the aorta with dissection in CT images - Google Patents
A convolutional-neural-network-based method for segmenting the aorta with dissection in CT images
- Publication number
- WO2020001217A1 (PCT/CN2019/088835)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- dimensional
- neural network
- convolutional neural
- aorta
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10072—Tomographic images
- G06T2207/10081—Computed x-ray tomography [CT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30101—Blood vessel; Artery; Vein; Vascular
Definitions
- The invention relates to a method for segmenting the aorta with dissection in a CT image, and in particular to a convolutional-neural-network-based method for segmenting the aorta with dissection in a CT image, and belongs to the field of computer vision and image segmentation technology.
- Aortic dissection refers to a pathological phenomenon in which blood flows between the aortic intima and the aortic wall because damage to the intimal layer forces the two to separate.
- Its cause is often related to hypertension, or to reduced vessel-wall strength resulting from injury, heart surgery, or certain diseases.
- Although the incidence of aortic dissection is low, its mortality rate is extremely high.
- The interval between onset and death is extremely short: without treatment, half of patients with acute type A aortic dissection will die within three days, and more than 10% of patients with type A aortic dissection will die within 30 days.
- the diagnosis of aortic dissection is mainly based on Computed Tomography Angiography (CTA).
- Morphological characteristics of the dissected aorta, such as the size and location of the primary entry tear, the diameters of the true and false lumina, and the curvature of the aorta, have important implications for diagnosis, customized treatment planning, and risk assessment. At present, computing these morphological features automatically remains a very challenging problem, and segmenting the aorta with dissection in CT images is the first step toward solving it.
- the existing blood vessel segmentation algorithms can be divided into four categories, namely algorithms based on blood vessel enhancement filtering, algorithms based on centerline tracking, algorithms based on geometric model of blood vessels, and algorithms based on machine learning.
- Algorithms based on vessel-enhancement filtering mainly use filters built on the eigenvalues of the Hessian matrix, such as the Frangi filter, to enhance the vascular region, and then apply basic segmentation algorithms such as thresholding or region growing to obtain the target vessel.
- Some of these methods can be fully automatic, but because they lack information about the vascular topology, their segmentation results often contain many misclassifications; moreover, vascular lesions such as soft plaque and calcification seriously degrade the segmentation results.
- the main feature of the algorithm based on centerline tracking is to extract the centerline of the blood vessel before segmenting the blood vessel, and then expand the blood vessel area from the centerline.
- This type of algorithm expresses the topological structure of blood vessels well, but usually requires at least some centerline points to be marked manually, so it cannot be fully automatic.
- the method based on the vascular geometric model uses geometric models such as three-dimensional cylinders to model the blood vessels, and then optimizes the parameters of the geometric model to accurately obtain the vascular segmentation results.
- the calculations of these algorithms are mostly complicated, and the segmentation is time-consuming.
- such algorithms are often sensitive to the initial model, and usually need to manually mark the initial model to get better results.
- Machine learning-based methods achieve the purpose of segmenting blood vessels by training statistical learning models such as support vector machines and neural networks. Such methods often have the advantages of fast segmentation speed and high accuracy.
- the disadvantages are that training of statistical models requires a large amount of training data, and manual labeling of blood vessel regions in the training set requires a lot of manpower.
- Algorithms based on convolutional neural networks (CNNs) fall into the machine-learning category above. In recent years, such algorithms have attracted wide attention across medical imaging and have achieved remarkable results in image classification, image segmentation, and image registration. Convolutional neural networks developed from ordinary neural networks; the main difference between the two is that CNNs use convolutional layers as feature extractors, whereas the feature extractors of ordinary neural networks are composed of fully connected layers. In 2014, Long and colleagues at the University of California, Berkeley proposed the Fully Convolutional Network (FCN), a type of convolutional neural network model now widely used in the field of image segmentation.
- FCN replaces the fully connected layer in CNN with convolutional and deconvolutional layers. This change preserves two-dimensional spatial information and enables it to perform two-dimensional dense prediction.
- the proposal of this structure allows the network to lift the restriction on the size of the input picture and can input pictures of any size.
- FCN greatly reduces the number of network parameters, reduces the risk of overfitting, and noticeably improves processing speed; consequently, almost all recent semantic-segmentation networks adopt this structure.
- There are two basic approaches to segmenting 3D CT data with convolutional neural networks.
- the first is to directly use three-dimensional full convolutional neural network models to process three-dimensional data.
- This method can make full use of the 3D information in the data, but the problem is that 3D CT volumes are often large, and existing GPU memory is insufficient to build a network and complete training directly on full-resolution volume data.
- a solution is to downsample the original data first, but this method inevitably brings another problem.
- the lower resolution of the input image causes the accuracy of the segmentation to decrease.
- the second idea is to treat the three-dimensional volume data as a stack of two-dimensional images and train a two-dimensional fully convolutional neural network to segment each two-dimensional image separately.
- the advantage of this idea is that the resolution of the input image is retained, but the disadvantage is that the three-dimensional information of the image is lost.
- However, this two-dimensional convolutional-neural-network-based method can be extremely unstable in certain specific regions, even while performing well elsewhere.
- To address this, the present invention proposes a CT aortic segmentation algorithm that combines a three-dimensional convolutional neural network with two-dimensional convolutional neural networks.
- This method uses a three-dimensional convolutional neural network to divide the three-dimensional volume data into two parts, and then uses two two-dimensional convolutional neural networks to segment the two parts separately, obtaining the final segmentation result.
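The two-stage idea can be sketched as follows. This is a minimal illustration in which the 3D stage is reduced to a given per-slice binary label array and `model_a`/`model_b` stand in for the two trained 2D networks; all names and the toy models are hypothetical, not from the patent:

```python
import numpy as np

def segment_volume(volume, slice_labels, model_a, model_b, threshold=0.5):
    """Route each axial slice to one of two 2D models and stack the results.

    volume       : (nz, ny, nx) CT volume
    slice_labels : (nz,) binary array from the 3D stage
                   (1 = slice contains the ascending aorta / arch)
    model_a/b    : callables mapping a 2D slice to a 2D probability map
    """
    feature = np.empty_like(volume, dtype=float)
    for z in range(volume.shape[0]):
        model = model_a if slice_labels[z] == 1 else model_b
        feature[z] = model(volume[z])
    # Final threshold segmentation of the stacked feature-value image.
    return (feature >= threshold).astype(np.uint8)
```

As a toy usage, `segment_volume(vol, labels, lambda s: s, lambda s: 1.0 - s)` treats the raw intensities (or their complement) as "probabilities" and returns a binary volume of the same shape.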
- The present invention proposes a convolutional-neural-network-based method for segmenting the aorta with dissection in CT images, comprising the following steps:
- Step 1: For each CT image of the aorta with dissection, obtain the corresponding manually labeled image.
- Step 2: From the CT images of the aorta with dissection and the corresponding manually labeled images, compute the training set T 3D of the three-dimensional convolutional neural network and the two training sets of the two two-dimensional neural networks.
- Step 3: Use the obtained three-dimensional training set T 3D to train a three-dimensional convolutional neural network N 3D, obtaining a three-dimensional model M 3D; at the same time, use the obtained two-dimensional training sets to train the two two-dimensional networks, obtaining two two-dimensional models.
- Step 4: Pre-process the clinical three-dimensional CT image to be segmented to obtain a pre-processed three-dimensional CT image.
- Step 5: Input the pre-processed three-dimensional CT image into the trained three-dimensional model M 3D to obtain a preliminary block label A 3D.
- Step 6: Post-process the preliminary block label A 3D to obtain a fine block label.
- Step 7: According to the fine block label, divide the three-dimensional CT image to be segmented into two parts by axial slice, and input the slices of each part, layer by layer, into the corresponding trained two-dimensional model to obtain two sets of feature-value images.
- Step 8: Combine the two sets of feature-value images into an overall feature-value image F 3D, and perform threshold segmentation on F 3D to obtain the final segmentation result S 3D.
- The method of the present invention first uses a three-dimensional convolutional neural network model to classify the axial slices of the three-dimensional CT data into two types according to their position relative to the aorta.
- Two two-dimensional convolutional neural networks are then used to segment the two types of slices, yielding the aortic segmentation result.
- The invention can segment the aorta with dissection in CT images with high accuracy.
- The two-dimensional convolutional neural network used in the present invention contains three parts: first, two branches, one extracting a preliminary aortic segmentation result and one extracting the aortic boundary; and finally a fusion sub-network that combines the results of the two branches to obtain the final segmentation result. This design greatly improves the algorithm's segmentation accuracy at vessel boundaries and dissections.
- FIG. 1 shows a three-dimensional volume rendering of the dissected-aorta CT image data and the corresponding manual aortic label according to an embodiment of the present invention, where (a) is the CT image and (b) the manual aortic label.
- FIG. 2 is a schematic diagram of the overall process of the present invention.
- FIG. 3 is a standard schematic diagram of dividing three-dimensional volume data into two parts according to an embodiment of the present invention.
- FIG. 4 is a schematic diagram of a basic structure of a two-dimensional convolutional neural network used in the present invention.
- FIG. 5 is a flowchart of post-processing of a three-dimensional model according to the present invention.
- FIG. 6 is an axial clinical CT image and a locally enlarged image of the aortic region in the embodiment of the present invention, in which (a) is an axial CT image; and (b) is a locally enlarged image of the aortic region.
- FIG. 7 is an axial image of the segmentation result obtained with the method of the present invention and a locally enlarged image of the corresponding aortic region in an embodiment of the present invention, where (a) is the axial CT image and (b) is a locally enlarged image of the aortic region.
- FIG. 8 is a three-dimensional volume rendering image of a segmentation result obtained by using the method of the present invention in an embodiment of the present invention.
- The present invention proposes a convolutional-neural-network-based method for segmenting the aorta with dissection in a CT image.
- The voxels of the obtained CT images of the dissected aorta are first manually labeled.
- The training set of the 3D convolutional neural network and the two training sets of the 2D neural networks are calculated from the dissected-aorta CT images and the corresponding manually labeled images; the obtained training sets are then used to train the 3D neural network and the two 2D neural networks, yielding one trained 3D model and two trained 2D models.
- the 3D CT image to be segmented is pre-processed to obtain a pre-processed 3D CT image.
- the pre-processed 3D CT images are input into the trained 3D model to obtain preliminary block marks.
- the preliminary block mark is processed to obtain a fine block mark.
- The 3D CT image to be segmented is divided into two parts, slice by slice, according to the fine block label, and the slices are input layer by layer into the corresponding trained 2D neural network to obtain the corresponding two sets of feature-value images.
- The two sets of feature-value images are combined, and threshold segmentation is applied to obtain the final segmentation result of the aorta with dissection.
- Step 1: For each CT image of the dissected aorta, obtain the corresponding manually labeled image.
- Methods for obtaining the manually labeled image from the dissected-aorta CT image include, but are not limited to, fully manual labeling, or manual modification and refinement of a preliminary segmentation produced by another vessel-segmentation method.
- Step 2: From the CT images of the aorta with dissection and the corresponding manually labeled images, the training set T 3D of the three-dimensional convolutional neural network and the two training sets of the two-dimensional neural networks are calculated.
- As shown in FIG. 1, a three-dimensional volume rendering of the dissected-aorta CT image data and the corresponding manual aortic label are drawn, where (a) is the CT image and (b) the manual aortic label.
- FIG. 3 is a schematic diagram of the standard for dividing the three-dimensional volume data into two parts according to an embodiment of the present invention: each axial slice of the dissected-aorta CT image is labeled according to whether it contains the ascending aorta or the aortic arch, yielding a one-dimensional label array.
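A minimal sketch of constructing this one-dimensional label array, assuming a binary mask of the ascending aorta and aortic arch is available for each training volume (that mask is an assumed input, not something the patent specifies how to obtain):

```python
import numpy as np

def slice_label_array(region_mask):
    """Build the 1D axial label array: 1 where the axial slice contains
    any voxel of the marked region (ascending aorta or aortic arch),
    0 otherwise.

    region_mask : (nz, ny, nx) binary mask of the region
    """
    flat = region_mask.reshape(region_mask.shape[0], -1)
    return (flat.sum(axis=1) > 0).astype(np.uint8)
```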
- Step 3: Use the obtained three-dimensional training set T 3D to train a three-dimensional convolutional neural network N 3D, obtaining a three-dimensional model M 3D; at the same time, use the obtained two-dimensional training sets to train the two two-dimensional networks.
- The three-dimensional convolutional neural network N 3D is a three-dimensional fully convolutional neural network, consisting of three-dimensional convolutional layers, strided-convolution or pooling layers, transposed-convolution layers, activation layers, and batch-normalization layers. The input of N 3D is the downsampled three-dimensional volume data, the target output is the correspondingly downsampled one-dimensional label array, and training is supervised by the loss function loss 3D.
- The two two-dimensional convolutional neural networks are two-dimensional fully convolutional neural networks with the same structure; each should consist of one or more two-dimensional convolutional layers, strided-convolution or pooling layers, transposed-convolution layers, activation layers, and batch-normalization layers. The basic structure is shown in FIG. 4.
- Each two-dimensional convolutional neural network can be divided into three parts, of which two branches, N area and N edge, extract the preliminary vessel segmentation result and the vessel boundary, respectively.
- The input of both branches N area and N edge is a two-dimensional CT slice.
- The target output of N area is the manually labeled image, and the target output of N edge is the vessel-boundary image, obtained as the difference between the morphologically dilated manually labeled image and the manually labeled image itself; the two branches are supervised during training by the loss functions loss area and loss edge, respectively.
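The boundary target described above (dilated manual label minus the label itself) can be sketched in NumPy; the 4-neighbour structuring element and single dilation step are assumptions, since the patent does not specify them:

```python
import numpy as np

def dilate_cross(mask):
    """One step of binary dilation with a 4-neighbour (cross) structuring
    element, implemented with array shifts (no external dependencies)."""
    m = mask.astype(bool)
    out = m.copy()
    out[1:, :] |= m[:-1, :]   # neighbour above
    out[:-1, :] |= m[1:, :]   # neighbour below
    out[:, 1:] |= m[:, :-1]   # neighbour left
    out[:, :-1] |= m[:, 1:]   # neighbour right
    return out

def boundary_target(label):
    """Vessel-boundary target image: dilated label minus the label itself."""
    lab = label.astype(bool)
    return (dilate_cross(lab) & ~lab).astype(np.uint8)
```

For a single foreground pixel, this produces exactly its four direct neighbours as the boundary ring.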
- the fusion part N fusion is used to fuse the results of the first two parts to obtain a more accurate two-dimensional blood vessel segmentation result.
- Its inputs are the output O area of N area and the output O edge of N edge.
- The target output is the manually labeled image.
- The fusion part is supervised during training by the loss function loss fusion. The loss function of the entire network is the weighted sum of the above three loss functions.
- the above activation layer is a non-linear activation layer.
- Available activation functions include, but are not limited to, a ReLU function, a sigmoid function, a LeakyReLU function, a PReLU function, and the like.
- The above loss functions loss 3D, loss area, loss edge, and loss fusion are all loss functions suitable for image segmentation tasks.
- the loss functions that can be used include but are not limited to L2 loss function, cross entropy loss function, dice loss function, normalized dice loss function, and so on.
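As a concrete instance of one usable loss, a soft Dice loss and the weighted combination of the three branch losses might be sketched as follows; the equal default weights are assumed hyper-parameters, not values given in the patent:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss on probability maps: 1 - 2|P.T| / (|P| + |T|)."""
    inter = float((pred * target).sum())
    return 1.0 - (2.0 * inter + eps) / (float(pred.sum()) + float(target.sum()) + eps)

def network_loss(loss_area, loss_edge, loss_fusion, weights=(1.0, 1.0, 1.0)):
    """Overall training loss: weighted sum of the three branch losses."""
    return (weights[0] * loss_area
            + weights[1] * loss_edge
            + weights[2] * loss_fusion)
```

A perfect prediction drives the Dice loss to 0, while completely disjoint prediction and target drive it toward 1.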
- The resulting model M 3D and the two two-dimensional models contain the corresponding network structure and the trained parameters of each layer.
- Step 4: Pre-process the clinical three-dimensional CT image to be segmented to obtain a pre-processed three-dimensional CT image.
- Pre-processing refers to applying the same three-dimensional interpolation operation as in Step 2 to the clinical three-dimensional CT image to be segmented.
- Step 5: Input the pre-processed three-dimensional CT image into the trained three-dimensional model M 3D to obtain a preliminary block label A 3D.
- The output preliminary block label A 3D is a one-dimensional array of length nz.
- Step 6: Post-process the preliminary block label A 3D to obtain a fine block label.
- The processing steps include thresholding, one-dimensional morphological dilation, and one-dimensional interpolation.
- The flowchart of the specific processing steps is shown in FIG. 5.
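The post-processing chain just listed (thresholding, 1-D morphological dilation, 1-D interpolation back to the original slice count) might be sketched as follows; the dilation margin and the use of linear interpolation are assumptions, since the patent leaves these details to FIG. 5:

```python
import numpy as np

def refine_block_label(prelim, out_len, threshold=0.5, dilate=1):
    """Post-process the preliminary 1D block label.

    prelim    : per-slice outputs of the 3D model (downsampled length)
    out_len   : number of slices in the original volume
    dilate    : 1-D dilation margin in slices (assumed parameter)
    """
    binary = (np.asarray(prelim, dtype=float) >= threshold).astype(float)
    # 1-D morphological dilation: a slice becomes 1 if any neighbour
    # within `dilate` positions is 1.
    dilated = binary.copy()
    for s in range(1, dilate + 1):
        dilated[:-s] = np.maximum(dilated[:-s], binary[s:])
        dilated[s:] = np.maximum(dilated[s:], binary[:-s])
    # 1-D linear interpolation to the original z-resolution, then re-binarize.
    src = np.linspace(0.0, 1.0, num=len(dilated))
    dst = np.linspace(0.0, 1.0, num=out_len)
    fine = np.interp(dst, src, dilated)
    return (fine >= 0.5).astype(np.uint8)
```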
- Step 7: According to the fine block label, divide the three-dimensional CT image to be segmented into two parts by axial slice, and input the slices of each part, layer by layer, into the corresponding trained two-dimensional model to obtain the corresponding two sets of feature-value images.
- The axial slices at positions labeled 1 in the fine block label are assigned to one part, and the axial slices at positions labeled 0 to the other.
- Step 8: Combine the two sets of feature-value images into an overall feature-value image F 3D, and perform threshold segmentation on F 3D to obtain the final segmentation result S 3D.
- The threshold used in the threshold segmentation of the present invention is 0.5: parts of the feature image with a feature value greater than or equal to 0.5 are marked 1 (target), and parts with a value less than 0.5 are marked 0 (background).
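Step 8 can be sketched as follows; the interleaving-by-label scheme mirrors the slice partition of Step 7, and all function and parameter names are illustrative:

```python
import numpy as np

def merge_and_threshold(feat_a, feat_b, labels, threshold=0.5):
    """Interleave the two per-slice feature-value stacks back into a
    single volume F_3D according to the slice labels, then apply the
    0.5 threshold (>= 0.5 -> 1 target, < 0.5 -> 0 background).

    feat_a : (n1, ny, nx) feature slices for positions labeled 1
    feat_b : (n0, ny, nx) feature slices for positions labeled 0
    labels : (nz,) binary fine block label, n1 + n0 == nz
    """
    nz = len(labels)
    f3d = np.empty((nz,) + feat_a.shape[1:], dtype=float)
    ia = ib = 0
    for z, lab in enumerate(labels):
        if lab == 1:
            f3d[z] = feat_a[ia]; ia += 1
        else:
            f3d[z] = feat_b[ib]; ib += 1
    return (f3d >= threshold).astype(np.uint8)
```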
- As shown in FIG. 6, an axial clinical CT image and a locally enlarged image of the aortic region are given for the embodiment of the present invention, where (a) is the axial CT image and (b) is the locally enlarged image of the aortic region.
- FIG. 7 shows an axial image of the segmentation result obtained with the method of the present invention and a locally enlarged image of the corresponding aortic region in an embodiment of the present invention; the region indicated by R is the segmentation result, where (a) is the axial CT image and (b) is the locally enlarged image of the aortic region.
- FIG. 8 is a three-dimensional volume rendering of the segmentation result obtained with the method of the present invention in an embodiment of the present invention. The results show that the automatic CT aortic segmentation method proposed by the present invention can automatically segment the aortic region from CT images of aortic dissection patients, providing a good basis for medical diagnosis, treatment planning, and subsequent research and analysis.
- Each block in these structural diagrams and/or block diagrams and/or flow diagrams, and combinations of blocks therein, can be implemented by computer program instructions.
- These computer program instructions can be provided to a processor of a general-purpose computer, special-purpose computer, or other programmable data-processing apparatus to produce a machine, such that the instructions executed by the processor create means for implementing the functions specified in one or more blocks of the structural diagrams and/or block diagrams and/or flow diagrams.
Abstract
Description
Claims (10)
- A convolutional-neural-network-based method for segmenting the aorta with dissection in CT images, characterized in that it comprises the following steps:
- The convolutional-neural-network-based method for segmenting the aorta with dissection in CT images according to claim 4, characterized in that the two-dimensional fully convolutional neural network consists of three fully convolutional networks N area, N edge, and N fusion, where N area takes an original-size two-dimensional CT slice as input and outputs a preliminary segmentation result, N edge takes an original-size two-dimensional CT slice as input and outputs a boundary-extraction result, and N fusion takes the results of the first two networks as input and outputs a refined segmentation result.
- The convolutional-neural-network-based method for segmenting the aorta with dissection in CT images according to claim 1, characterized in that the post-processing in step 6 comprises threshold segmentation, one-dimensional morphological dilation, and one-dimensional interpolation.
- The convolutional-neural-network-based method for segmenting the aorta with dissection in CT images according to claim 1, characterized in that the threshold used in the threshold segmentation in step 8 is 0.5.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810677366.0 | 2018-06-27 | ||
CN201810677366.0A CN109035255B (zh) | 2018-06-27 | 2018-06-27 | 一种基于卷积神经网络的ct图像中带夹层主动脉分割方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020001217A1 true WO2020001217A1 (zh) | 2020-01-02 |
Family
ID=64610793
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/088835 WO2020001217A1 (zh) | 2018-06-27 | 2019-05-28 | 一种基于卷积神经网络的ct图像中带夹层主动脉分割方法 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109035255B (zh) |
WO (1) | WO2020001217A1 (zh) |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111354005A (zh) * | 2020-02-28 | 2020-06-30 | 浙江德尚韵兴医疗科技有限公司 | 基于卷积神经网络的全自动胎儿心超影像三血管分割方法 |
CN111667488A (zh) * | 2020-04-20 | 2020-09-15 | 浙江工业大学 | 一种基于多角度U-Net的医学图像分割方法 |
CN111915556A (zh) * | 2020-06-22 | 2020-11-10 | 杭州深睿博联科技有限公司 | 一种基于双分支网络的ct图像病变检测方法、系统、终端及存储介质 |
CN112330708A (zh) * | 2020-11-24 | 2021-02-05 | 沈阳东软智能医疗科技研究院有限公司 | 图像处理方法、装置、存储介质及电子设备 |
CN112884775A (zh) * | 2021-01-20 | 2021-06-01 | 推想医疗科技股份有限公司 | 一种分割方法、装置、设备及介质 |
CN114742917A (zh) * | 2022-04-25 | 2022-07-12 | 桂林电子科技大学 | 一种基于卷积神经网络的ct图像分割方法 |
CN115631301A (zh) * | 2022-10-24 | 2023-01-20 | 东华理工大学 | 基于改进全卷积神经网络的土石混合体图像三维重建方法 |
CN116958556A (zh) * | 2023-08-01 | 2023-10-27 | 东莞理工学院 | 用于椎体和椎间盘分割的双通道互补脊柱图像分割方法 |
WO2024066711A1 (zh) * | 2022-09-26 | 2024-04-04 | 中国人民解放军总医院第一医学中心 | 一种基于聚焦学习的ct血管造影智能成像方法 |
CN111915556B (zh) * | 2020-06-22 | 2024-05-14 | 杭州深睿博联科技有限公司 | 一种基于双分支网络的ct图像病变检测方法、系统、终端及存储介质 |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109035255B (zh) * | 2018-06-27 | 2021-07-02 | 东南大学 | 一种基于卷积神经网络的ct图像中带夹层主动脉分割方法 |
CN109816661B (zh) * | 2019-03-22 | 2022-07-01 | 电子科技大学 | 一种基于深度学习的牙齿ct图像分割方法 |
CN110148114A (zh) * | 2019-04-02 | 2019-08-20 | 成都真实维度科技有限公司 | 一种基于2d断层扫描图数据集的深度学习模型训练方法 |
CN110135454A (zh) * | 2019-04-02 | 2019-08-16 | 成都真实维度科技有限公司 | 一种基于3d断层扫描图数据集的深度学习模型训练方法 |
CN110610458B (zh) * | 2019-04-30 | 2023-10-20 | 北京联合大学 | 一种基于岭回归的gan图像增强交互处理方法及系统 |
US11475561B2 (en) | 2019-06-20 | 2022-10-18 | The Cleveland Clinic Foundation | Automated identification of acute aortic syndromes in computed tomography images |
CN110349143B (zh) * | 2019-07-08 | 2022-06-14 | 上海联影医疗科技股份有限公司 | 一种确定管状组织感兴趣区的方法、装置、设备及介质 |
CN110942464A (zh) * | 2019-11-08 | 2020-03-31 | 浙江工业大学 | 一种融合2维和3维模型的pet图像分割方法 |
CN111489360A (zh) * | 2020-03-18 | 2020-08-04 | 上海商汤智能科技有限公司 | 一种图像分割方法及相关设备 |
CN115769251A (zh) * | 2020-06-29 | 2023-03-07 | 苏州润迈德医疗科技有限公司 | 基于深度学习获取主动脉图像的系统 |
CN114073536A (zh) * | 2020-08-12 | 2022-02-22 | 通用电气精准医疗有限责任公司 | 灌注成像系统及方法 |
CN112365498B (zh) * | 2020-12-10 | 2024-01-23 | 南京大学 | 一种针对二维图像序列中多尺度多形态目标的自动检测方法 |
CN112446877B (zh) * | 2020-12-14 | 2022-11-11 | 清华大学 | 一种三维图像中多分支管状结构分割与标记方法 |
CN113096238B (zh) * | 2021-04-02 | 2022-05-17 | 杭州柳叶刀机器人有限公司 | 一种x射线图模拟方法、装置、电子设备及存储介质 |
CN113160208A (zh) * | 2021-05-07 | 2021-07-23 | 西安智诊智能科技有限公司 | 一种基于级联混合网络的肝脏病变图像分割方法 |
CN115908920B (zh) * | 2022-11-21 | 2023-10-03 | 浙江大学 | 基于卷积神经网络的急性主动脉综合征ct图像分类方法 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107492097A (zh) * | 2017-08-07 | 2017-12-19 | 北京深睿博联科技有限责任公司 | 一种识别mri图像感兴趣区域的方法及装置 |
CN107563983A (zh) * | 2017-09-28 | 2018-01-09 | 上海联影医疗科技有限公司 | 图像处理方法以及医学成像设备 |
CN108198184A (zh) * | 2018-01-09 | 2018-06-22 | 北京理工大学 | 造影图像中血管分割的方法和系统 |
CN109035255A (zh) * | 2018-06-27 | 2018-12-18 | 东南大学 | 一种基于卷积神经网络的ct图像中带夹层主动脉分割方法 |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105976384A (zh) * | 2016-05-16 | 2016-09-28 | 天津工业大学 | 基于GVF Snake模型的人体胸腹腔CT图像主动脉分割方法 |
CN106023198A (zh) * | 2016-05-16 | 2016-10-12 | 天津工业大学 | 基于Hessian矩阵的人体胸腹腔CT图像主动脉夹层提取方法 |
WO2018068153A1 (en) * | 2016-10-14 | 2018-04-19 | Di Martino Elena | Methods, systems, and computer readable media for evaluating risks associated with vascular pathologies |
-
2018
- 2018-06-27 CN CN201810677366.0A patent/CN109035255B/zh active Active
-
2019
- 2019-05-28 WO PCT/CN2019/088835 patent/WO2020001217A1/zh active Application Filing
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107492097A (zh) * | 2017-08-07 | 2017-12-19 | 北京深睿博联科技有限责任公司 | 一种识别mri图像感兴趣区域的方法及装置 |
CN107563983A (zh) * | 2017-09-28 | 2018-01-09 | 上海联影医疗科技有限公司 | 图像处理方法以及医学成像设备 |
CN108198184A (zh) * | 2018-01-09 | 2018-06-22 | 北京理工大学 | 造影图像中血管分割的方法和系统 |
CN109035255A (zh) * | 2018-06-27 | 2018-12-18 | 东南大学 | 一种基于卷积神经网络的ct图像中带夹层主动脉分割方法 |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111354005A (zh) * | 2020-02-28 | 2020-06-30 | 浙江德尚韵兴医疗科技有限公司 | 基于卷积神经网络的全自动胎儿心超影像三血管分割方法 |
CN111667488A (zh) * | 2020-04-20 | 2020-09-15 | 浙江工业大学 | 一种基于多角度U-Net的医学图像分割方法 |
CN111667488B (zh) * | 2020-04-20 | 2023-07-28 | 浙江工业大学 | 一种基于多角度U-Net的医学图像分割方法 |
CN111915556A (zh) * | 2020-06-22 | 2020-11-10 | 杭州深睿博联科技有限公司 | 一种基于双分支网络的ct图像病变检测方法、系统、终端及存储介质 |
CN111915556B (zh) * | 2020-06-22 | 2024-05-14 | 杭州深睿博联科技有限公司 | 一种基于双分支网络的ct图像病变检测方法、系统、终端及存储介质 |
CN112330708A (zh) * | 2020-11-24 | 2021-02-05 | 沈阳东软智能医疗科技研究院有限公司 | 图像处理方法、装置、存储介质及电子设备 |
CN112330708B (zh) * | 2020-11-24 | 2024-04-23 | 沈阳东软智能医疗科技研究院有限公司 | 图像处理方法、装置、存储介质及电子设备 |
CN112884775A (zh) * | 2021-01-20 | 2021-06-01 | 推想医疗科技股份有限公司 | 一种分割方法、装置、设备及介质 |
CN112884775B (zh) * | 2021-01-20 | 2022-02-22 | 推想医疗科技股份有限公司 | 一种分割方法、装置、设备及介质 |
CN114742917A (zh) * | 2022-04-25 | 2022-07-12 | 桂林电子科技大学 | 一种基于卷积神经网络的ct图像分割方法 |
CN114742917B (zh) * | 2022-04-25 | 2024-04-26 | 桂林电子科技大学 | 一种基于卷积神经网络的ct图像分割方法 |
WO2024066711A1 (zh) * | 2022-09-26 | 2024-04-04 | 中国人民解放军总医院第一医学中心 | 一种基于聚焦学习的ct血管造影智能成像方法 |
CN115631301B (zh) * | 2022-10-24 | 2023-07-28 | 东华理工大学 | 基于改进全卷积神经网络的土石混合体图像三维重建方法 |
CN115631301A (zh) * | 2022-10-24 | 2023-01-20 | 东华理工大学 | 基于改进全卷积神经网络的土石混合体图像三维重建方法 |
CN116958556B (zh) * | 2023-08-01 | 2024-03-19 | 东莞理工学院 | 用于椎体和椎间盘分割的双通道互补脊柱图像分割方法 |
CN116958556A (zh) * | 2023-08-01 | 2023-10-27 | 东莞理工学院 | 用于椎体和椎间盘分割的双通道互补脊柱图像分割方法 |
Also Published As
Publication number | Publication date |
---|---|
CN109035255A (zh) | 2018-12-18 |
CN109035255B (zh) | 2021-07-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020001217A1 (zh) | 一种基于卷积神经网络的ct图像中带夹层主动脉分割方法 | |
CN107563983B (zh) | 图像处理方法以及医学成像设备 | |
CN109063710B (zh) | 基于多尺度特征金字塔的3d cnn鼻咽癌分割方法 | |
CN108198184B (zh) | 造影图像中血管分割的方法和系统 | |
Tobon-Gomez et al. | Benchmark for algorithms segmenting the left atrium from 3D CT and MRI datasets | |
WO2021244661A1 (zh) | 确定图像中血管信息的方法和系统 | |
CN104992430B (zh) | 基于卷积神经网络的全自动的三维肝脏分割方法 | |
EP3660785A1 (en) | Method and system for providing an at least 3-dimensional medical image segmentation of a structure of an internal organ | |
Enokiya et al. | Automatic liver segmentation using U-Net with Wasserstein GANs | |
CN111091573B (zh) | 基于深度学习的ct影像肺血管的分割方法及系统 | |
CN111612743B (zh) | 一种基于ct图像的冠状动脉中心线提取方法 | |
CN109584244B (zh) | 一种基于序列学习的海马体分割方法 | |
CN111798462A (zh) | 一种基于ct图像的鼻咽癌放疗靶区自动勾画方法 | |
CN109727253A (zh) | 基于深度卷积神经网络自动分割肺结节的辅助检测方法 | |
Chen et al. | Pathological lung segmentation in chest CT images based on improved random walker | |
CN110288611A (zh) | 基于注意力机制和全卷积神经网络的冠状血管分割方法 | |
CN111028248A (zh) | 一种基于ct图像的静动脉分离方法及装置 | |
CN110570394B (zh) | 医学图像分割方法、装置、设备及存储介质 | |
CN112308846B (zh) | 血管分割方法、装置及电子设备 | |
Fan et al. | Lung nodule detection based on 3D convolutional neural networks | |
CN112258514A (zh) | 一种ct影像肺血管的分割方法 | |
Ravichandran et al. | 3D inception U-Net for aorta segmentation using computed tomography cardiac angiography | |
Lyu et al. | Dissected aorta segmentation using convolutional neural networks | |
US20220301224A1 (en) | Systems and methods for image segmentation | |
Pang et al. | A modified scheme for liver tumor segmentation based on cascaded FCNs |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19826600 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19826600 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 01.06.2021) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19826600 Country of ref document: EP Kind code of ref document: A1 |