CN113362332A - Depth network segmentation method for coronary artery lumen contour under OCT image - Google Patents

Depth network segmentation method for coronary artery lumen contour under OCT image

Info

Publication number
CN113362332A
CN113362332A (application CN202110638265.4A)
Authority
CN
China
Prior art keywords
oct image
segmentation
branch
convolution
coronary artery
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110638265.4A
Other languages
Chinese (zh)
Inventor
孙玉宝
陈思华
吴敏
乔馨霆
陈勋豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Information Science and Technology
Original Assignee
Nanjing University of Information Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Information Science and Technology filed Critical Nanjing University of Information Science and Technology
Priority to CN202110638265.4A priority Critical patent/CN113362332A/en
Publication of CN113362332A publication Critical patent/CN113362332A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/181Segmentation; Edge detection involving edge growing; involving edge linking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10101Optical tomography; Optical coherence tomography [OCT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Abstract

The invention provides a deep network segmentation method for the coronary artery lumen contour in OCT images. The method adopts a dual-branch deep convolutional network structure: one branch segments the lumen region mask corresponding to the OCT image, the other branch predicts the lumen inner-wall contour, the two tasks are learned jointly, and the coupled result is the final segmentation contour. By exploiting the correlation between the two tasks through joint learning and fusing the results of the two branches into a final segmentation contour, the method remains robust to shape changes of the lumen inner wall, accurately locates the vessel inner-wall contour, and achieves accurate segmentation of the coronary artery lumen contour in OCT images.

Description

Depth network segmentation method for coronary artery lumen contour under OCT image
Technical Field
The invention belongs to the technical field of data information processing, and particularly relates to an automatic segmentation method for a coronary artery lumen contour under an OCT image.
Background
With the acceleration of population aging in China, the incidence of cardiovascular disease keeps rising. Existing diagnostic means can only observe the outline of a lesion through angiography, which is insufficient for accurately locating a focus. Optical Coherence Tomography (OCT) is a major technical breakthrough following X-ray CT and MRI. The imaging principle of OCT is similar to ultrasound, but it uses reflected near-infrared light as the imaging medium. OCT is widely applied to high-resolution tomography inside the coronary artery lumen to obtain high-resolution images of the vessel inner wall; lesions such as intimal tears and plaque prolapse can be revealed by inspecting the images, and the cause of thrombus formation can be revealed through time-series sampling. This facilitates accurate sampling by doctors and plays an important role in interventional therapy of coronary arteries.
However, manual diagnosis by doctors alone cannot keep pace with the growing number of patients and the demand for rapid diagnosis, and fatigue from the increased workload inevitably leads to misdiagnosis, which is harmful to patients. In modern medicine, a doctor's reasoning about a disease focus is often based on a series of data such as the patient's apparent symptoms and physiological tests. Constructing an automatic segmentation method for the coronary artery inner wall in OCT images can provide quantitative criteria for distinguishing diseases such as coronary lumen stenosis and stent detachment, assist doctors' diagnosis and treatment, greatly improve their efficiency, and reduce the misdiagnosis rate caused by fatigue.
Watershed, level-set and similar methods are common image segmentation algorithms, but applying them directly to the OCT image segmentation problem does not yield good results. Owing to the influence of the imaging probe, vessel bifurcation and vessel deformation, these methods cannot segment a complete vessel inner-wall profile; they suffer from unclosed contours after segmentation and low localization accuracy, and cannot meet clinical high-precision segmentation requirements.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art by providing a deep network segmentation algorithm for the coronary artery lumen contour in OCT images, realizing accurate segmentation of the contour region in an OCT image slice and providing an accurate diagnostic basis for doctors to formulate a treatment scheme.
To achieve this purpose, the invention provides a deep network segmentation method for the coronary artery lumen contour in OCT images. The segmentation method adopts a dual-branch deep convolutional network structure: one branch segments the lumen region mask corresponding to the OCT image, the other branch predicts the lumen inner-wall contour, the two tasks are learned jointly, and the coupled result is the final segmentation contour.
More specifically, the deep network segmentation method of the present invention specifically comprises the following steps:
s101, collecting a coronary artery OCT image slice scanning sequence, and constructing a coronary artery data set under an OCT image; labeling the contour region of each OCT image slice and the contour of the inner wall of the lumen by labeling software, and generating a corresponding binary mask image;
s102, respectively preprocessing the OCT image slices of the coronary artery and the corresponding binarization mask images in the S101, and then respectively dividing the preprocessed OCT image slices and the corresponding binarization mask images into a training set, a verification set and a test set according to the ratio of 6:2: 2;
s103, designing a double-branch deep convolution network segmentation model and designing a loss function for segmentation;
s104, selecting a proper optimization learning method, setting related hyper-parameters, and training the dual-branch deep convolution network segmentation model in the S103 by utilizing a training set and a verification set;
s105, after training is finished, selecting a coronary artery OCT image slice from the test set, inputting a double-branch depth convolution network segmentation model, loading the trained model weight for segmentation, generating a probability map of a lumen/background, segmenting the inner wall of a blood vessel of the coronary artery OCT image slice, and generating a segmented binary mask map.
Preferably, the dual-branch deep convolutional network segmentation model in S103 is designed as follows: for a given input feature map, the feature-map scale is first reduced by the encoder module, then enlarged by the decoder module until it matches the input scale. A loss function is designed to constrain the network. The feature map output by the network is split into two branches, each with different post-processing. The final network output is the coupling of the segmentation results of the two branches.
Preferably, in the segmentation model of the dual-branch deep convolutional network in S103, for a given input feature map, sequentially passing through four encoder blocks and four decoder blocks, then performing channel number dimensionality reduction through 1x1 convolution, and finally inputting to dual-branch joint processing to output a probability map;
each encoder block consists of two convolutional layers with (3x3) kernels and one max-pooling layer, and extracts image semantic features. Each decoder block consists of two convolutional layers with (3x3) kernels and one deconvolution layer with a (2x2) kernel, and enlarges the neuron receptive field to obtain high-order semantic information. Skip connections link encoder and decoder feature maps of the same resolution; the channel dimension is then reduced by a 1x1 convolution, and finally the dual-branch joint processing outputs a probability map. The probability map gives, for each pixel of the image, the probability that it belongs to lumen/non-lumen, with values in the range (0, 1).
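The scale changes described above can be traced with a short sketch. This is a pure-Python illustration, not the patent's implementation; it assumes padding-1 3x3 convolutions (size-preserving), 2x2 stride-2 max-pooling and 2x2 stride-2 deconvolutions, with the 512 x 512 input size taken from the embodiment below:

```python
def encoder_shape(h, w):
    # Two 3x3 convolutions (padding 1 assumed) keep the size; 2x2 max-pooling halves it.
    return h // 2, w // 2

def decoder_shape(h, w):
    # A 2x2 deconvolution with stride 2 doubles the size; the 3x3 convolutions keep it.
    return h * 2, w * 2

def trace_scales(h=512, w=512, depth=4):
    """Trace the feature-map scale through `depth` encoder and decoder blocks."""
    scales = [(h, w)]
    for _ in range(depth):          # encoder path: 512 -> 256 -> 128 -> 64 -> 32
        h, w = encoder_shape(h, w)
        scales.append((h, w))
    for _ in range(depth):          # decoder path: 32 -> 64 -> 128 -> 256 -> 512
        h, w = decoder_shape(h, w)
        scales.append((h, w))
    return scales
```

Running `trace_scales()` goes from (512, 512) down to (32, 32) and back to (512, 512), matching the requirement that the decoder restores the input scale.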
Preferably, the loss function is L = L_B + L_M, where L_M is a Dice coefficient function and L_B is a binary cross-entropy function. The expressions are, respectively:
L_M = 1 - \frac{2|p \cap q|}{|p| + |q|}

L_B = -\frac{1}{output\_size} \sum_{i=1}^{output\_size} \left[ y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i) \right]

wherein output_size is the number of pixels, p and q are the two sample sets, \hat{y}_i is the class predicted by the network, and y_i is the true class.
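A minimal NumPy sketch of this combined loss follows; the small `eps` smoothing term is an implementation assumption added to avoid division by zero and log(0), and is not stated in the patent:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-7):
    """L_M: one minus the Dice coefficient between predicted and true masks."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def bce_loss(pred, target, eps=1e-7):
    """L_B: binary cross-entropy averaged over output_size pixels."""
    pred = np.clip(pred, eps, 1.0 - eps)  # keep log() finite
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def total_loss(mask_pred, mask_true, boundary_pred, boundary_true):
    """L = L_B + L_M: a Dice term on the mask branch plus a cross-entropy
    term on the boundary branch, as in the overall loss of the patent."""
    return dice_loss(mask_pred, mask_true) + bce_loss(boundary_pred, boundary_true)
```

A perfect prediction drives both terms to (nearly) zero, while disagreement between prediction and ground truth increases each term independently.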
Preferably, the labeling software in S101 is ITK-SNAP.
preferably, the preprocessing operation in S102 includes adjusting a window width value, a window bit value, and normalization.
Preferably, the optimization learning method in S104 uses an SGD optimizer; the relevant hyper-parameters include the learning rate, batch_size, momentum and weight decay coefficient.
Compared with the prior art, the invention has the following advantages:
the invention adopts a double-branch depth convolution network, performs combined learning by utilizing the correlation between two tasks, and fuses the results of the two branches to obtain a final segmentation contour, so that the robustness of the shape change of the inner wall of the lumen can be maintained, the contour of the inner wall of the blood vessel can be accurately positioned, and the accurate segmentation of the lumen contour of the coronary artery under an OCT image can be realized.
With the method, OCT images can be segmented automatically and accurately, assisting doctors' diagnosis and treatment and effectively improving diagnostic efficiency.
Drawings
FIG. 1 is a schematic diagram of a dual-branch deep convolutional network structure according to the present invention;
FIG. 2 is a segmentation result using the method of the present invention;
FIG. 3 is a diagram showing the result of displaying the segmentation contour on the original drawing;
FIG. 4 is a schematic diagram of manually labeling data and comparing the results of segmentation using the U-net and V-net algorithms according to the present invention;
in fig. 4, the blue line is the manual marking data, and the red line is the segmentation result curve of each algorithm;
FIG. 5 is a flowchart of a deep network segmentation method according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In this embodiment, the depth network segmentation algorithm for the lumen contour of the coronary artery under the OCT image, provided by the present invention, is used to segment the lumen inner wall in the OCT image, as shown in fig. 5, and includes the following specific steps:
s101, collecting a coronary artery OCT image slice scanning sequence, and constructing a coronary artery data set under an OCT image; labeling the contour region of each OCT image slice and the contour of the inner wall of the lumen by labeling software, and generating a corresponding binary mask image;
the OCT image slice scan sequence of this embodiment is collected from the OCT image slice scan sequence of a clinical case in a certain hospital, and a total of 270 image slices, the data format is JPG format, and the image resolution is 512 × 512. In this embodiment, the lumen contour region of each OCT image slice is labeled by using ITK-SNAP software, and a binary mask map is generated.
S102, to prepare for network training, preprocessing operations are performed on the image slices from S101, i.e. the OCT image slices, the corresponding binary mask maps and the lumen inner-wall contour maps. The preprocessing in this embodiment comprises:
doubling the training set by horizontal flipping, so that network training is more sufficient;
normalizing the OCT image slices and the corresponding binary mask maps to accelerate convergence of the deep fully convolutional network.
The preprocessed OCT image slices and corresponding binary mask maps are then divided in a 6:2:2 ratio: 162 randomly selected slices form the training set, 54 the validation set and 54 the test set.
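The 6:2:2 split on the embodiment's 270 slices can be sketched as follows (the seeded shuffle is an illustrative assumption; the patent only specifies that the selection is random):

```python
import random

def split_dataset(items, ratios=(0.6, 0.2, 0.2), seed=0):
    """Shuffle the slices and split them into train/validation/test sets
    according to the given ratios (6:2:2 in the embodiment)."""
    items = list(items)
    random.Random(seed).shuffle(items)  # fixed seed for reproducibility (assumption)
    n = len(items)
    n_train = round(n * ratios[0])
    n_val = round(n * ratios[1])
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]

# 270 slices as in the embodiment -> 162 / 54 / 54
train_set, val_set, test_set = split_dataset(range(270))
```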
S103, designing a double-branch deep convolution network segmentation model and designing a loss function for segmentation;
as shown in fig. 1, the network is designed as an encoder-decoder structure with dual-branch output: the encoder-decoder network serves as the backbone on which the dual-branch output structure is built. The network outputs two kinds of images: one is the mask segmenting the vessel region in the original image, the other is the boundary of the vessel inner wall. For both kinds of output, OCT images from the same patient should share the same vessel boundary. Based on this observation, a boundary can be obtained by applying the Canny edge detection algorithm to the first output (the mask image), and the required vessel inner-wall contour is obtained by coupling this edge with the second output (the contour).
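A rough sketch of this mask-to-boundary step and the coupling follows. The patent uses Canny edge detection on the mask branch; since a clean binary mask is assumed here, a simple 4-neighbour boundary test stands in for Canny, and the union used in `couple_contours` is only one possible coupling rule (the patent does not spell out the coupling operation):

```python
import numpy as np

def mask_boundary(mask):
    """Boundary pixels of a binary mask: foreground pixels that have at least
    one background 4-neighbour (a stand-in for Canny on a clean binary mask)."""
    padded = np.pad(mask, 1)
    # A pixel is 'interior' when all four of its 4-neighbours are foreground.
    interior = (padded[1:-1, :-2] & padded[1:-1, 2:] &
                padded[:-2, 1:-1] & padded[2:, 1:-1])
    return mask & ~interior

def couple_contours(edge_from_mask, edge_branch):
    """A minimal coupling rule: keep pixels that either branch marks as contour."""
    return edge_from_mask | edge_branch
```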
The overall loss function of the network comprises two terms: a Dice coefficient term for the mask output and a binary cross-entropy term for the boundary output. The overall loss function is calculated as:
L = L_B + L_M
the Dice coefficient is used for evaluating the similarity of two samples, and the value of the Dice coefficient is generally between 0 and 1, so that the aim is to make the value of the Dice coefficient as large as possible, which can indicate that the similarity between a predicted value and a real value is higher. This function can be used in the present model to evaluate the similarity between the predicted boundary curve and its true value. The similarity therebetween can be expressed as:
Figure BDA0003106063470000051
the second kind of output of the network is introduced into a binary cross entry function to constrain the second kind of output
Figure BDA0003106063470000052
Wherein output _ size is the number of pixels, p and q are two sample sets,
Figure BDA0003106063470000053
for classes predicted by the network, yiIs a real category.
S1031, as shown in FIG. 1, following the encoder-decoder network structure, the dual-branch deep convolutional network segmentation model passes a given input feature map sequentially through four encoder blocks and four decoder blocks, then reduces the channel dimension through a 1x1 convolution, and finally outputs a probability map through dual-branch joint processing;
each encoder block consists of two convolutional layers with (3x3) kernels and one max-pooling layer, and extracts image semantic features. Each decoder block consists of two convolutional layers with (3x3) kernels and one deconvolution layer with a (2x2) kernel, and enlarges the neuron receptive field to obtain high-order semantic information. Skip connections link encoder and decoder feature maps of the same resolution; the channel dimension is then reduced by a 1x1 convolution, and finally the dual-branch joint processing outputs a probability map;
the probability map output by the joint processing of the dual-branch deep convolutional network segmentation model gives, for each pixel of the image, the probability of belonging to lumen/non-lumen, with values ranging from 0 to 1.
S104, selecting a proper optimization learning method, setting related hyper-parameters, performing iterative training by using a training set, and performing model performance evaluation by using a verification set to adjust the hyper-parameters; wherein, the suitable optimization learning method is to adopt an SGD optimizer for optimization; the relevant hyper-parameters comprise learning rate, batch _ size, momentum and weight attenuation coefficient;
the hyper-parameters during training of the embodiment all adopt the same settings as follows: batch _ size is set to 1; the initial learning rate was set to 0.01. In this embodiment, a training set is loaded, an SGD optimizer is used for training, the training is continued until the loss converges, and the model performance is evaluated by using a validation set continuously to adjust the hyper-parameters.
S105, after training is finished, for any OCT image slice in the test set, the lumen inner-wall region is segmented with the trained network model, specifically as follows:
after training, a coronary artery OCT image slice is selected from the test set and input to the dual-branch deep network segmentation model; the trained model weights are loaded for segmentation to obtain a probability map, which is binarized (probability values greater than or equal to 0.5 are set to 1, values below 0.5 are set to 0) to generate the final binary segmentation mask map. The segmentation result is shown in fig. 2.
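The binarization step is a simple threshold; a minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def binarize(prob_map, threshold=0.5):
    """Turn the lumen/background probability map into a binary mask:
    values >= 0.5 become 1, values < 0.5 become 0, as described above."""
    return (prob_map >= threshold).astype(np.uint8)
```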
This embodiment also compares experimentally with the U-net and V-net algorithms by training the same experimental data through U-net and V-net models. The results are shown in fig. 4, which compares the segmentation results of the three schemes: the blue curve is the ground truth and the red curve is each algorithm's segmentation result. As can be seen from FIG. 4, on more complex vessels the U-net and V-net algorithms fail to produce a closed segmentation curve, whereas the algorithm of the present invention accurately obtains a closed contour curve.
In addition, the results are also evaluated by Dice. As shown in the table below, the dice index of the U-net network is 0.76, the V-net index is 0.79, and the dice index of the scheme is 0.82, which is obviously higher than those of the other two methods, so that the scheme can obtain higher segmentation accuracy.
Indicators U-net V-net Algorithm of the invention
Dice 0.76 0.79 0.82

Claims (9)

  1. A depth network segmentation method for a coronary artery lumen contour under an OCT image, characterized in that the segmentation method adopts a dual-branch deep convolutional network structure, one branch segments the lumen region mask corresponding to the OCT image, the other branch predicts the lumen inner-wall contour, joint learning is carried out between the two tasks, and the coupled result is the final segmentation contour.
  2. The deep network segmentation method according to claim 1, comprising the steps of:
    s101, collecting a coronary artery OCT image slice scanning sequence, and constructing a coronary artery data set under an OCT image; labeling the contour region of each OCT image slice and the contour of the inner wall of the lumen, and generating a corresponding binary mask image;
    s102, respectively preprocessing the OCT image slices of the coronary artery and the corresponding binarization mask images in the S101, and then respectively dividing the preprocessed OCT image slices and the corresponding binarization mask images into a training set, a verification set and a test set according to the ratio of 6:2: 2;
    s103, training by using a training set and a verification set in the S102 by adopting a double-branch deep convolution network segmentation model;
    s104, after training is finished, selecting a coronary artery OCT image slice from the test set, inputting a double-branch depth convolution network segmentation model, loading the trained model weight for segmentation, generating a probability map of a lumen/background, segmenting the inner wall of a blood vessel of the coronary artery OCT image slice, and generating a segmented binary mask map.
  3. The method of claim 2, wherein the dual-branch deep convolutional network segmentation model in S103 comprises:
    the encoder is used for reducing the scale of a given input characteristic image and extracting different scale characteristics of the OCT image;
    the decoder is used for expanding the scale of the characteristic diagram output by the encoder to be the same as the scale of the input characteristic diagram and outputting multi-scale fusion characteristics;
    and (5) performing joint segmentation on the two branches and outputting a probability map.
  4. The depth network segmentation method according to claim 3, wherein the number of scales in the encoder and the decoder of the dual-branch deep convolutional network segmentation model in S103 is 4; the given input feature map is first reduced step by step through 4 encoder blocks, then enlarged step by step through 4 decoder blocks until it matches the input scale; the channel dimension is then reduced through a 3x3 convolution, the feature map is finally split into two branches, and the segmentation results are coupled through joint processing to generate the final network output.
  5. The deep network segmentation method as claimed in claim 4, wherein each encoder block consists of two convolutional layers with (3x3) kernels and one max-pooling layer for extracting image semantic features; each decoder block consists of two convolutional layers with (3x3) kernels and one deconvolution layer with a (2x2) kernel for enlarging the neuron receptive field to obtain high-order semantic information.
  6. The deep network segmentation method according to claim 4, wherein the joint processing comprises the following steps: one branch performs feature learning with a 1x1 convolution to predict the vessel region and then applies Canny edge detection to obtain the vessel contour; the other branch detects the vessel contour directly with a 1x1 convolution; the vessel inner-wall contours detected by the two paths are coupled, and coupling learning under the unified constraint of the loss function brings them close to the real labeling result.
  7. The method of claim 6, wherein the loss function is L = L_B + L_M, where L_M is a Dice coefficient function and L_B is a binary cross-entropy function;
    the Dice coefficient evaluates the similarity between two sample sets p and q, takes values between 0 and 1, and is expressed as:
    \mathrm{Dice}(p, q) = \frac{2|p \cap q|}{|p| + |q|}
    the binary cross-entropy function constrains the prediction result, with the expression:
    L_B = -\frac{1}{output\_size} \sum_{i=1}^{output\_size} \left[ y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i) \right]
    wherein output_size is the number of pixels, \hat{y}_i is the class predicted by the network, and y_i is the true class.
  8. The method of claim 7, wherein the preprocessing operation in S102 includes adjusting a window width value, a window level value and normalization.
  9. The deep network segmentation method of claim 8, wherein during the S103 training, a stochastic gradient descent optimizer is used for optimization learning, and relevant hyper-parameters including learning rate, batch size, momentum and weight decay coefficient are set.
CN202110638265.4A 2021-06-08 2021-06-08 Depth network segmentation method for coronary artery lumen contour under OCT image Pending CN113362332A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110638265.4A CN113362332A (en) 2021-06-08 2021-06-08 Depth network segmentation method for coronary artery lumen contour under OCT image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110638265.4A CN113362332A (en) 2021-06-08 2021-06-08 Depth network segmentation method for coronary artery lumen contour under OCT image

Publications (1)

Publication Number Publication Date
CN113362332A true CN113362332A (en) 2021-09-07

Family

ID=77533196

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110638265.4A Pending CN113362332A (en) 2021-06-08 2021-06-08 Depth network segmentation method for coronary artery lumen contour under OCT image

Country Status (1)

Country Link
CN (1) CN113362332A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113838027A (en) * 2021-09-23 2021-12-24 杭州柳叶刀机器人有限公司 Method and system for obtaining target image element based on image processing

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120075638A1 (en) * 2010-08-02 2012-03-29 Case Western Reserve University Segmentation and quantification for intravascular optical coherence tomography images
CN109993082A (en) * 2019-03-20 2019-07-09 上海理工大学 The classification of convolutional neural networks road scene and lane segmentation method
CN110490881A (en) * 2019-08-19 2019-11-22 腾讯科技(深圳)有限公司 Medical image dividing method, device, computer equipment and readable storage medium storing program for executing
CN110675406A (en) * 2019-09-16 2020-01-10 南京信息工程大学 CT image kidney segmentation algorithm based on residual double-attention depth network
CN111047613A (en) * 2019-12-30 2020-04-21 北京小白世纪网络科技有限公司 Fundus blood vessel segmentation method based on branch attention and multi-model fusion
CN111489328A (en) * 2020-03-06 2020-08-04 浙江工业大学 Fundus image quality evaluation method based on blood vessel segmentation and background separation
CN112529906A (en) * 2021-02-07 2021-03-19 南京景三医疗科技有限公司 Software-level intravascular oct three-dimensional image lumen segmentation method and device

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120075638A1 (en) * 2010-08-02 2012-03-29 Case Western Reserve University Segmentation and quantification for intravascular optical coherence tomography images
CN109993082A (en) * 2019-03-20 2019-07-09 上海理工大学 The classification of convolutional neural networks road scene and lane segmentation method
CN110490881A (en) * 2019-08-19 2019-11-22 腾讯科技(深圳)有限公司 Medical image dividing method, device, computer equipment and readable storage medium storing program for executing
CN110675406A (en) * 2019-09-16 2020-01-10 南京信息工程大学 CT image kidney segmentation algorithm based on residual double-attention depth network
CN111047613A (en) * 2019-12-30 2020-04-21 北京小白世纪网络科技有限公司 Fundus blood vessel segmentation method based on branch attention and multi-model fusion
CN111489328A (en) * 2020-03-06 2020-08-04 浙江工业大学 Fundus image quality evaluation method based on blood vessel segmentation and background separation
CN112529906A (en) * 2021-02-07 2021-03-19 南京景三医疗科技有限公司 Software-level intravascular oct three-dimensional image lumen segmentation method and device

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
BO WANG 等: "Boundary Aware U-Net for Retinal Layers Segmentation in Optical Coherence Tomography Images", 《IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS》, pages 3034 *
QING LIU 等: "Dual-Branch Network with Dual-Sampling Modulated Dice Loss for Hard Exudate Segmentation from Colour Fundus Images", 《ARXIV》, pages 1 - 13 *
ZHEN HUI: "Research on Medical Image Segmentation Algorithms Based on Deep Neural Networks", China Masters' Theses Full-text Database, Medicine and Health Sciences, pages 080 - 20
LUO WENJIE et al.: "Retinal Vessel Segmentation Method Using a Multi-scale Attention Parsing Network", Laser & Optoelectronics Progress, pages 1 - 17
HU MIN et al.: "Extracting Building Contours Using a Boundary Correction Network", Remote Sensing Information, vol. 35, no. 5, pages 120

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113838027A (en) * 2021-09-23 2021-12-24 杭州柳叶刀机器人有限公司 Method and system for obtaining target image element based on image processing

Similar Documents

Publication Publication Date Title
Su et al. Lung nodule detection based on faster R-CNN framework
Yun et al. Improvement of fully automated airway segmentation on volumetric computed tomographic images using a 2.5 dimensional convolutional neural net
WO2016192612A1 (en) Method for analysing medical treatment data based on deep learning, and intelligent analyser thereof
CN109345538A (en) A kind of Segmentation Method of Retinal Blood Vessels based on convolutional neural networks
Balakrishna et al. Automatic detection of lumen and media in the IVUS images using U-Net with VGG16 Encoder
CN107563983A (en) Image processing method and medical imaging devices
CN113420826B (en) Liver focus image processing system and image processing method
US20220284583A1 (en) Computerised tomography image processing
CN113744183B (en) Pulmonary nodule detection method and system
CN110074809B (en) Hepatic vein pressure gradient classification method of CT image and computer equipment
CN112991346B (en) Training method and training system for learning network for medical image analysis
Wankhade et al. A novel hybrid deep learning method for early detection of lung cancer using neural networks
CN112365973A (en) Pulmonary nodule auxiliary diagnosis system based on countermeasure network and fast R-CNN
CN115512110A (en) Medical image tumor segmentation method related to cross-modal attention mechanism
Kollias et al. Ai-enabled analysis of 3-d ct scans for diagnosis of covid-19 & its severity
Osadebey et al. Three-stage segmentation of lung region from CT images using deep neural networks
Li et al. Automatic coronary artery segmentation and diagnosis of stenosis by deep learning based on computed tomographic coronary angiography
CN111696109A (en) High-precision layer segmentation method for retina OCT three-dimensional image
CN113362332A (en) Depth network segmentation method for coronary artery lumen contour under OCT image
Wu et al. Transformer-based 3D U-Net for pulmonary vessel segmentation and artery-vein separation from CT images
Wang et al. Assessment of stroke risk using MRI-VPD with automatic segmentation of carotid plaques and classification of plaque properties based on deep learning
CN113223704B (en) Auxiliary diagnosis method for computed tomography aortic aneurysm based on deep learning
CN115330816A (en) Multi-temporal hepatic tumor segmentation method based on multi-head cross attention transfer network
CN115222674A (en) Detection device for intracranial aneurysm rupture risk based on multi-dimensional feature fusion
CN114429468A (en) Bone age measuring method, bone age measuring system, electronic device and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination