CN111627017A - Blood vessel lumen automatic segmentation method based on deep learning - Google Patents

Blood vessel lumen automatic segmentation method based on deep learning

Info

Publication number
CN111627017A
Authority
CN
China
Prior art keywords
image
frame
layer
deep learning
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010482028.9A
Other languages
Chinese (zh)
Other versions
CN111627017B (en)
Inventor
李莹光
凌莉
谭清月
杨钒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kunshan Rongying Medical Technology Co ltd
Original Assignee
Kunshan Rongying Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kunshan Rongying Medical Technology Co ltd filed Critical Kunshan Rongying Medical Technology Co ltd
Priority to CN202010482028.9A priority Critical patent/CN111627017B/en
Publication of CN111627017A publication Critical patent/CN111627017A/en
Application granted granted Critical
Publication of CN111627017B publication Critical patent/CN111627017B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G06T7/10 Segmentation; Edge detection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10132 Ultrasound image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30101 Blood vessel; Artery; Vein; Vascular

Abstract

The invention provides an automatic blood vessel lumen segmentation method based on deep learning that can obtain accurate segmentation results quickly and in real time. It comprises: S1, acquiring an IVUS image; S2, after obtaining the original images and their labeled template images, establishing a training set and a test sample set; S3, coarsening the labeled template images of the training set and combining the current frame of the training set, its previous and next frames, and the coarsened template image into a four-channel image; S4, building a deep learning segmentation model with a residual-connection network structure and inputting the template images labeled in S2 and the four-channel images constructed in S3 into the model to obtain a trained network; S5, combining the previous frame's segmentation result with the current frame of the test sample set and its previous and next frames into a four-channel image, inputting it into the network for segmentation, and finally obtaining the lumen-intima interface and the media-adventitia interface of the current frame.

Description

Blood vessel lumen automatic segmentation method based on deep learning
Technical Field
The invention relates to the technical field of image processing, in particular to an automatic blood vessel lumen segmentation method based on deep learning.
Background
Intravascular ultrasound (IVUS) uses high-frequency ultrasonic waves to produce high-resolution images from inside a blood vessel, allowing comprehensive analysis of the vessel wall and lumen, and is currently the most common intravascular imaging technique in clinical practice. High-precision lumen segmentation of an IVUS image, which determines the lumen-intima interface and the media-adventitia interface, yields detailed lumen and plaque information that helps physicians with clinical diagnosis; in addition, a high-precision three-dimensional vessel model can be built from the segmentation result, improving the accuracy of later virtual fractional flow reserve calculation.
Existing IVUS lumen segmentation methods mainly rely on visual identification and manual delineation by experienced clinicians or researchers. However, IVUS sequences typically contain many frames, so the workload is heavy, and the results are easily influenced by the clinician's subjective judgment and working environment; analysis therefore shows very large inter- and intra-observer errors, and a unified standard is difficult to establish. Meanwhile, image artifacts such as guide-wire artifacts and acoustic shadows, together with ultrasonic speckle noise and branch vessels present during imaging, increase the difficulty of segmentation. For example, patent CN103164859A discloses an IVUS image segmentation algorithm based on random walks: it determines seed points for the intima and adventitia from the average gray-level curve of the IVUS image, obtains two probability maps with the random-walk algorithm, and then obtains the lumen boundary by combining thresholding with gradient images. Because it uses only gray-level and image-gradient information, it is highly susceptible to speckle noise, cannot determine the contour from gray level alone when side branches and plaques are present, and is computationally expensive. Patent CN110946619A discloses an automatic IVUS radiomics analysis system that performs image-quality screening and image segmentation with deep learning, the deep fully convolutional network being ResNet, U-Net, AlexNet, or VGG; the networks selected differ, and moreover that patent discards poor-quality images (frames whose vessel contour cannot be distinguished because of excessive noise or artifacts, severe calcification, or signal attenuation), which is unfavorable for later construction of a high-precision three-dimensional vessel model.
Therefore, the invention aims to develop a segmentation algorithm with high accuracy and a high degree of automation, solving the problem that images cannot be segmented because of speckle noise, image artifacts, partial calcified shadows of the vessel wall, and the like.
Disclosure of Invention
To address these problems, the invention provides an automatic blood vessel lumen segmentation method based on deep learning that overcomes the failure to segment images affected by speckle noise, image artifacts, partial vessel-wall calcification shadows, and the like; it obtains accurate segmentation results quickly and in real time, with high accuracy and a high degree of automation.
The technical scheme is as follows: an automatic blood vessel lumen segmentation method based on deep learning, comprising the following steps:
S1, obtain a complete IVUS image;
S2, manually delineate the lumen-intima interface and the media-adventitia interface: select representative frames from the complete IVUS image and label them to obtain the original images and their labeled template images, and establish a training set and a test sample set;
S3, model training stage: apply affine and non-rigid transformations to the labeled template images of the training set to obtain coarsened template images, then combine the original image of the current frame of the training set, the previous frame, the next frame, and the coarsened template image into a four-channel image;
S4, establish a deep learning segmentation model with a residual-connection network structure, then input the template images labeled in step S2 and the four-channel images constructed in step S3 into the model for training to obtain a trained network;
S5, model segmentation stage: when a new IVUS frame is segmented at pixel level, the previous frame's segmentation result is introduced; this result, the current-frame original image of the test sample set, the previous frame, and the next frame are combined into a four-channel image, which is input into the trained network for segmentation, finally yielding the lumen-intima interface and the media-adventitia interface of the current frame.
Further, in step S3 the four-channel image is constructed as follows: the 512 × 512 original images of frames N-1, N, and N+1 of the training set and the coarsened labeled template image are combined into a 512 × 512 × 4 four-channel image, where frame N is the current frame of the training set;
Further, in step S5 the four-channel image is constructed as follows: the 512 × 512 original images of frames N-1, N, and N+1 of the test sample set and the segmentation result of frame N-1 are combined into a 512 × 512 × 4 four-channel image used as input to the trained network, where frame N is the current frame of the test sample set;
Further, the deep learning segmentation model comprises an input layer, coding layers, decoding layers, and an output layer connected in sequence;
Further, there are 4 coding layers, each with 3 subunits; the input layer, the output layer, and each subunit consist of a convolution layer, a batch-normalization layer, and a ReLU activation layer connected in data-transfer order; fine-grained features of the complete IVUS image are extracted layer by layer through the coding layers, and the 3 subunits are joined by a residual connection before passing information to the next coding layer;
Further, there are 4 decoding layers, each comprising a residual block, a feature-fusion block, and a chained pooling block connected in sequence; the decoding layer at the head end receives only the feature map from the coding layer at the tail end of the encoder, while the remaining decoding layers take as input both the output of the previous decoding layer and the feature map of the coding layer at the same level;
Further, the template images include images containing speckle noise, vessel branches, image artifacts, and partial vessel-wall calcification shadows.
The method's advantage is that the deep learning segmentation model with a residual-connection network can fuse features of matching dimension or size; compared with an ordinary neural network it has stronger abstract feature extraction and can produce a high-resolution prediction image. In addition, when a new IVUS frame is segmented at pixel level, the already-completed segmentation of the previous frame acts as a constraint, which improves the result. Adding labeled template images to the training set effectively solves poor or failed segmentation caused by speckle noise, image artifacts, partial vessel-wall calcification shadows, and similar factors, yielding a more accurate segmentation result.
Drawings
FIG. 1 is a flow chart of the algorithm of the present invention;
FIG. 2 is a block diagram of the structure of the deep learning segmentation model of the present invention;
FIG. 3 is a schematic diagram of an input layer or an output layer according to the present invention;
FIG. 4 is a diagram illustrating the structure of an encoding layer according to the present invention;
FIG. 5 is a diagram illustrating the structure of a decoding layer according to the present invention.
Detailed Description
As shown in fig. 1 to 5, a method for automatically segmenting a blood vessel lumen based on deep learning includes the following steps:
s1, acquiring an IVUS complete image sequence;
s2, manually drawing a lumen-intima interface and a middle-adventitia interface, wherein hundreds of cases may occur during training, and thousands of frames of images are total, so that an original image with a representative frame is selected from an IVUS complete image for annotation to obtain the original image and an annotated template image thereof, and a training set and a test sample set are established; the template image comprises speckle noise, blood vessel branches, image artifacts and an image of partial calcified shadows of a blood vessel wall;
s3, model training stage: carrying out affine transformation and non-rigid transformation on the template images marked in the training set to obtain coarsened template images, and then forming four-channel images by the original image of the current frame of the training set, the image of the previous frame of the original image, the image of the next frame of the original image and the coarsened template images; specifically, the construction of the four-channel image is specifically as follows: carrying out coarsening processing on the original images with the sizes of 512 x 512 in the N-1 frame, the N frame and the N +1 frame of the training set and the annotated template image to obtain images, and combining the four image matrixes into a four-channel image with the size of 512 x 4, wherein the image of the N frame is the original image of the current frame of the training set; the image size can be set according to actual conditions, and in the embodiment, the image size is set to 512 × 512;
s4, establishing a deep learning segmentation model, wherein the deep learning segmentation model adopts a network structure with residual connection, and then inputting the template image and the four-channel image labeled in the step S2 into the deep learning segmentation model to train so as to obtain a trained network;
s5, model segmentation stage: when a new frame of IVUS image is segmented at pixel level, a previous frame of segmented result is introduced, and because a four-channel image is input during training, the four-channel image is constructed in the same way during testing, so that information of upper and lower frame images can be combined during convolution, and the introduced segmented result can play a role of constraint, then the segmented result of the previous frame, a current frame original image of a test sample set, a previous frame image and a next frame image are combined into a four-channel image, namely, the original image with the size of 512 x 512 of an N-1 frame, an Nth frame and an N +1 th frame of the test sample set and the segmented result of an N-1 frame are combined into a four-channel image with the size of 512 x 4, and then the four-channel image is input into a trained network for segmentation, wherein the Nth frame image is the current frame image of the test sample set, finally, the lumen-intima interface and the mid-adventitia interface segmented by the current frame are obtained.
The deep learning segmentation model comprises an input layer, coding layers, decoding layers, and an output layer connected in sequence. There are 4 coding layers, each with 3 subunits; the input layer, the output layer, and each subunit consist of a convolution layer, a batch-normalization layer, and a ReLU activation layer connected in data-transfer order. Fine-grained features of the complete IVUS image are extracted layer by layer through the coding layers, and the 3 subunits of each coding layer are joined by a residual connection before passing information to the next coding layer. There are likewise 4 decoding layers, each comprising a residual block, a feature-fusion block, and a chained pooling block connected in sequence. Decoding layer 1 at the head end receives only the feature map from coding layer 4 at the tail end of the encoder; each remaining decoding layer takes as input both the output of the previous decoding layer and the feature map of the coding layer at the same level (for example, decoding layer 2 receives the feature maps of coding layer 3 and decoding layer 1 and fuses them by summation). Long-range residual connections are thus introduced between coding and decoding layers: the feature-fusion block sums feature maps of different scales and dimensions, using low-level visual detail from the encoder to sharpen the coarse high-level feature maps. In addition, the chained pooling block performs successive pooling operations and accumulates the pooled results, capturing image context and yielding a better segmentation.
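The blocks described above resemble RefineNet-style components and can be sketched in PyTorch as follows. Channel counts, kernel sizes, and the number of chained-pooling steps are illustrative assumptions of this sketch, not values given in the patent.

```python
import torch
import torch.nn as nn

class ConvBNReLU(nn.Module):
    """One subunit: convolution -> batch normalization -> ReLU."""
    def __init__(self, ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(ch, ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(ch),
            nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.block(x)

class EncoderLayer(nn.Module):
    """Three subunits joined by a residual (skip) connection."""
    def __init__(self, ch):
        super().__init__()
        self.subunits = nn.Sequential(*[ConvBNReLU(ch) for _ in range(3)])
    def forward(self, x):
        return x + self.subunits(x)   # residual connection around the 3 subunits

class ChainedPooling(nn.Module):
    """Chained pooling: successive pool+conv steps whose outputs are summed back in."""
    def __init__(self, ch, steps=2):
        super().__init__()
        self.stages = nn.ModuleList(
            nn.Sequential(nn.MaxPool2d(5, stride=1, padding=2),
                          nn.Conv2d(ch, ch, 3, padding=1))
            for _ in range(steps))
    def forward(self, x):
        out, path = x, x
        for stage in self.stages:
            path = stage(path)
            out = out + path          # accumulate pooled context
        return out

class DecoderLayer(nn.Module):
    """Residual block -> feature fusion (sum with same-level encoder map) -> chained pooling."""
    def __init__(self, ch):
        super().__init__()
        self.residual = EncoderLayer(ch)
        self.pool = ChainedPooling(ch)
    def forward(self, dec_feat, enc_feat):
        fused = self.residual(dec_feat) + enc_feat   # long-range residual fusion
        return self.pool(fused)
```

Summation fusion assumes the encoder and decoder maps have already been brought to the same channel count and spatial size, as the long-range residual connections in the text require.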
Applying this neural network to IVUS segmentation improves the result by fusing features from different layers. Moreover, introducing the previous frame's segmentation result into the segmentation of the current frame, together with the current frame's contextual (neighboring-frame) information, strengthens segmentation of the continuously varying IVUS sequence. This achieves highly accurate, highly automated segmentation and solves the failure to segment images affected by speckle noise, image artifacts, partial vessel-wall calcification shadows, and the like.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although the present description refers to embodiments, not every embodiment may contain only a single embodiment, and such description is for clarity only, and those skilled in the art should integrate the description, and the embodiments may be combined as appropriate to form other embodiments understood by those skilled in the art.

Claims (7)

1. An automatic blood vessel lumen segmentation method based on deep learning, characterized in that it comprises the following steps:
S1, obtaining a complete IVUS image;
S2, manually delineating the lumen-intima interface and the media-adventitia interface: selecting representative frames from the complete IVUS image and labeling them to obtain the original images and their labeled template images, and establishing a training set and a test sample set;
S3, model training stage: applying affine and non-rigid transformations to the labeled template images of the training set to obtain coarsened template images, then combining the original image of the current frame of the training set, the previous frame, the next frame, and the coarsened template image into a four-channel image;
S4, establishing a deep learning segmentation model with a residual-connection network structure, then inputting the template images labeled in step S2 and the four-channel images constructed in step S3 into the model for training to obtain a trained network;
S5, model segmentation stage: when a new IVUS frame is segmented at pixel level, introducing the previous frame's segmentation result; combining this result, the current-frame original image of the test sample set, the previous frame, and the next frame into a four-channel image, inputting it into the trained network for segmentation, and finally obtaining the lumen-intima interface and the media-adventitia interface of the current frame.
2. The automatic blood vessel lumen segmentation method based on deep learning of claim 1, characterized in that in step S3 the four-channel image is constructed as follows: the 512 × 512 original images of frames N-1, N, and N+1 of the training set and the coarsened labeled template image are combined into a 512 × 512 × 4 four-channel image, wherein frame N is the current frame of the training set.
3. The automatic blood vessel lumen segmentation method based on deep learning of claim 1, characterized in that in step S5 the four-channel image is constructed as follows: the 512 × 512 original images of frames N-1, N, and N+1 of the test sample set and the segmentation result of frame N-1 are combined into a 512 × 512 × 4 four-channel image used as input to the trained network, wherein frame N is the current frame of the test sample set.
4. The vessel lumen automatic segmentation method based on deep learning of claim 1, characterized in that: the deep learning segmentation model comprises an input layer, an encoding layer, a decoding layer and an output layer which are sequentially connected.
5. The automatic blood vessel lumen segmentation method based on deep learning of claim 4, characterized in that there are 4 coding layers, each with 3 subunits; the input layer, the output layer, and each subunit consist of a convolution layer, a batch-normalization layer, and a ReLU activation layer connected in data-transfer order; fine-grained features of the complete IVUS image are extracted layer by layer through the coding layers, and the 3 subunits are joined by a residual connection before passing information to the next coding layer.
6. The automatic blood vessel lumen segmentation method based on deep learning of claim 5, characterized in that there are 4 decoding layers, each comprising a residual block, a feature-fusion block, and a chained pooling block connected in sequence; the decoding layer at the head end receives only the feature map from the coding layer at the tail end of the encoder, while the remaining decoding layers take as input both the output of the previous decoding layer and the feature map of the coding layer at the same level.
7. The vessel lumen automatic segmentation method based on deep learning of claim 1, characterized in that: the template image contains images of speckle noise, vessel branches, image artifacts, and partial vessel wall calcification shadows.
CN202010482028.9A 2020-05-29 2020-05-29 Automatic segmentation method for vascular lumen based on deep learning Active CN111627017B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010482028.9A CN111627017B (en) 2020-05-29 2020-05-29 Automatic segmentation method for vascular lumen based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010482028.9A CN111627017B (en) 2020-05-29 2020-05-29 Automatic segmentation method for vascular lumen based on deep learning

Publications (2)

Publication Number Publication Date
CN111627017A true CN111627017A (en) 2020-09-04
CN111627017B CN111627017B (en) 2024-02-23

Family

ID=72271385

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010482028.9A Active CN111627017B (en) 2020-05-29 2020-05-29 Automatic segmentation method for vascular lumen based on deep learning

Country Status (1)

Country Link
CN (1) CN111627017B (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112132203A (en) * 2020-09-18 2020-12-25 中山大学 Intravascular ultrasound image-based fractional flow reserve measurement method and system
CN112164074A (en) * 2020-09-22 2021-01-01 江南大学 3D CT bed fast segmentation method based on deep learning
CN112529906A (en) * 2021-02-07 2021-03-19 南京景三医疗科技有限公司 Software-level intravascular oct three-dimensional image lumen segmentation method and device
CN112950555A (en) * 2021-02-05 2021-06-11 广州中医药大学第一附属医院 Deep learning-based type 2 diabetes cardiovascular disease image classification method
CN113177953A (en) * 2021-04-27 2021-07-27 平安科技(深圳)有限公司 Liver region segmentation method, liver region segmentation device, electronic device, and storage medium
WO2022089266A1 (en) * 2020-11-02 2022-05-05 中科麦迪人工智能研究院(苏州)有限公司 Blood vessel lumen extraction method and apparatus, electronic device and storage medium
CN113177953B (en) * 2021-04-27 2024-04-26 平安科技(深圳)有限公司 Liver region segmentation method, liver region segmentation device, electronic equipment and storage medium

Citations (8)

Publication number Priority date Publication date Assignee Title
CN107909590A (en) * 2017-11-15 2018-04-13 北京工业大学 IVUS image media-adventitia edge segmentation method based on an improved Snake algorithm
CN108510493A (en) * 2018-04-09 2018-09-07 深圳大学 Boundary positioning method for a target object in medical images, storage medium and terminal
CN108520223A (en) * 2018-04-02 2018-09-11 广州华多网络科技有限公司 Video image segmentation method, segmentation device, storage medium and terminal device
WO2019135501A1 (en) * 2018-01-03 2019-07-11 주식회사 메디웨일 IVUS image analysis method
CN110111351A (en) * 2019-05-10 2019-08-09 电子科技大学 Pedestrian contour tracking method fusing RGBD multi-modal information
CN110136157A (en) * 2019-04-09 2019-08-16 华中科技大学 Three-dimensional carotid ultrasound image vessel wall segmentation method based on deep learning
CN110969640A (en) * 2018-09-29 2020-04-07 Tcl集团股份有限公司 Video image segmentation method, terminal device and computer-readable storage medium
CN111161216A (en) * 2019-12-09 2020-05-15 杭州脉流科技有限公司 Intravascular ultrasound image processing method, device, equipment and storage medium based on deep learning

Patent Citations (8)

Publication number Priority date Publication date Assignee Title
CN107909590A (en) * 2017-11-15 2018-04-13 北京工业大学 IVUS image media-adventitia edge segmentation method based on an improved Snake algorithm
WO2019135501A1 (en) * 2018-01-03 2019-07-11 주식회사 메디웨일 IVUS image analysis method
CN108520223A (en) * 2018-04-02 2018-09-11 广州华多网络科技有限公司 Video image segmentation method, segmentation device, storage medium and terminal device
CN108510493A (en) * 2018-04-09 2018-09-07 深圳大学 Boundary positioning method for a target object in medical images, storage medium and terminal
CN110969640A (en) * 2018-09-29 2020-04-07 Tcl集团股份有限公司 Video image segmentation method, terminal device and computer-readable storage medium
CN110136157A (en) * 2019-04-09 2019-08-16 华中科技大学 Three-dimensional carotid ultrasound image vessel wall segmentation method based on deep learning
CN110111351A (en) * 2019-05-10 2019-08-09 电子科技大学 Pedestrian contour tracking method fusing RGBD multi-modal information
CN111161216A (en) * 2019-12-09 2020-05-15 杭州脉流科技有限公司 Intravascular ultrasound image processing method, device, equipment and storage medium based on deep learning

Non-Patent Citations (4)

Title
GUOSHENG LIN et al.: "RefineNet: Multi-Path Refinement Networks for High-Resolution Semantic Segmentation" *
袁绍锋: "Intima and media-adventitia border detection in IVUS images with deep fully convolutional networks" *
袁绍锋; 杨丰; 徐琳; 刘树杰; 季飞; 黄靖: "Intima and media-adventitia border detection in IVUS images with deep fully convolutional networks" *
袁绍锋; 杨丰; 徐琳; 吴洋洋; 黄靖; 刘娅琴: "Intima and media-adventitia border detection in IVUS images with conditional generative adversarial networks" *

Cited By (9)

Publication number Priority date Publication date Assignee Title
CN112132203A (en) * 2020-09-18 2020-12-25 中山大学 Intravascular ultrasound image-based fractional flow reserve measurement method and system
CN112132203B (en) * 2020-09-18 2023-09-29 中山大学 Fractional flow reserve measurement method and system based on intravascular ultrasound image
CN112164074A (en) * 2020-09-22 2021-01-01 江南大学 3D CT bed fast segmentation method based on deep learning
WO2022089266A1 (en) * 2020-11-02 2022-05-05 中科麦迪人工智能研究院(苏州)有限公司 Blood vessel lumen extraction method and apparatus, electronic device and storage medium
CN112950555A (en) * 2021-02-05 2021-06-11 广州中医药大学第一附属医院 Deep learning-based type 2 diabetes cardiovascular disease image classification method
CN112529906A (en) * 2021-02-07 2021-03-19 南京景三医疗科技有限公司 Software-level intravascular oct three-dimensional image lumen segmentation method and device
CN113177953A (en) * 2021-04-27 2021-07-27 平安科技(深圳)有限公司 Liver region segmentation method, liver region segmentation device, electronic device, and storage medium
WO2022227193A1 (en) * 2021-04-27 2022-11-03 平安科技(深圳)有限公司 Liver region segmentation method and apparatus, and electronic device and storage medium
CN113177953B (en) * 2021-04-27 2024-04-26 平安科技(深圳)有限公司 Liver region segmentation method, liver region segmentation device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN111627017B (en) 2024-02-23

Similar Documents

Publication Publication Date Title
CN111627017B (en) Automatic segmentation method for vascular lumen based on deep learning
CN110298844B (en) X-ray radiography image blood vessel segmentation and identification method and device
CN111127482B (en) CT image lung and trachea segmentation method and system based on deep learning
CN110991254B (en) Ultrasonic image video classification prediction method and system
CN110827232B (en) Cross-modality MRI (magnetic resonance imaging) synthesis method based on morphological characteristics GAN (gamma GAN)
CN112597982B (en) Image classification method, device, equipment and medium based on artificial intelligence
CN112862830A (en) Multi-modal image segmentation method, system, terminal and readable storage medium
CN114494296A (en) Brain glioma segmentation method and system based on fusion of Unet and Transformer
CN116309571B (en) Three-dimensional cerebrovascular segmentation method and device based on semi-supervised learning
CN114972362A (en) Medical image automatic segmentation method and system based on RMAU-Net network
CN113160380A (en) Three-dimensional magnetic resonance image super-resolution reconstruction method, electronic device and storage medium
CN116152500A (en) Full-automatic tooth CBCT image segmentation method based on deep learning
CN114882048A (en) Image segmentation method and system based on wavelet scattering learning network
CN112785581A (en) Training method and device for extracting and training large blood vessel CTA (computed tomography angiography) imaging based on deep learning
CN114708353B (en) Image reconstruction method and device, electronic equipment and storage medium
CN112529906B (en) Software-level intravascular oct three-dimensional image lumen segmentation method and device
CN115969400A (en) Apparatus for measuring area of eyeball protrusion
CN115841457A (en) Three-dimensional medical image segmentation method fusing multi-view information
CN110992309B (en) Fundus image segmentation method based on deep information transfer network
CN113744215A (en) Method and device for extracting center line of tree-shaped lumen structure in three-dimensional tomography image
CN114119403A (en) Image defogging method and system based on red channel guidance
CN113592766B (en) Coronary angiography image segmentation method based on depth sequence information fusion
CN111986216A (en) RSG liver CT image interactive segmentation algorithm based on neural network improvement
CN111080588A (en) Multi-scale neural network-based rapid fetal MR image brain extraction method
CN115272363B (en) Method, device and storage medium for reconstructing carotid three-dimensional image

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 301, auxiliary building 4, accelerator, No. 135, Zhangji Road, Kunshan Development Zone, Suzhou, Jiangsu 215347

Applicant after: Suzhou Bodong Rongying Medical Technology Co.,Ltd.

Address before: Room 1009, North building, complex building, 1699 Zuchongzhi South Road, Yushan Town, Kunshan City, Suzhou City, Jiangsu Province

Applicant before: Kunshan Rongying Medical Technology Co.,Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant