CN113223002A - Blood vessel image segmentation method - Google Patents

Blood vessel image segmentation method Download PDF

Info

Publication number
CN113223002A
CN113223002A
Authority
CN
China
Prior art keywords
module
input
layer
blood vessel
decoding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110493742.2A
Other languages
Chinese (zh)
Inventor
王博
赵威
申建虎
张伟
徐正清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Zhizhen Intelligent Technology Co ltd
Original Assignee
Xi'an Zhizhen Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Zhizhen Intelligent Technology Co ltd filed Critical Xi'an Zhizhen Intelligent Technology Co ltd
Priority to CN202110493742.2A
Publication of CN113223002A
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00 Image coding
    • G06T9/002 Image coding using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30101 Blood vessel; Artery; Vein; Vascular

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a blood vessel image segmentation method. An original blood vessel image is first acquired and preprocessed to obtain training set images, and the preprocessed training set images are then input into a neural network model for training. The neural network model comprises an encoding module, a decoding module and a residual module. The input of each attention block in the decoding module specifically comprises a first input from the encoding module, a second input from the residual module, and the decoding output of the attention block in the previous decoding layer; the attention block fuses the inputs from the different modules and feeds the fused features into a deconvolution layer of the decoding module to obtain a decoded image. Finally, the trained neural network model is used to process blood vessel images and obtain blood vessel image segmentation results. Because each layer of the decoding module is connected to the bottom-layer residual module, position information in the image can be captured at full scale, improving the segmentation of iris blood vessel images.

Description

Blood vessel image segmentation method
Technical Field
The invention belongs to the field of medical image processing, and particularly relates to a blood vessel image segmentation method.
Background
Sturge-Weber syndrome (SWS) is a vascular malformation disease that can lead to glaucoma and, in severe cases, to blindness. Patients with Sturge-Weber syndrome usually have an abnormal distribution of blood vessels in the iris, which increases outflow resistance and can in turn cause glaucoma. Accurately segmenting iris blood vessel images has therefore become an important problem in computer-aided diagnosis.
Existing blood vessel segmentation methods are designed mainly for fundus images and fall into two categories: manual segmentation and automatic segmentation. Manual segmentation relies on the observation and labeling of an ophthalmologist; it is inefficient, varies greatly between annotators, places high demands on the ophthalmologist's expertise, and is difficult to apply widely. Moreover, because the vascular structure of the iris is complex and contains many tiny vessels, manual segmentation consumes a great deal of an ophthalmologist's time and energy and delays the patient's treatment. Automatic segmentation requires no ophthalmologist assistance, yields objective data, eliminates differences in results caused by varying levels of expertise, and performs well on tiny vessels. However, the quality of the automatic segmentation method directly determines whether the final image is clear and intuitive, and the segmentation effect of prior-art methods remains unsatisfactory.
Disclosure of Invention
In order to solve the above problems in the prior art, the present invention provides a blood vessel image segmentation method that can segment iris blood vessel images and improve the segmentation effect. The technical scheme of the present invention is as follows:
a method of vessel image segmentation, the method comprising:
s1, acquiring an original blood vessel image and preprocessing the original blood vessel image to obtain a training set image;
s2, inputting the preprocessed training set image into a neural network model for training;
the neural network model comprises an encoding module, a decoding module and a residual module, wherein the encoding module comprises four encoding layers, each encoding layer comprises a convolution layer and a max pooling layer, and the encoding module is used for performing a down-sampling operation on the input training set image to obtain a feature map;
the decoding module comprises four decoding layers, each decoding layer comprises an attention block and a deconvolution layer, the decoding module is used for performing an up-sampling operation on the feature map to obtain a segmented image, and the encoding module corresponds to the decoding module in structure;
the residual module comprises four up-sampling layers obtained by up-sampling the deepest feature map, produced by the down-sampling of the encoding module, four consecutive times;
wherein the input of each attention block in the decoding module specifically comprises a first input from the encoding module, a second input from the residual module, and a third input, the third input being the decoding output of the attention block in the previous decoding layer; the attention block superimposes the first input, the second input and the third input and activates them with an activation function, resamples the activated features, fuses the resampled features with the first input and the second input, and feeds the fused features into the deconvolution layer of the decoding module;
and S3, processing the blood vessel image by using the trained neural network model to obtain a blood vessel image segmentation result.
Further, the resampled features are fused with the first input and the second input through a first formula, which specifically is:
[first formula: Figure BDA0003053450210000031]
wherein i denotes the i-th down-sampling layer in the encoding module, N denotes the total number of down-sampling layers in the encoding module, [Figure BDA0003053450210000032] denotes the 2nd down-sampling layer in the encoding module, C(·) denotes a convolution operation, D(·) denotes a down-sampling operation, U(·) denotes an up-sampling operation, and [·] denotes feature concatenation.
The invention has the beneficial effects that each layer of the decoder is connected to the up-sampled bottom-layer features. By reusing the high-level semantic feature maps, position information in the image can be captured at full scale, which facilitates accurate segmentation, particularly in the fine-detail regions of iris blood vessels. The method can thus segment iris blood vessel images and improves the iris blood vessel segmentation effect.
Drawings
FIG. 1 is a flow chart of a blood vessel image segmentation method according to the present invention;
FIG. 2 is a diagram of a neural network model architecture of the present invention;
FIG. 3 is a block diagram of the neural network model.
Detailed Description
The technical scheme of the invention is further described below with reference to the drawings and an embodiment.
The embodiment provides a blood vessel image segmentation method, which can be implemented by a terminal. As shown in fig. 1, the method comprises:
Step 1: the terminal acquires original blood vessel images and preprocesses them to obtain training set images. The data set comprises 50 iris blood vessel images from 50 different patients. To facilitate subsequent network training, the preprocessing resizes the images to 512 × 512, the data set is split into training and test sets at a ratio of 4:1, and binary cross entropy is used as the loss function to be optimized.
Step 2: the terminal inputs the preprocessed training set images into the neural network model for training.
The neural network model comprises an encoding module, a decoding module and a residual module. The encoding module comprises four encoding layers, each encoding layer comprises a convolution layer and a max pooling layer, and the encoding module is used for performing a down-sampling operation on the input training set image to obtain a feature map.
The decoding module comprises four decoding layers, each decoding layer comprises an attention block and a deconvolution layer, the decoding module is used for performing an up-sampling operation on the feature map to obtain a segmented image, and the encoding module corresponds to the decoding module in structure.
The residual module comprises four up-sampling layers obtained by up-sampling the deepest feature map, produced by the down-sampling of the encoding module, four consecutive times.
The input of each attention block in the decoding module specifically comprises a first input from the encoding module, a second input from the residual module and a third input, the third input being the decoding output of the attention block in the previous decoding layer. The attention block superimposes the first input, the second input and the third input and activates them with an activation function, resamples the activated features, fuses the resampled features with the first input and the second input, and feeds the fused features into the deconvolution layer of the decoding module.
In one training pass in the embodiment of the present application, the terminal inputs an iris blood vessel image of size 512 × 512 into the neural network model. The first convolution operation uses a 7 × 7 kernel with a stride of 2, adjusting the image size to 256 × 256, and a max pooling operation is then performed; max pooling is used for down-sampling to reduce the size of the feature maps. In the second layer of the encoding module, the resulting 256 × 256 feature map undergoes 3 successive convolution operations, each consisting of two 3 × 3 convolutions; the image size is adjusted to 128 × 128, and the max pooling of the second layer of the encoding module is then performed. In the third layer of the encoding module, the resulting 128 × 128 feature map undergoes 4 convolution operations, each consisting of two 3 × 3 convolutions; the image size is adjusted to 64 × 64, and the max pooling of the third layer of the encoding module is then performed. In the fourth layer of the encoding module, the resulting 64 × 64 feature map undergoes 3 convolution operations, each consisting of two 3 × 3 convolutions; the image size is adjusted to 32 × 32, and the max pooling of the fourth layer of the encoding module is then performed. The output of the fourth layer of the encoding module then undergoes 3 further convolution operations, each consisting of 3 × 3 convolutions, yielding a bottom-layer feature map of size 16 × 16 with a depth of 512. The encoding module thus encodes feature maps of different depths at each layer; with i denoting the encoder layer index, the feature map size of the i-th layer is 512/2^i.
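As an aid to reading the layer sizes above, the following is a hedged PyTorch sketch of the encoder, not the patented implementation: the channel widths other than the 512-deep bottom feature map, the exact placement of the pooling operations, and the number of repeated convolution pairs per stage are assumptions.

```python
import torch
import torch.nn as nn

def double_conv(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 3x3 convolutions with ReLU, as used within each encoder stage."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class Encoder(nn.Module):
    def __init__(self, in_ch: int = 3):
        super().__init__()
        self.stem = nn.Sequential(  # 7x7 convolution, stride 2: 512 -> 256
            nn.Conv2d(in_ch, 64, kernel_size=7, stride=2, padding=3), nn.ReLU(inplace=True))
        self.pool = nn.MaxPool2d(2)
        self.enc2 = double_conv(64, 128)
        self.enc3 = double_conv(128, 256)
        self.enc4 = double_conv(256, 512)
        self.bottom = double_conv(512, 512)

    def forward(self, x: torch.Tensor):
        e1 = self.stem(x)                    # 256 x 256
        e2 = self.enc2(self.pool(e1))        # 128 x 128
        e3 = self.enc3(self.pool(e2))        # 64 x 64
        e4 = self.enc4(self.pool(e3))        # 32 x 32
        bottom = self.bottom(self.pool(e4))  # 16 x 16, depth 512
        return e1, e2, e3, e4, bottom
```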
In the residual module, the bottom-layer 16 × 16 feature map is up-sampled, with convolution, four consecutive times to obtain 32 × 32, 64 × 64, 128 × 128 and 256 × 256 feature maps, which correspond to the fourth, third, second and first layers of the encoding module, respectively.
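A minimal sketch of this residual (bottom-layer up-sampling) path follows; keeping 512 channels at every scale and using bilinear up-sampling followed by a 3 × 3 convolution are assumptions, since the patent text does not specify them.

```python
import torch
import torch.nn as nn

class ResidualUpsample(nn.Module):
    """Up-samples the 16x16 bottom feature map four consecutive times (32, 64, 128, 256)."""
    def __init__(self, channels: int = 512):
        super().__init__()
        self.stages = nn.ModuleList(
            nn.Sequential(
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
            )
            for _ in range(4)
        )

    def forward(self, bottom: torch.Tensor):
        outputs, x = [], bottom
        for stage in self.stages:
            x = stage(x)
            outputs.append(x)   # 32x32, 64x64, 128x128, 256x256 in turn
        return outputs          # fed to the 4th, 3rd, 2nd and 1st decoding layers
```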
The decoding module comprises attention blocks (the dashed box in fig. 1). The input of each layer's attention block comprises inputs from the encoding module and the residual module at the same layer, together with the output of the previous layer of the decoding module. The specific operation within the decoding module is shown in fig. 2: the first input from the encoding module, the second input from the residual module and the third input from the previous decoding layer are each convolved, then superimposed and activated by a ReLU function; a 1 × 1 × 1 convolution is applied, followed by a sigmoid activation and resampling; the resampled features are fused with the first input and the second input, and the fused features are fed into the deconvolution layer of the decoding module.
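A hedged PyTorch sketch of one such attention block is given below. It follows the sequence described above (per-input convolutions, superposition, ReLU, 1 × 1 convolution, sigmoid, resampling, fusion, deconvolution); treating the 1 × 1 × 1 convolution as a 2-D 1 × 1 convolution and the fusion as attention-weighted multiplication followed by channel concatenation are assumptions, as are the channel widths.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionBlock(nn.Module):
    def __init__(self, enc_ch: int, res_ch: int, dec_ch: int, inter_ch: int, out_ch: int):
        super().__init__()
        self.conv_enc = nn.Conv2d(enc_ch, inter_ch, 1)    # first input: encoder feature
        self.conv_res = nn.Conv2d(res_ch, inter_ch, 1)    # second input: residual-module feature
        self.conv_dec = nn.Conv2d(dec_ch, inter_ch, 1)    # third input: previous decoding layer
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)  # 1x1 convolution before the sigmoid
        self.deconv = nn.ConvTranspose2d(enc_ch + res_ch, out_ch, kernel_size=2, stride=2)

    def forward(self, enc: torch.Tensor, res: torch.Tensor, dec: torch.Tensor) -> torch.Tensor:
        size = enc.shape[2:]
        # Convolve each input, superimpose, and activate with ReLU.
        g = F.relu(self.conv_enc(enc) + self.conv_res(res) +
                   F.interpolate(self.conv_dec(dec), size=size, mode="bilinear",
                                 align_corners=False))
        # 1x1 convolution, sigmoid activation, then resample to the skip-feature size.
        alpha = F.interpolate(torch.sigmoid(self.psi(g)), size=size, mode="bilinear",
                              align_corners=False)
        # Fuse the resampled attention map with the first and second inputs.
        fused = torch.cat([enc * alpha, res * alpha], dim=1)
        return self.deconv(fused)   # deconvolution up-samples to the next decoding layer
```

With the encoder and residual sketches above, the fourth decoding layer could, for example, be instantiated as `AttentionBlock(enc_ch=512, res_ch=512, dec_ch=512, inter_ch=256, out_ch=256)`; these numbers are illustrative only.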
The resampled features are fused with the first input and the second input through a first formula, which specifically is:
[first formula: Figure BDA0003053450210000051]
wherein i denotes the i-th down-sampling layer in the encoding module, N denotes the total number of down-sampling layers in the encoding module, [Figure BDA0003053450210000052] denotes the 2nd down-sampling layer in the encoding module, C(·) denotes a convolution operation, D(·) denotes a down-sampling operation, U(·) denotes an up-sampling operation, and [·] denotes feature concatenation.
Step 3: the terminal processes blood vessel images using the trained neural network model to obtain blood vessel image segmentation results.
Finally, mean Intersection over Union (MIoU) was used as the index to evaluate network performance. With all experimental parameters set identically, the MIoU of the prior-art U-Net segmentation network, which lacks the residual module and attention blocks, is about 1.6% lower than that of the neural network model of the present application, indicating that adding the residual module and attention blocks in the embodiment of the present application improves the image segmentation performance.
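As a reference for how this metric can be computed for binary vessel masks, the sketch below averages the IoU of the background and vessel classes; thresholding sigmoid outputs at 0.5 is an assumption.

```python
import torch

def mean_iou(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> float:
    """Mean Intersection over Union over the background (0) and vessel (1) classes."""
    pred = (pred > 0.5).long()   # binarize sigmoid outputs
    target = target.long()
    ious = []
    for cls in (0, 1):
        p, t = pred == cls, target == cls
        inter = (p & t).sum().float()
        union = (p | t).sum().float()
        ious.append((inter + eps) / (union + eps))
    return torch.stack(ious).mean().item()
```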
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the above teachings, and that all such modifications and variations are intended to be within the scope of the invention as defined in the appended claims.

Claims (2)

1. A method of vessel image segmentation, the method comprising:
s1, acquiring an original blood vessel image and preprocessing the original blood vessel image to obtain a training set image;
s2, inputting the preprocessed training set image into a neural network model for training;
the neural network model comprises an encoding module, a decoding module and a residual module, wherein the encoding module comprises four encoding layers, each encoding layer comprises a convolution layer and a max pooling layer, and the encoding module is used for performing a down-sampling operation on the input training set image to obtain a feature map;
the decoding module comprises four decoding layers, each decoding layer comprises an attention block and a deconvolution layer, the decoding module is used for performing an up-sampling operation on the feature map to obtain a segmented image, and the encoding module corresponds to the decoding module in structure;
the residual module comprises four up-sampling layers obtained by up-sampling the deepest feature map, produced by the down-sampling of the encoding module, four consecutive times;
wherein the input of the attention block in the decoding module specifically comprises a first input from the encoding module, a second input from the residual module, and a third input, the third input being the decoding output of the attention block in the previous decoding layer; the attention block superimposes the first input, the second input and the third input and activates them with an activation function, resamples the activated features, fuses the resampled features with the first input and the second input, and feeds the fused features into the deconvolution layer of the decoding module;
and S3, processing the blood vessel image by using the trained neural network model to obtain a blood vessel image segmentation result.
2. The method according to claim 1, wherein the resampled features are fused with the first input and the second input through a first formula, which specifically is:
[first formula: Figure FDA0003053450200000021]
wherein i denotes the i-th down-sampling layer in the encoding module, N denotes the total number of down-sampling layers in the encoding module, [Figure FDA0003053450200000022] denotes the 2nd down-sampling layer in the encoding module, C(·) denotes a convolution operation, D(·) denotes a down-sampling operation, U(·) denotes an up-sampling operation, and [·] denotes feature concatenation.
CN202110493742.2A 2021-05-07 2021-05-07 Blood vessel image segmentation method Pending CN113223002A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110493742.2A CN113223002A (en) 2021-05-07 2021-05-07 Blood vessel image segmentation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110493742.2A CN113223002A (en) 2021-05-07 2021-05-07 Blood vessel image segmentation method

Publications (1)

Publication Number Publication Date
CN113223002A true CN113223002A (en) 2021-08-06

Family

ID=77091271

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110493742.2A Pending CN113223002A (en) 2021-05-07 2021-05-07 Blood vessel image segmentation method

Country Status (1)

Country Link
CN (1) CN113223002A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114140543A (en) * 2021-11-30 2022-03-04 深圳万兴软件有限公司 Multichannel output method, system, computer equipment and storage medium based on U2net
CN115063504A (en) * 2022-08-05 2022-09-16 全景恒升(北京)科学技术有限公司 Atheromatous plaque identification method and device, computer equipment and storage medium
CN117041601A (en) * 2023-10-09 2023-11-10 海克斯康制造智能技术(青岛)有限公司 Image processing method based on ISP neural network model

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020093042A1 (en) * 2018-11-02 2020-05-07 Deep Lens, Inc. Neural networks for biomedical image analysis
CN111242949A (en) * 2020-01-02 2020-06-05 浙江工业大学 Fundus image blood vessel segmentation method based on full convolution neural network multi-scale features
CN111340814A (en) * 2020-03-03 2020-06-26 北京工业大学 Multi-mode adaptive convolution-based RGB-D image semantic segmentation method
CN111833352A (en) * 2020-06-28 2020-10-27 杭州电子科技大学 Image segmentation method for improving U-net network based on octave convolution
US20200364870A1 (en) * 2019-05-14 2020-11-19 University-Industry Cooperation Group Of Kyung Hee University Image segmentation method and apparatus, and computer program thereof
CN111986181A (en) * 2020-08-24 2020-11-24 中国科学院自动化研究所 Intravascular stent image segmentation method and system based on double-attention machine system
CN112102283A (en) * 2020-09-14 2020-12-18 北京航空航天大学 Retina fundus blood vessel segmentation method based on depth multi-scale attention convolution neural network
WO2020260936A1 (en) * 2019-06-25 2020-12-30 Inception Institute of Artificial Intelligence, Ltd. Medical image segmentation using an integrated edge guidance module and object segmentation network

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020093042A1 (en) * 2018-11-02 2020-05-07 Deep Lens, Inc. Neural networks for biomedical image analysis
US20200364870A1 (en) * 2019-05-14 2020-11-19 University-Industry Cooperation Group Of Kyung Hee University Image segmentation method and apparatus, and computer program thereof
WO2020260936A1 (en) * 2019-06-25 2020-12-30 Inception Institute of Artificial Intelligence, Ltd. Medical image segmentation using an integrated edge guidance module and object segmentation network
CN111242949A (en) * 2020-01-02 2020-06-05 浙江工业大学 Fundus image blood vessel segmentation method based on full convolution neural network multi-scale features
CN111340814A (en) * 2020-03-03 2020-06-26 北京工业大学 Multi-mode adaptive convolution-based RGB-D image semantic segmentation method
CN111833352A (en) * 2020-06-28 2020-10-27 杭州电子科技大学 Image segmentation method for improving U-net network based on octave convolution
CN111986181A (en) * 2020-08-24 2020-11-24 中国科学院自动化研究所 Intravascular stent image segmentation method and system based on double-attention machine system
CN112102283A (en) * 2020-09-14 2020-12-18 北京航空航天大学 Retina fundus blood vessel segmentation method based on depth multi-scale attention convolution neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LOGAN JIN: "3AU-Net: Triple Attention U-Net for Retinal Vessel Segmentation", IEEE *
李天培; 陈黎: "Retinal Vessel Segmentation Based on a Dual-Attention Encoder-Decoder Architecture" (基于双注意力编码-解码器架构的视网膜血管分割), Computer Science (计算机科学), no. 05 *
殷晓航: "A Survey of Medical Image Segmentation Techniques Based on Improved U-Net Structures" (基于U-Net结构改进的医学影像分割技术综述), Journal of Software (软件学报) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114140543A (en) * 2021-11-30 2022-03-04 深圳万兴软件有限公司 Multichannel output method, system, computer equipment and storage medium based on U2net
CN115063504A (en) * 2022-08-05 2022-09-16 全景恒升(北京)科学技术有限公司 Atheromatous plaque identification method and device, computer equipment and storage medium
CN115063504B (en) * 2022-08-05 2022-11-18 全景恒升(北京)科学技术有限公司 Atheromatous plaque identification method and device, computer equipment and storage medium
CN117041601A (en) * 2023-10-09 2023-11-10 海克斯康制造智能技术(青岛)有限公司 Image processing method based on ISP neural network model
CN117041601B (en) * 2023-10-09 2024-01-12 海克斯康制造智能技术(青岛)有限公司 Image processing method based on ISP neural network model

Similar Documents

Publication Publication Date Title
CN113223002A (en) Blood vessel image segmentation method
CN111145170B (en) Medical image segmentation method based on deep learning
CN111862056A (en) Retinal vessel image segmentation method based on deep learning
CN111292338B (en) Method and system for segmenting choroidal neovascularization from fundus OCT image
CN113793348B (en) Retinal blood vessel segmentation method and device
CN112132817A (en) Retina blood vessel segmentation method for fundus image based on mixed attention mechanism
CN112258488A (en) Medical image focus segmentation method
CN112614145B (en) Deep learning-based intracranial hemorrhage CT image segmentation method
CN108764342B (en) Semantic segmentation method for optic discs and optic cups in fundus image
CN112884788B (en) Cup optic disk segmentation method and imaging method based on rich context network
CN109919954B (en) Target object identification method and device
CN114881968A (en) OCTA image vessel segmentation method, device and medium based on deep convolutional neural network
CN116503422A (en) Eye cup video disc segmentation method based on attention mechanism and multi-scale feature fusion
CN116071549A (en) Multi-mode attention thinning and dividing method for retina capillary vessel
CN114187181B (en) Dual-path lung CT image super-resolution method based on residual information refining
CN116363060A (en) Mixed attention retinal vessel segmentation method based on residual U-shaped network
CN115578406A (en) CBCT jaw bone region segmentation method and system based on context fusion mechanism
CN114972365A (en) OCT image choroid segmentation model construction method combined with prior mask and application thereof
CN111489291A (en) Medical image super-resolution reconstruction method based on network cascade
Yu et al. MIA-UNet: Multi-Scale Iterative Aggregation U-Network for Retinal Vessel Segmentation.
CN109919098B (en) Target object identification method and device
CN116935045B (en) Retina blood vessel segmentation method and system based on mixed attention and multi-scale cascade
CN116385725A (en) Fundus image optic disk and optic cup segmentation method and device and electronic equipment
CN116883420A (en) Choroidal neovascularization segmentation method and system in optical coherence tomography image
CN116452571A (en) Image recognition method based on deep neural network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210806