CN113222975A - High-precision retinal vessel segmentation method based on improved U-net


Info

Publication number: CN113222975A
Application number: CN202110602901.8A
Authority: CN (China)
Prior art keywords: segmentation, convolution, image, medical image, layer
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN113222975B (en)
Inventors: 吴聪, 程禹清, 李纬, 李仕军, 刘肖, 龙成, 刘延龙
Current Assignee: Hubei University of Technology
Original Assignee: Hubei University of Technology
Application filed by Hubei University of Technology; priority to CN202110602901.8A (priority and filing date 2021-05-31); published as CN113222975A; granted and published as CN113222975B

Classifications

    • G06T 7/0012 Biomedical image inspection
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N 3/045 Combinations of networks
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G06T 7/11 Region-based segmentation
    • G06V 10/56 Extraction of image or video features relating to colour
    • G06T 2207/20081 Training; Learning
    • G06T 2207/30041 Eye; Retina; Ophthalmic


Abstract

The invention provides a high-precision retinal vessel segmentation method based on an improved U-net. The deep neural network model in the method, named DFUNET, contains a brand-new Dense Fusion Block (DFB), and part of the conventional convolutions are improved to deformable convolutions. The DFB comprises densely connected layers, Local Feature Fusion (LFF), and local residual learning; because the DFB can learn a residual representation between input and output, convergence is faster and classification accuracy improves. The deformable convolution simulates retinal blood vessels of different shapes and scales by learning adaptive receptive fields, improving the nonlinear expression of feature transmission. Verified on the DRIVE dataset, the segmentation accuracy of the method is 96.17%, higher than that of U-Net.

Description

High-precision retinal vessel segmentation method based on improved U-net
Technical Field
The invention belongs to the technical field of deep learning and provides a high-precision retinal vessel segmentation method based on an improved U-net.
Background
The advent of deep learning methods has provided a powerful tool for computer vision tasks, and these methods have outperformed others in many areas. Through convolutional and pooling layers, a network gains the ability to learn very complex feature representations. The originally proposed U-net handles image patches in an end-to-end fashion, with skip connections carrying the initial convolutional feature maps past the bottleneck layer to the upsampling layers, and it has therefore found wide application in medical image segmentation. Such skip connections are critical to the segmentation task because the initial feature maps preserve low-level features.
In practice, blood vessel segmentation can be regarded as an image translation task in which an output segmented vessel map is generated from an input fundus image. A cleaner, clearer vessel map can be obtained if the output is constrained to resemble the annotations of a human expert.
U-NET is a deep learning network model comprising an encoder and a decoder; the structure of the U-Net network resembles the capital letter U. A deep neural network is trained by loading training images and mask images, and the trained model is then applied to a case image to obtain a blood vessel segmentation map. However, because the U-NET has many layers, gradient vanishing and network degradation can arise during training. To address this, an effective solution, DFUNET, is proposed. On the basis of U-NET, DFUNET contains a newly designed Dense Fusion Block (DFB), and part of the conventional convolutions are improved to deformable convolutions. The DFB comprises densely connected layers, Local Feature Fusion (LFF), and local residual learning; the DFB can learn a residual representation between input and output, which speeds convergence and improves classification accuracy, while the deformable convolution simulates retinal vessels of different shapes and scales by learning an adaptive receptive field, improving the nonlinear expression of feature transmission.
We implemented this method using the PyTorch framework and evaluated it on the DRIVE dataset, achieving very high segmentation quality. A minimal sketch of the deformable convolution building block follows.
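For reference, deformable convolution is available off the shelf in torchvision. The sketch below is illustrative only, not the patent's own code: a 3 x 3 deformable convolution whose sampling offsets are predicted by an ordinary parallel convolution, so they are learned from the target task without extra supervision; the class name and channel handling are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableConvBlock(nn.Module):
    """3x3 deformable convolution; sampling offsets are produced by an
    ordinary convolution, so they are trained by back-propagation with
    no extra supervision."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # 2 offsets (x, y) per kernel position: 2 * 3 * 3 = 18 channels
        self.offset = nn.Conv2d(in_ch, 18, kernel_size=3, padding=1)
        self.deform = DeformConv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.deform(x, self.offset(x))
```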
Disclosure of Invention
The invention discloses DFUNET, a high-precision retinal vessel segmentation method based on an improved U-net, which is of significant value to doctors diagnosing fundus diseases. The method builds on the U-Net model with a newly proposed DFB and a deformable convolution module, and the DFUNET segmentation quality is markedly better than that of U-Net. The feasibility of the method was verified on the DRIVE dataset, with an accuracy of 96.17%. Analysis and comparison of the segmented retinal blood vessel images show that the method is superior to the other methods considered. It can provide doctors with rich eye disease information and help patients receive timely treatment.
The technical scheme adopted by the invention is as follows:
a high-precision retinal vessel segmentation method based on improved U-net is characterized by comprising the following steps:
step 1: preparing data;
performing a medical scan of the medical image segmentation target region to obtain a color medical image; preprocessing the obtained image samples by extracting the G channel from the three RGB channels (using the OpenCV function cvtColor in Python) to obtain a grayscale image; having a doctor manually segment each image based on experience and manually produce the corresponding segmentation label image; and using this image set for model training;
step 2: data enhancement processing;
rotating, translating, and scaling the grayscale image sample P1 obtained after preprocessing and G-channel extraction together with the corresponding manual segmentation label image P2; applying this data enhancement to enlarge the sample set, yielding bitmap samples P1-1 and P2-1 for the medical image samples; taking P1-1 as the image samples and P2-1 as the image segmentation labels; and dividing all samples into a training set and a validation set in a preset proportion;
and step 3: a medical image segmentation network is defined, and comprises a medical image segmentation framework U-Net, wherein the U-Net is a network architecture of a coder decoder. Replacing double-conv modules in a U-Net encoder and a decoder with DFB (Dense Fusion Block), wherein the input of a second layer of convolution layer is cascaded by the convolution input of a first layer of convolution output and a first layer of convolution input, the cascade operation is carried out on the second layer of convolution output, the first layer of convolution output and the first layer of convolution input again, the two layers of convolution adopt convolution kernels with the size of 3 x 3, meanwhile, dimension reduction is carried out by introducing the convolution kernels with the size of 1 x 1 behind, feature Fusion is carried out on the output and the first layer of input, and meanwhile, deformable convolution is carried out, namely De-DFB, the deformable convolution passes through spatial sampling positions with extra offset added in the modules and learning offset from a target task without extra supervision, and the De-DFB is called
Figure BDA0003093531430000021
The method is characterized in that the method replaces the common module of the existing CNN and utilizes back propagation to train end to generate a deformable convolutional neural network, the improved model is DFUNET, on the basis of U-NET, double-conv is replaced by DFB module, the double-conv after bottom pooling is replaced by De-DFB module, DFB is improved by rest and den, DFB comprises dense continuous selected layer, Local Feature Fusion (LFF) and local residual learning, DFB can learn residual representation between input and output, convergence speed is higher, classification precision is improved, and deformable convolution simulates retinal blood with different shapes and scales by learning adaptive receptive fieldsA tube to promote a non-linear representation of the feature transfer.
step 4: training the medical image segmentation network;
training the medical image segmentation network with the training set, and tuning the network parameters with the validation set to obtain a group of optimal model parameters;
and 5: and carrying out final network test by using the test set to obtain the final segmentation accuracy of the network.
In the high-precision retinal vessel segmentation method based on the improved U-net, in step 2 the random interval for rotation is 0-10 degrees and the random interval for translation and scaling is 0-10%; finally, the enhanced bitmap samples are divided into a training set, a validation set, and a test set at a ratio of 4:1:1.
In the high-precision retinal vessel segmentation method based on the improved U-net, the previously divided training set is used for retinal vessel segmentation and the validation set is used to tune the network parameters. In step 4, the divided training set trains the medical image segmentation network; the internal parameters of the network are updated automatically and iteratively by an Adam optimizer using a back-propagation strategy, each batch fed to the network contains 4 training samples, the number of training iterations is 2000, and the optimal learning rate is 0.0001.
In the above high-precision retinal vessel segmentation method based on the improved U-net, in step 5 the final system test is performed on color fundus retina images, and the system adopts the ACC (accuracy) verification method:

ACC = (TP + TN) / (TP + TN + FP + FN)

where a true positive (TP) is a correctly segmented blood vessel point, a false positive (FP) is a wrongly segmented blood vessel point, a true negative (TN) is a correctly segmented background point, and a false negative (FN) is a wrongly segmented background point; this gives the final system output result. The validation set is used to verify the experimental results, which show that the retinal vessel segmentation accuracy reaches 96.17%, very close to the result of manual segmentation.
An apparatus for segmenting a medical image, the apparatus comprising:
the input and output module is used for acquiring a plurality of images to be segmented and determining target segmentation areas of the plurality of medical images;
the system comprises a processing module, a model training module and a data processing module, wherein the processing module is used for performing medical scanning on a medical image segmentation target region to obtain a color medical image, preprocessing an obtained image sample, extracting a G channel image in RGB three channels by a python built-in library function cvtColor method to obtain a gray image, performing manual image segmentation on the image by a doctor based on experience, manually making a corresponding segmentation label image, and using the image set as model training;
rotating, translating, and scaling the grayscale image sample P1 obtained after preprocessing and G-channel extraction together with the corresponding manual segmentation label image P2; applying this data enhancement to enlarge the sample set, yielding bitmap samples P1-1 and P2-1 for the medical image samples; taking P1-1 as the image samples and P2-1 as the image segmentation labels; and dividing all samples into a training set and a validation set in a preset proportion;
generating a medical image segmentation model, and inputting the training set of bitmap samples into the medical image segmentation model through the input and output module so as to train the medical image segmentation model; tuning the model parameters of the medical image segmentation model with the validation set of the bitmap samples to obtain a group of optimal model parameters for the medical image model; and inputting the training set of the bitmap samples into the medical image segmentation model through the input and output module so as to perform a performance test with the validation set of the medical image samples and obtain the optimal segmentation accuracy of the medical image segmentation model.
In the above apparatus, the newly proposed medical image segmentation model DFUNET comprises the medical image segmentation framework U-Net. In this embodiment, the double-conv modules in the U-Net encoder and decoder are replaced by the DFB (Dense Fusion Block): the input of the second convolution layer is the concatenation of the first layer's output and the first layer's input, and the second layer's output is concatenated again with the first layer's output and input; both layers use 3 x 3 convolution kernels, with a 1 x 1 convolution kernel introduced afterwards for dimensionality reduction and feature fusion of the output with the first layer's input. Part of the conventional convolution operation is improved to deformable convolution, as shown in fig. 3; the improved module with deformable convolution is called De-DFB, as shown in fig. 4. The proposed improved model, which we call DFUNET, replaces double-conv with the DFB module on the basis of U-NET and replaces the double-conv after the bottom pooling operation with the De-DFB module, as shown in FIG. 5.
In the above apparatus, the processing module is specifically configured to:
inputting the training set of each bitmap sample into the medical image segmentation model in batches through the input and output module; the internal parameters of the network are updated automatically and iteratively by an Adam optimizer using a back-propagation strategy, each batch fed to the network contains 4 training samples, the number of training iterations is 2000, and the optimal learning rate is 0.0001.
A computer device, the device comprising:
at least one processor, memory, and transceiver;
wherein the memory is configured to store program code and the processor is configured to invoke the program code stored in the memory to perform the method of any of claims 1-5.
A computer storage medium comprising instructions which, when executed on a computer, cause the computer to perform the method of any one of claims 1-5.
Compared with the prior art, the invention has the following beneficial effects: the feature extraction layers of the deep learning model effectively extract the required features, and the target to be segmented can be segmented well from the medical image, which is of great significance for doctors diagnosing diseases.
Drawings
FIG. 1 is a system flow diagram of the present invention.
FIG. 2 is a schematic diagram of the Dense Fusion Block (DFB) of the present invention.
FIG. 3 is a schematic diagram of the deformable convolution of the present invention.
FIG. 4 is a schematic diagram of the modified DFB (De-DFB) of the present invention.
FIG. 5 is a schematic diagram of the DFUNET network model of the present invention.
FIG. 6a is RESNET schematic diagram 1 of the present invention.
FIG. 6b is RESNET schematic diagram 2 of the present invention.
FIG. 6c is a DENSENET schematic diagram of the present invention.
Detailed Description
In order to facilitate understanding and implementation of the present invention by those of ordinary skill in the art, the present invention is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the embodiments described herein are merely illustrative and explanatory of the present invention and are not restrictive of it.
Referring to fig. 1, the high-precision retinal vessel segmentation method based on the improved U-net provided by the invention includes the following steps:
step 1: preparing data;
performing a medical scan of the medical image segmentation target region to obtain a color medical image; preprocessing the obtained image samples by extracting the G channel from the three RGB channels (using the OpenCV function cvtColor in Python) to obtain a grayscale image; having a doctor manually segment each image based on experience and manually produce the corresponding segmentation label image; and using this image set for model training (a minimal preprocessing sketch follows);
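For illustration only (the patent publishes no source code and the file names here are hypothetical), a minimal Python sketch of the G-channel extraction is given below. The text cites cvtColor; directly indexing the channel axis, shown here as an assumption about the intended operation, is a common way to obtain the G-channel grayscale image.

```python
import cv2

bgr = cv2.imread("fundus.png")        # OpenCV loads color images as BGR
green = bgr[:, :, 1]                  # index 1 is the G channel -> grayscale image
cv2.imwrite("fundus_green.png", green)
```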
step 2: data enhancement processing;
rotating, translating, and scaling the grayscale image sample P1 obtained after preprocessing and G-channel extraction together with the corresponding manual segmentation label image P2; applying this data enhancement to enlarge the sample set, yielding bitmap samples P1-1 and P2-1 for the medical image samples; taking P1-1 as the image samples and P2-1 as the image segmentation labels; and dividing all samples into a training set and a validation set in a preset proportion;
data enhancement refers to a method of increasing the amount of data by expanding the original data through a series of random transformations. In combination with practical situations, the system performs data enhancement by rotation, translation and scaling, wherein the random interval range of translation and scaling is 0-10%, and the random interval range of rotation is 0-10%. And finally, dividing the enhanced bitmap sample into a training set, a verification set and a test set according to the ratio of 4:1: 1.
step 3: designing the medical image segmentation network;
the medical image segmentation network of the embodiment is named as DFUNET, and comprises a medical image segmentation framework U-Net, and the U-Net is a network architecture of a coder decoder. In this embodiment, the input of the second layer convolution layer is cascaded with the first layer convolution output and the first layer convolution input, the second layer convolution output and the first layer convolution input are cascaded again, the two-layer convolution adopts a convolution kernel with the size of 3 × 3, meanwhile, a convolution kernel with the size of 1 × 1 is introduced at the rear for dimension reduction, and the output and the first layer input are subjected to feature Fusion, which not only maintains the forward property, but also extracts the local dense features, as shown in fig. 2, the operation not only maintains the forward property, but also extracts the local dense features, can alleviate the gradient disappearance problem, strengthen the feature propagation, greatly reduce the parameter quantity, and simultaneously improve part of the conventional convolution operation into deformable convolution, deformable convolution passes spatial sampling positions in the module with additional offsets and learns offsets from the target task without additional supervision
Figure BDA0003093531430000061
The normal modules of the existing CNN can be easily replaced and end-to-end training can be performed using back propagation to produce a deformable convolutional neural network, as shown in fig. 3, and the improved modules improve the conventional convolution thereof into a deformable convolution, called De-DFB, as shown in fig. 4. The proposed improved model, which we call DFUNET, replaces double-conv with DFB module on the basis of U-NET and replaces De-DFB module with double-conv after bottom pooling operation, as shown in FIG. 5. DFB is improved over resnet and densenet, and combines the characteristics of resnet and densenet as shown in fig. 6 a-6 c, DFB includes dense connected layer, Local Feature Fusion (LFF) and local residual learning, DFB can learn residual representation between input and output, can make convergence speed faster and improve classification accuracy, and deformable convolution (fig. 3) simulates retinal blood vessels of different shapes and scales by learning adaptive receptive field to improve nonlinear expression of feature transmission.
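As a rough PyTorch sketch only, not the patent's own code, the block below follows the DFB wiring described above: two densely connected 3 x 3 convolutions, a 1 x 1 fusion convolution, and a local residual connection. The batch normalization, ReLU activations, and the 1 x 1 skip projection are assumptions added to make the sketch run.

```python
import torch
import torch.nn as nn

class DFB(nn.Module):
    """Dense Fusion Block sketch: dense connections, local feature fusion
    via a 1x1 convolution, and local residual learning."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv1 = nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                                   nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        # second conv sees [conv1 output, block input] concatenated
        self.conv2 = nn.Sequential(nn.Conv2d(out_ch + in_ch, out_ch, 3, padding=1),
                                   nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        # 1x1 convolution fuses [conv2 out, conv1 out, input] and reduces dims
        self.fuse = nn.Conv2d(out_ch + out_ch + in_ch, out_ch, 1)
        # project the input so the local residual addition matches channels
        self.skip = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f1 = self.conv1(x)
        f2 = self.conv2(torch.cat([f1, x], dim=1))
        fused = self.fuse(torch.cat([f2, f1, x], dim=1))
        return fused + self.skip(x)          # local residual learning
```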
step 4: training the medical image segmentation network;
using the previously divided training set to train the retinal vessel segmentation network shown in fig. 5, and using the validation set to tune the network parameters;
In this embodiment, the retinal vessel segmentation network is trained with the divided training set; the internal parameters of the network are updated automatically and iteratively by an Adam optimizer using a back-propagation strategy, each batch fed to the network contains 4 training samples, the number of training iterations is 2000, and the optimal learning rate is 0.0001. A hedged training-loop sketch follows.
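A minimal training-loop sketch using the hyperparameters stated above (Adam, learning rate 0.0001, batch size 4, 2000 iterations); the model, the dataset object, the device, and the binary cross-entropy loss are illustrative assumptions, not details from the patent.

```python
import torch
from torch.utils.data import DataLoader

def train(model, dataset, iterations=2000, lr=1e-4, batch_size=4, device="cuda"):
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.BCEWithLogitsLoss()   # binary vessel/background labels
    model.to(device).train()
    step = 0
    while step < iterations:
        for image, label in loader:
            image, label = image.to(device), label.to(device)
            optimizer.zero_grad()
            loss = criterion(model(image), label.float())
            loss.backward()                    # back-propagation
            optimizer.step()                   # Adam parameter update
            step += 1
            if step >= iterations:
                return model
    return model
```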
And 5: and carrying out final network test by using the test set to obtain the final segmentation accuracy of the network.
The system adopts the ACC (accuracy) verification method:

ACC = (TP + TN) / (TP + TN + FP + FN)

where a true positive (TP) is a correctly segmented blood vessel point, a false positive (FP) is a wrongly segmented blood vessel point, a true negative (TN) is a correctly segmented background point, and a false negative (FN) is a wrongly segmented background point; this gives the final system output result. The validation set is used to verify the experimental results, which show that the retinal vessel segmentation accuracy reaches 96.17%, very close to the result of manual segmentation. The method can provide doctors with rich eye disease information and help patients receive timely treatment. A short sketch of the ACC computation follows.
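A small sketch of the pixel-wise ACC computation defined above, assuming binary prediction and ground-truth tensors of the same shape (an illustrative helper, not code from the patent).

```python
import torch

def accuracy(pred: torch.Tensor, gt: torch.Tensor) -> float:
    """ACC = (TP + TN) / (TP + TN + FP + FN) over all pixels."""
    pred, gt = pred.bool(), gt.bool()
    tp = (pred & gt).sum().item()        # vessel pixels segmented correctly
    tn = (~pred & ~gt).sum().item()      # background pixels segmented correctly
    fp = (pred & ~gt).sum().item()       # background wrongly marked as vessel
    fn = (~pred & gt).sum().item()       # vessel wrongly marked as background
    return (tp + tn) / (tp + tn + fp + fn)
```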
It should be understood that parts of the specification not set forth in detail are well within the prior art.
It should be understood that the above description of the preferred embodiments is given for clarity and not for any purpose of limitation, and that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (9)

1. A high-precision retinal vessel segmentation method based on improved U-net is characterized by comprising the following steps:
step 1: preparing data;
performing a medical scan of the medical image segmentation target region to obtain a color medical image; preprocessing the obtained image samples by extracting the G channel from the three RGB channels (using the OpenCV function cvtColor in Python) to obtain a grayscale image; having a doctor manually segment each image based on experience and manually produce the corresponding segmentation label image; and using this image set for model training;
step 2: data enhancement processing;
rotating, translating, and scaling the grayscale image sample P1 obtained after preprocessing and G-channel extraction together with the corresponding manual segmentation label image P2; applying this data enhancement to enlarge the sample set, yielding bitmap samples P1-1 and P2-1 for the medical image samples; taking P1-1 as the image samples and P2-1 as the image segmentation labels; and dividing all samples into a training set and a validation set in a preset proportion;
step 3: defining a medical image segmentation network, wherein the medical image segmentation network comprises the medical image segmentation framework U-Net, and U-Net is an encoder-decoder network architecture; replacing the double-conv modules in the U-Net encoder and decoder with the DFB (Dense Fusion Block), wherein the input of the second convolution layer is the concatenation of the first layer's output and the first layer's input, the second layer's output is concatenated again with the first layer's output and the first layer's input, both layers use 3 x 3 convolution kernels, a 1 x 1 convolution kernel is introduced afterwards for dimensionality reduction, and the output is feature-fused with the first layer's input; introducing at the same time a deformable variant, De-DFB, in which the spatial sampling locations in the module are augmented with additional offsets learned from the target task without extra supervision; replacing the ordinary module of an existing CNN in this way and training end to end with back-propagation to generate a deformable convolutional neural network, the improved model being DFUNET: on the basis of U-NET, double-conv is replaced by the DFB module and the double-conv after the bottom pooling is replaced by the De-DFB module; the DFB improves on ResNet and DenseNet and comprises densely connected layers, Local Feature Fusion (LFF), and local residual learning; the DFB can learn a residual representation between input and output, so that convergence is faster and classification accuracy improves, and the deformable convolution simulates retinal blood vessels of different shapes and scales by learning adaptive receptive fields to improve the nonlinear expression of feature transmission;
step 4: training the medical image segmentation network;
training the medical image segmentation network with the training set, and tuning the network parameters with the validation set to obtain a group of optimal model parameters;
and 5: and carrying out final network test by using the test set to obtain the final segmentation accuracy of the network.
2. The improved U-net based high-precision retinal vessel segmentation method of claim 1, wherein: in step 2, the random interval for rotation is 0-10 degrees and the random interval for translation and scaling is 0-10%; finally, the enhanced bitmap samples are divided into a training set, a validation set, and a test set at a ratio of 4:1:1.
3. The improved U-net based high-precision retinal vessel segmentation method of claim 1, wherein: the previously divided training set is used for retinal vessel segmentation and the validation set is used to tune the network parameters; in step 4, the divided training set trains the medical image segmentation network, the internal parameters of the network are updated automatically and iteratively by an Adam optimizer using a back-propagation strategy, each batch fed to the network contains 4 training samples, the number of training iterations is 2000, and the optimal learning rate is 0.0001.
4. The improved U-net based high-precision retinal vessel segmentation method according to any one of claims 1-3, wherein: in step 5, the final system test is performed on color fundus retina images, and the system adopts the ACC verification method:

ACC = (TP + TN) / (TP + TN + FP + FN)

where a true positive (TP) is a correctly segmented blood vessel point, a false positive (FP) is a wrongly segmented blood vessel point, a true negative (TN) is a correctly segmented background point, and a false negative (FN) is a wrongly segmented background point; this gives the final system output result; the validation set is used to verify the experimental results, which show that the retinal vessel segmentation accuracy reaches 96.17%, very close to the result of manual segmentation.
5. An apparatus for segmenting a medical image, the apparatus comprising:
the input and output module is used for acquiring a plurality of images to be segmented and determining target segmentation areas of the plurality of medical images;
the system comprises a processing module, a model training module, and a data processing module, wherein the processing module is used for performing a medical scan of the medical image segmentation target region to obtain a color medical image, preprocessing the obtained image samples by extracting the G channel from the three RGB channels (using the OpenCV function cvtColor in Python) to obtain a grayscale image, having a doctor manually segment each image based on experience and manually produce the corresponding segmentation label image, and using this image set for model training;
rotating, translating, and scaling the grayscale image sample P1 obtained after preprocessing and G-channel extraction together with the corresponding manual segmentation label image P2; applying this data enhancement to enlarge the sample set, yielding bitmap samples P1-1 and P2-1 for the medical image samples; taking P1-1 as the image samples and P2-1 as the image segmentation labels; and dividing all samples into a training set and a validation set in a preset proportion;
generating a medical image segmentation model, and inputting the training set of bitmap samples into the medical image segmentation model through the input and output module so as to train the medical image segmentation model; tuning the model parameters of the medical image segmentation model with the validation set of the bitmap samples to obtain a group of optimal model parameters for the medical image model; and inputting the training set of the bitmap samples into the medical image segmentation model through the input and output module so as to perform a performance test with the validation set of the medical image samples and obtain the optimal segmentation accuracy of the medical image segmentation model.
6. The apparatus of claim 5, wherein the newly proposed medical image segmentation model DFUNET comprises the medical image segmentation framework U-Net; the double-conv modules in the U-Net encoder and decoder are replaced by the DFB (Dense Fusion Block), wherein the input of the second convolution layer is the concatenation of the first layer's output and the first layer's input, the second layer's output is concatenated again with the first layer's output and the first layer's input, both layers use 3 x 3 convolution kernels, and a 1 x 1 convolution kernel is introduced afterwards for dimensionality reduction and feature fusion of the output with the first layer's input; part of the conventional convolution operation is improved to deformable convolution, and the improved module with deformable convolution is called De-DFB; the proposed improved model, called DFUNET, replaces double-conv with the DFB module on the basis of U-NET and replaces the double-conv after the bottom pooling operation with the De-DFB module.
7. The apparatus of claim 6, wherein the processing module is specifically configured to:
inputting the training set of each bitmap sample into the medical image segmentation model in batches through the input and output module; the internal parameters of the network are updated automatically and iteratively by an Adam optimizer using a back-propagation strategy, each batch fed to the network contains 4 training samples, the number of training iterations is 2000, and the optimal learning rate is 0.0001.
8. A computer device, the device comprising:
at least one processor, memory, and transceiver;
wherein the memory is configured to store program code and the processor is configured to invoke the program code stored in the memory to perform the method of any of claims 1-5.
9. A computer storage medium comprising instructions which, when executed on a computer, cause the computer to perform the method of any one of claims 1-5.
CN202110602901.8A 2021-05-31 2021-05-31 High-precision retinal vessel segmentation method based on improved U-net Active CN113222975B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110602901.8A CN113222975B (en) 2021-05-31 2021-05-31 High-precision retinal vessel segmentation method based on improved U-net


Publications (2)

Publication Number Publication Date
CN113222975A true CN113222975A (en) 2021-08-06
CN113222975B CN113222975B (en) 2023-04-07

Family

ID=77081786

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110602901.8A Active CN113222975B (en) 2021-05-31 2021-05-31 High-precision retinal vessel segmentation method based on improved U-net

Country Status (1)

Country Link
CN (1) CN113222975B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101118593A (en) * 2007-09-04 2008-02-06 西安电子科技大学 Texture image classification method based on SWBCT
US20160004298A1 (en) * 2008-04-07 2016-01-07 Mohammad A. Mazed Chemical Compositon And Its Devlivery For Lowering The Risks Of Alzheimer's Cardiovascular And Type -2 Diabetes Diseases
ITTO20110169A1 (en) * 2011-02-28 2012-08-29 Consiglio Nazionale Ricerche POSITIVE ALLGLASS MODULATORS OF MGLUR5 FOR USE AS MEDICATION IN THE THERAPEUTIC TREATMENT OF THE PHELAN-MCDERMID SYNDROME
CN107977926A (en) * 2017-12-01 2018-05-01 新乡医学院 A kind of different machine brain phantom information fusion methods of PET/MRI for improving neutral net
CN109447948A (en) * 2018-09-28 2019-03-08 上海理工大学 A kind of optic disk dividing method based on lesion colour retinal fundus images
CN109685813A (en) * 2018-12-27 2019-04-26 江西理工大学 A kind of U-shaped Segmentation Method of Retinal Blood Vessels of adaptive scale information
WO2021043980A1 (en) * 2019-09-06 2021-03-11 Carl Zeiss Meditec, Inc. Machine learning methods for creating structure-derived visual field priors

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115661144A (en) * 2022-12-15 2023-01-31 湖南工商大学 Self-adaptive medical image segmentation method based on deformable U-Net
CN117274278A (en) * 2023-09-28 2023-12-22 武汉大学人民医院(湖北省人民医院) Retina image focus part segmentation method and system based on simulated receptive field
CN117274278B (en) * 2023-09-28 2024-04-02 武汉大学人民医院(湖北省人民医院) Retina image focus part segmentation method and system based on simulated receptive field

Also Published As

Publication number Publication date
CN113222975B (en) 2023-04-07


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant