CN117058676B - Blood vessel segmentation method, device and system based on fundus examination image - Google Patents

Blood vessel segmentation method, device and system based on fundus examination image

Info

Publication number
CN117058676B
CN117058676B (application CN202311319781.6A)
Authority
CN
China
Prior art keywords
fundus
image
blood vessel
network
image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311319781.6A
Other languages
Chinese (zh)
Other versions
CN117058676A (en)
Inventor
严棽棽 (Yan Chenchen)
周海英 (Zhou Haiying)
纪海霞 (Ji Haixia)
余海澄 (Yu Haicheng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Tongren Hospital
Original Assignee
Beijing Tongren Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Tongren Hospital filed Critical Beijing Tongren Hospital
Priority to CN202311319781.6A priority Critical patent/CN117058676B/en
Publication of CN117058676A publication Critical patent/CN117058676A/en
Application granted granted Critical
Publication of CN117058676B publication Critical patent/CN117058676B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/70Labelling scene content, e.g. deriving syntactic or semantic representations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/0985Hyperparameter optimisation; Meta-learning; Learning-to-learn
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/54Extraction of image or video features relating to texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/14Vascular patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Vascular Medicine (AREA)
  • Human Computer Interaction (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a blood vessel segmentation method, device and system based on fundus examination images. By combining computer vision, image processing, neural network algorithms and medical image analysis, they automatically segment small retinal blood vessels and capillaries from fundus photographs or OCT images, improving the efficiency and accuracy of vessel segmentation, reducing the workload of doctors, and reducing the risk of human error. The method, device and system take a retinal vessel segmentation scheme built on the U-Net improved network model as the core, combine it with conventional fundus examination image data, and exploit the speed of computation to rapidly segment small retinal vessels and capillaries.

Description

Blood vessel segmentation method, device and system based on fundus examination image
Technical Field
The invention relates to the field of machine vision and machine learning algorithms, in particular to a blood vessel segmentation method, device and system based on fundus examination images and a computer readable storage medium.
Background
Currently, the global prevalence of myopia is as high as 28.3%, and the prevalence of high myopia is projected to rise from the current 4.0% to 9.8%. High myopia is divided into simple high myopia and pathological myopia. Pathological myopia refers to a spherical equivalent (SE) of -6.00 D or less and/or an axial length greater than 26.5 mm, with continuously increasing myopic power, accompanied by fundus lesions and other blinding eye diseases that cause visual impairment; best corrected visual acuity (BCVA) is often below normal. Retinopathy caused by pathological myopia has become a major cause of irreversible blinding eye disease.
With advances in medical technology, automated and computer-aided diagnosis methods are gradually being applied in ophthalmology. In the prior art, when an automated system is used to identify pathological myopia, a color fundus photograph or optical coherence tomography (OCT) scan is usually acquired, small retinal blood vessels and capillaries are segmented from the fundus photograph or OCT image, and whether the peripheral fundus is diseased is determined from changes in these vessels together with factors such as axial length and diopter. However, current automated systems cannot accurately segment small retinal vessels and capillaries in fundus photographs or OCT images, and vessel segmentation requires an experienced doctor to participate throughout, which consumes substantial labor cost.
Against this background, the invention provides a blood vessel segmentation method, device and system based on fundus examination images, which automatically segment small retinal vessels and capillaries from fundus photographs or OCT images, improve the efficiency and accuracy of vessel segmentation, reduce the workload of doctors, and reduce the risk of human error.
Disclosure of Invention
The main object of the present invention is to provide a blood vessel segmentation method based on fundus examination image, comprising:
step one, collecting fundus examination image data through a data input interface;
step two, preprocessing the fundus examination image data, wherein the preprocessing comprises denoising, contrast enhancement and color standardization, so as to reduce noise and improve image quality;
step three, obtaining the optimal registration of the detection target based on the spatial position at which mutual information is maximal; when the same target in different images is correctly aligned in space the correspondence is strongest, and the mutual information of the corresponding pixel gray levels reaches its maximum (a code sketch of this criterion follows the step list below);
step four, extracting blood vessel features in the registered image data by using a preset image processing method, wherein the blood vessel features comprise edges, textures, colors and intensity gradients; the preset image processing method can be any common prior-art picture-processing method;
step five, model training for retinal blood vessel segmentation: selecting part of the data from the blood vessel features of step four as a training set and a testing set, establishing a neural network model based on machine learning, training the neural network model with the training set, testing the trained neural network model with the testing set, and finally outputting the blood vessel segmentation result with the network model;
wherein the network model is built based on a U-Net improved network comprising two parts, a contraction network and an expansion network; the contraction network comprises a shortcut layer, and L(X, Y) is used as the loss function of the model:

L(X, Y) = (1/N) · Σ_{i=1}^{N} (1 − S(X_i, Y_i))

wherein X represents the predicted value of a division point, Y represents the reference value of the division point, N represents the number of inputs, and S(X, Y) represents the degree of similarity between prediction and reference.
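As referenced in step three, the following is a minimal sketch of the mutual-information registration criterion. The patent does not specify the transform family or search strategy, so the joint-histogram estimator and the exhaustive integer-translation search below are illustrative assumptions.

import numpy as np

def mutual_information(img_a, img_b, bins=64):
    # Joint histogram of corresponding pixel gray levels
    hist_2d, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()            # joint probability
    px = pxy.sum(axis=1)                     # marginal of img_a
    py = pxy.sum(axis=0)                     # marginal of img_b
    nz = pxy > 0                             # avoid log(0)
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))

def register_by_mutual_information(fixed, moving, max_shift=10):
    # The alignment with maximal mutual information is taken as optimal
    best_shift, best_mi = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            mi = mutual_information(fixed, shifted)
            if mi > best_mi:
                best_mi, best_shift = mi, (dy, dx)
    return best_shift, best_mi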
Preferably, the fundus examination image data comprise a color fundus photograph and an Optical Coherence Tomography (OCT) image.
Preferably, the image preprocessing step further includes an image enhancement step for enhancing the contrast and sharpness of the fundus examination image data.
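For illustration, a minimal preprocessing sketch follows. The invention names denoising, contrast enhancement and color standardization but does not fix the operators, so the Gaussian denoising, CLAHE on the lightness channel and per-channel z-score normalization used here are assumptions.

import cv2
import numpy as np

def preprocess_fundus(bgr_image):
    # Denoising: a mild Gaussian blur suppresses sensor noise
    denoised = cv2.GaussianBlur(bgr_image, (3, 3), 0)
    # Contrast enhancement: CLAHE applied to the lightness channel
    lab = cv2.cvtColor(denoised, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)
    # Color standardization: per-channel zero mean and unit variance
    img = enhanced.astype(np.float32)
    mean, std = img.mean(axis=(0, 1)), img.std(axis=(0, 1)) + 1e-8
    return (img - mean) / std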
Preferably, step four performs feature extraction of edges, textures, colors and intensity gradients on the raw fundus examination image data.
Preferably, in step five, an Adam optimization algorithm is adopted to adjust the model parameters, and a residual network is used to increase the network depth.
Preferably, the method further comprises a step six of updating and optimizing the trained model according to new eye image data.
The invention also provides a blood vessel segmentation device based on fundus examination images, which comprises:
the data input interface is used for acquiring fundus examination image data;
the image preprocessing module is used for preprocessing the fundus examination image data, wherein the preprocessing of the fundus examination image data comprises denoising, contrast enhancement and color standardization so as to reduce noise and improve image quality;
the multi-mode image registration module is used for finding the optimal registration at the spatial position where mutual information is maximal, based on the principle that the same target in different images corresponds most strongly when aligned in space, with the mutual information of the corresponding pixel gray levels reaching its maximum;
the feature extraction module is used for extracting blood vessel features by using an image processing technology, wherein the blood vessel features comprise edges, textures, colors and intensity gradients;
the method comprises the steps of a U-Net improved network model module, model training of retina blood vessel segmentation and data, selecting partial data from blood vessel characteristics as a training set and a testing set, establishing a machine learning neural network model, training the neural network model by using the training set, testing the trained neural network model by using the testing set, and finally outputting a blood vessel segmentation result by using the network model.
Preferably, the fundus examination image data comprise a color fundus photograph and an Optical Coherence Tomography (OCT) image.
Preferably, the image preprocessing module further comprises an image enhancement unit for enhancing the contrast and sharpness of the fundus examination image data.
Preferably, the feature extraction module is further used for extracting edge, texture, color and intensity-gradient features from the raw fundus examination image data.
Preferably, the U-Net improved network model module adopts an Adam optimization algorithm to adjust the model parameters and a residual network to increase the network depth.
Preferably, the device further comprises a model updating module for updating and optimizing the trained model according to the new eye image data.
The invention also provides a blood vessel segmentation system based on the fundus examination image, which comprises fundus examination equipment, an optical coherence tomography scanner and data processing equipment; wherein the data processing apparatus comprises a data input interface, a processor, a memory storing a computer program executable on the processor, and input and output means;
the fundus examination apparatus and the optical coherence tomography scanner are configured to acquire fundus images and OCT images and to transmit them, through the data input interface of the data processing apparatus, to the data processing apparatus for processing; the processor of the data processing apparatus is configured to execute the computer program to implement the steps described in the above technical solution, thereby recognizing the fundus images and OCT images to obtain and output a recognition result.
Compared with the prior art, the invention has at least the following beneficial effects:
1. the trained model can be updated and optimized with new eye image data, so the network model is continuously refined and the accuracy of blood vessel segmentation improves;
2. different types of eye data and clinical image data can be processed, giving good data compatibility;
3. the method is suitable for segmenting small retinal vessels and capillaries from fundus images and OCT images, improving segmentation accuracy and efficiency, reducing the workload of doctors, and lowering labor cost and manual segmentation error.
The blood vessel segmentation method, device and system based on fundus examination images provided by the invention are innovative and practical in model construction, segmentation-result output and other respects. They can accurately segment small retinal vessels and capillaries from fundus examination images, provide a data basis for subsequent eye examinations that combine factors such as axial length and diopter, and offer a more reliable and efficient tool for accurate segmentation of small retinal vessels and capillaries.
Drawings
FIG. 1 is a convolutional neural network algorithm framework;
FIG. 2 is the U-Net network model architecture;
FIG. 3 is the U-Net improved network model architecture;
FIG. 4 compares the relation between the loss function and the number of iterations during the training of the 3 models;
FIG. 5 compares the retina image, the probability prediction map and the binary prediction map;
FIG. 6 is the ROC curve on the training test set;
FIG. 7 is the PR curve on the training test set;
FIG. 8 is a flowchart of an implementation of the blood vessel segmentation method based on fundus examination images.
Detailed Description
Before describing the embodiments, related terms will be explained first.
Fundus image: used for viewing abnormalities of the optic disc, macula, or peripheral retinal structures. A common fundus camera captures 30-45 degrees of the fundus, whereas an ultra-wide-angle fundus photography system can acquire a 200-degree fundus image in a single shot, making it easier to find peripheral fundus lesions.
Optical Coherence Tomography (OCT): OCT can clearly display the layered structures of the retina and show lesions such as posterior vitreous detachment, retinoschisis and macular schisis, macular hole, epiretinal membrane, choroidal neovascularization, and retinochoroidal atrophy; blood-flow OCT (OCTA) can detect choroidal blood flow and helps to discover choroidal neovascularization.
Example 1
1. Machine learning neural network algorithm and implementation
The basic three-layer neural network architecture is shown in FIG. 1, in which the input layer has three units x_1, x_2, x_3 (plus a bias unit x_0 to complement the bias, usually set to 1).
Hidden layer:

a^(j+1) = g(Θ^(j) · a^(j))

where a_i^(j) represents the i-th stimulus of the j-th layer, also referred to as a unit, and Θ^(j) is the weight matrix mapping the j-th layer to the (j+1)-th layer, i.e. the weight of each edge.
Output layer:

h_Θ(x) = a^(L) = g(Θ^(L−1) · a^(L−1))

where h_Θ(x) denotes the output and a_i^(j) the i-th stimulus of the j-th layer;
the S-shaped function g(z) = 1 / (1 + e^(−z)) is also known as the excitation function.
It can be seen that Θ^(1) is a 3×4 matrix and Θ^(2) is a 1×4 matrix; in general, Θ^(j) has dimension s_(j+1) × (s_j + 1), i.e. (the number of units of layer j+1) × (the number of units of layer j, plus 1).
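As a concrete illustration of this forward pass, the following minimal sketch propagates an input through the 3×4 and 1×4 weight matrices mentioned above; the random weights are placeholders, not values from the invention.

import numpy as np

def g(z):
    # S-shaped (sigmoid) excitation function
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, thetas):
    # thetas[j] maps layer j to layer j+1; a bias unit fixed at 1
    # is prepended to each layer's activation before the mapping.
    a = x
    for theta in thetas:
        a = g(theta @ np.concatenate(([1.0], a)))
    return a

# Dimensions as in the text: Theta1 is 3x4, Theta2 is 1x4
rng = np.random.default_rng(0)
thetas = [rng.normal(size=(3, 4)), rng.normal(size=(1, 4))]
print(forward(np.array([0.2, -0.5, 0.8]), thetas))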
2. Cost function
Assume the final output h_Θ(x) ∈ R^K, i.e. the output layer has K units. The cost function is:

J(Θ) = −(1/m) · Σ_{i=1}^{m} Σ_{k=1}^{K} [ y_k^(i) · log((h_Θ(x^(i)))_k) + (1 − y_k^(i)) · log(1 − (h_Θ(x^(i)))_k) ]

where each term is the logistic-regression cost of the k-th output unit for the i-th example, and the accumulated cost function sums over every output (K outputs in total) and every training example.
3. Regularization of
Let L denote the number of all layers and s_l the number of units in layer l. The regularized cost function is:

J(Θ) = −(1/m) · Σ_{i=1}^{m} Σ_{k=1}^{K} [ y_k^(i) · log((h_Θ(x^(i)))_k) + (1 − y_k^(i)) · log(1 − (h_Θ(x^(i)))_k) ] + (λ / 2m) · Σ_{l=1}^{L−1} Σ_{i=1}^{s_l} Σ_{j=1}^{s_{l+1}} (Θ_{ji}^(l))²

where the penalty term accumulates the Θ matrices of all L−1 layer-to-layer mappings.
4. Back propagation BP
J(θ) can be calculated from the regularized expression above; using gradient descent additionally requires its gradient, and the purpose of back propagation (BP) is to solve for this gradient of the cost function. Assume a 4-layer neural network and let δ_j^(l) denote the error of the j-th unit of layer l:

δ^(4) = a^(4) − y
δ^(3) = (Θ^(3))ᵀ · δ^(4) .∗ g′(z^(3))
δ^(2) = (Θ^(2))ᵀ · δ^(3) .∗ g′(z^(2))

There is no δ^(1), since the input carries no error. The derivative of the S-shaped function is g′(z) = g(z) · (1 − g(z)), therefore g′(z^(3)) = a^(3) .∗ (1 − a^(3)) and g′(z^(2)) = a^(2) .∗ (1 − a^(2)). The process by which back propagation computes the gradient from quantities obtained in forward propagation is:

set a^(1) = x;
forward propagation: compute a^(l) for l = 2, 3, 4, …, L;
inverse computation: δ^(L) = a^(L) − y, then δ^(L−1), …, δ^(2);
the gradient of the cost function (ignoring regularization) is:

∂J(Θ) / ∂Θ_{ij}^(l) = a_j^(l) · δ_i^(l+1)

Finally, accumulating these terms over all training examples yields the gradient of the cost function.
5. BP gradient determination
The chain rule is used because each layer's units take the previous layer's units as input, so the derivation proceeds layer by layer. Ultimately we want the prediction function to be very close to the known y, and moving along the gradient direction minimizes the cost function; this can be contrasted with the gradient procedure above. In more detail, for the output layer the error is

δ^(L) = ∂J/∂z^(L) = a^(L) − y

where J is the cost-function output value, y is a known quantity, and each unit's output enters the cost through the logistic-regression term; the errors of earlier layers then follow from the chain rule as given above.
6. Gradient examination
To check whether the gradient found by BP is correct, use the definition of the derivative:

∂J(θ)/∂θ_i ≈ (J(θ + ε·e_i) − J(θ − ε·e_i)) / (2ε)

The numerical gradient obtained in this way should be very close to the gradient obtained by BP; once BP has been verified to be correct, the gradient-checking algorithm need not be executed again.
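A minimal numerical gradient-checking sketch under these definitions; the quadratic test function is only an example, not part of the invention.

import numpy as np

def numerical_gradient(J, theta, eps=1e-4):
    # Definition-of-derivative check: (J(t+eps) - J(t-eps)) / (2*eps)
    grad = np.zeros_like(theta)
    it = np.nditer(theta, flags=['multi_index'])
    for _ in it:
        i = it.multi_index
        orig = theta[i]
        theta[i] = orig + eps
        j_plus = J(theta)
        theta[i] = orig - eps
        j_minus = J(theta)
        theta[i] = orig
        grad[i] = (j_plus - j_minus) / (2 * eps)
    return grad

# Compare against a known analytic gradient, e.g. J(t) = sum(t**2)
theta = np.array([1.0, -2.0, 0.5])
num = numerical_gradient(lambda t: np.sum(t ** 2), theta)
assert np.allclose(num, 2 * theta, atol=1e-6)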
Example 2
1. Network architecture improvement scheme based on U-Net
As shown in fig. 2, the conventional U-Net network is slightly lacking in depth, while networks such as SegNet bring many training parameters and long training times. Addressing these problems, the invention proposes an innovative network structure: the U-Net improved network. It combines the characteristics of U-Net and the residual network, incorporates a self-defined residual network, and introduces the concept of "shortcut" connections, which strengthens the depth of the U-Net network while keeping training time under control.
By adding "shortcut" connections to the U-Net network structure, we successfully increased the depth of the network without causing an excessive increase in training time. Compared with the traditional U-Net, the improved U-Net network has a more complex structure and more training parameters, so training time increases slightly, while the segmentation effect improves significantly.
The improvement not only fuses the advantages of U-Net and the residual network in one structure, as shown in fig. 3, but also effectively solves the insufficient depth of the traditional U-Net network. The U-Net improved network shows superior performance in the image segmentation task and provides a more reliable and efficient tool for accurately segmenting small retinal vessels and capillaries.
2. U-Net improved network model
U-Net improved networks are divided into two parts, a contracted network and an expanded network.
2.1 shrink network
The contraction network is similar to that of a conventional U-Net but introduces some changes. Before each layer's result is output, we add normalization and then an activation function. Each down-sampling step consists of two 3x3 convolutional layers, each convolution followed by a 1x1 linear correction unit, i.e. a "shortcut" connection, and a 2x2 max-pooling layer with stride 2 performs the down-sampling. In each down-sampling step the image size is halved while the number of feature channels is doubled.
2.2 Expanding a network
The expansion network is likewise similar to that of a conventional U-Net. Each step in the expansion path up-samples the feature map: each up-sampling comprises a 2x2 up-convolution layer that halves the number of feature channels, concatenation of the correspondingly cropped feature map from the contraction path, and then two 3x3 convolutional layers and a 1x1 linear correction unit, i.e. a "shortcut" connection. Each up-sampling must be combined with the corresponding contraction-network result. As in the contraction network, each layer's output is normalized and then passed through an activation function. Finally, a 1x1 convolution layer is added to obtain the final feature map. A sketch of one such expansion step follows.
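Below is a minimal sketch of one expansion step consistent with the description above, assuming the skip feature map already matches the up-sampled size (so no cropping is needed); the exact channel counts and the form of the 1x1 "shortcut" projection in the patented network are not specified and are assumptions here.

import torch
from torch import nn

class UpBlock(nn.Module):
    # One expansion step: a 2x2 up-convolution halves the channel count,
    # the feature map from the contraction path is concatenated, then two
    # 3x3 convolutions plus a 1x1 "shortcut" projection are applied.
    def __init__(self, ch_in, ch_out):
        super().__init__()
        self.up = nn.ConvTranspose2d(ch_in, ch_out, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(ch_out * 2, ch_out, kernel_size=3, padding=1),
            nn.BatchNorm2d(ch_out), nn.ReLU(inplace=True),
            nn.Conv2d(ch_out, ch_out, kernel_size=3, padding=1),
            nn.BatchNorm2d(ch_out), nn.ReLU(inplace=True),
        )
        self.shortcut = nn.Conv2d(ch_out * 2, ch_out, kernel_size=1)

    def forward(self, x, skip):
        x = self.up(x)
        x = torch.cat([skip, x], dim=1)      # merge contraction-path result
        return self.conv(x) + self.shortcut(x)

# Example: up = UpBlock(256, 128); y = up(x, skip) where x is (N, 256, H, W)
# and skip is (N, 128, 2H, 2W).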
2.3 Network sampling details
The encoder of the UNet improved network down-samples 4 times, for an overall down-sampling factor of 16; symmetrically, its decoder up-samples 4 times, restoring the high-level semantic feature map obtained by the encoder to the resolution of the original picture. The UNet improved network therefore remains compact and runs fast.
2.4 Residual error network
The following defines a self-defined residual network and its implementation; in the original U-Net network, the tuning mode of this residual network is superimposed to perform the data-output calculation.
The residual network is composed of a series of residual blocks. A single residual block is formulated as:

x_{l+1} = h(x_l) + F(x_l, W_l)

where the residual block is divided into two parts: h(x_l) is the direct mapping and F(x_l, W_l) is the residual part.
The custom residual network code is implemented as follows:
import torch
from torch import nn
from torch.nn import functional

class CustomBlk(nn.Module):
    # One residual block: two 3x3 convolutions plus a shortcut branch.
    def __init__(self, ch_in, ch_out, stride=1):
        super(CustomBlk, self).__init__()
        self.conv1 = nn.Conv2d(ch_in, ch_out, kernel_size=3, stride=stride, padding=1)
        self.bn1 = nn.BatchNorm2d(ch_out)
        self.conv2 = nn.Conv2d(ch_out, ch_out, kernel_size=3, stride=1, padding=1)
        self.bn2 = nn.BatchNorm2d(ch_out)
        # Shortcut branch: identity by default; a 1x1 convolution aligns
        # channels and stride when the input and output shapes differ
        self.extra = nn.Sequential()
        if ch_out != ch_in:
            self.extra = nn.Sequential(
                nn.Conv2d(ch_in, ch_out, kernel_size=1, stride=stride),
                nn.BatchNorm2d(ch_out),
            )

    def forward(self, x):
        out = functional.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = self.extra(x) + out            # "shortcut" connection
        out = functional.relu(out)
        return out

class CustomResNet(nn.Module):
    # Stack of residual blocks followed by a linear classification head.
    def __init__(self, num_class):
        super(CustomResNet, self).__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=3, padding=0),
            nn.BatchNorm2d(16),
        )
        self.blk1 = CustomBlk(16, 32, stride=3)
        self.blk2 = CustomBlk(32, 64, stride=3)
        self.blk3 = CustomBlk(64, 128, stride=2)
        self.blk4 = CustomBlk(128, 256, stride=2)
        # 256 * 3 * 3 matches a 3x224x224 input after the strides above
        self.outlayer = nn.Linear(256 * 3 * 3, num_class)

    def forward(self, x):
        x = functional.relu(self.conv1(x))
        x = self.blk1(x)
        x = self.blk2(x)
        x = self.blk3(x)
        x = self.blk4(x)
        x = x.view(x.size(0), -1)            # flatten for the linear layer
        x = self.outlayer(x)
        return x
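A quick shape check of the custom residual network; the 224x224 input size is an assumption consistent with the 256 * 3 * 3 linear layer above.

net = CustomResNet(num_class=2)
dummy = torch.randn(4, 3, 224, 224)   # batch of 4 RGB images
print(net(dummy).shape)               # torch.Size([4, 2])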
Compared with the traditional U-Net, the U-Net improved network standardizes each layer's output and introduces the self-defined residual network. Structurally, as shown in fig. 3, the arrows represent "shortcut" connections; the first square represents the result after the "shortcut" connection, and the second square represents the replenishment of boundary information during the up-sampling process.
The U-Net improved network has a deeper network hierarchy and more training parameters, which to a certain extent solves the insufficient depth of the traditional U-Net network. Meanwhile, owing to the properties of the residual network, it also mitigates the performance degradation seen in extremely deep convolutional neural networks.
These changes and improvements enable our U-Net improved network to exhibit stronger performance in the image segmentation task.
3. Optimizing model stability using batch normalization
Batch normalization (Batch Normalization) is a commonly used deep neural network optimization technique that helps to speed up convergence of network training and improve model stability and generalization ability. The following are the specific implementation steps of batch normalization:
a) Add the batch normalization layer after a convolution layer or fully connected layer, which is where it typically operates. Normalizing the output of each layer reduces the gradient-vanishing and gradient-explosion problems, thereby speeding up network training.
b) Compute the mean and variance: for each batch of training data, the mean and variance of the features are computed in the channel dimension, i.e. per channel, averaged over the whole batch.
c) Apply normalization to the features using the computed mean and variance. For an input x with mean μ and variance σ², the batch-normalization operation can be expressed as:

x̂ = (x − μ) / sqrt(σ² + ε)

where ε is a small constant that avoids a zero denominator.
d) Scale and shift the normalized features so that the network can learn a transformation appropriate for the task. Two learnable parameters γ and β are introduced to scale and translate the normalized features:

y = γ · x̂ + β

where y is the final output feature.
e) Update during training: the parameters γ and β are updated by back propagation, and the running mean and variance are updated by exponential moving averages to maintain stability.
f) By normalizing the mean and variance of each layer's output and then scaling and translating with the learnable parameters, batch normalization reduces internal covariate shift and improves the stability and generalization ability of the network.
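A minimal sketch of steps b) to d) for a 4-D feature map; it matches nn.BatchNorm2d in training mode up to the running-statistics update.

import torch

def batch_norm_2d(x, gamma, beta, eps=1e-5):
    # x: (N, C, H, W). Mean and variance are taken per channel over
    # N, H, W; the normalized value is scaled by gamma, shifted by beta.
    mean = x.mean(dim=(0, 2, 3), keepdim=True)
    var = x.var(dim=(0, 2, 3), unbiased=False, keepdim=True)
    x_hat = (x - mean) / torch.sqrt(var + eps)
    return gamma.view(1, -1, 1, 1) * x_hat + beta.view(1, -1, 1, 1)

x = torch.randn(8, 16, 32, 32)
out = batch_norm_2d(x, gamma=torch.ones(16), beta=torch.zeros(16))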
4. "shortcut" layer
The U-Net improved network introduces a "shortcut" layer, whose basic network structure is shown in FIG. 3. In the present invention it is formulated as:

Y = K(W · X + b) + X

where Y and X denote the output and input of the network, W represents a weight, K represents an activation function, and b is an adjustable parameter (default 1 in this experiment). One "shortcut" layer may contain multiple convolution layers; writing F(X) for the mapping of the stacked convolution layers, the formulation becomes:

Y = F(X) + X

The introduction of the "shortcut" layer makes the U-Net network structure deeper while avoiding excessively long training time, an excessive number of training parameters, and overfitting.
5. Loss function
The loss function is used to evaluate the degree of inconsistency between the predicted value and the reference value (ground truth): the smaller the loss function, the better the robustness of the model. In this experiment we take L(X, Y) as the loss function of the model:

L(X, Y) = (1/N) · Σ_{i=1}^{N} (1 − S(X_i, Y_i))

where X represents the division-point predicted value, Y represents the division-point reference value, and S(X, Y) represents the degree of similarity between the two, defined as:

S(X, Y) = (2·|X ∩ Y| + k) / (|X| + |Y| + k)

where k is a smoothing value, 2·|X ∩ Y| represents the intersection or overlap between the two samples, and |X| + |Y| represents the total amount of the predicted and reference values. Because some images contain no target region, blank masks occur, so the smoothing value k is added to correct the function. The smaller the loss function, the better the robustness of the model.
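A minimal sketch of this loss for batched probability maps: the batch dimension plays the role of the N inputs, and the smoothing value k defaulting to 1 is an assumption, since the patent does not fix it.

import torch

def dice_loss(pred, target, k=1.0):
    # L(X, Y) = 1 - S(X, Y), with S the smoothed Dice similarity.
    # pred: probabilities in [0, 1]; target: binary mask; k keeps the
    # ratio defined when a mask is entirely blank.
    pred, target = pred.flatten(1), target.flatten(1)
    inter = (pred * target).sum(dim=1)
    total = pred.sum(dim=1) + target.sum(dim=1)
    s = (2 * inter + k) / (total + k)
    return (1 - s).mean()    # average over the N inputs in the batch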
6. Optimizing functions
The optimization function helps the model adjust its weights during training so that the weights converge to the optimum and the loss function is minimized. We adopt the Adam optimization function, which has the advantages of high computational efficiency, low memory usage, and good handling of non-stationary objectives.
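An illustrative training step with Adam follows; the tiny convolutional stand-in model and dummy data only keep the sketch self-contained, and the hyperparameters are PyTorch defaults rather than values from the patent. dice_loss is the sketch from section 5.

import torch
from torch import nn

model = nn.Conv2d(3, 1, kernel_size=3, padding=1)   # stand-in for the U-Net improved network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.999))
images = torch.randn(2, 3, 64, 64)                  # dummy fundus patches
masks = (torch.rand(2, 1, 64, 64) > 0.5).float()    # dummy vessel masks
for _ in range(5):                                  # a few optimization steps
    optimizer.zero_grad()
    pred = torch.sigmoid(model(images))             # per-pixel vessel probability
    loss = dice_loss(pred, masks)                   # loss function from section 5
    loss.backward()                                 # back propagation of gradients
    optimizer.step()                                # Adam adjusts the weights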
7. Experimental analysis
We used fundus examination image data from cooperating medical institutions and a global ophthalmic image public database [http://www.ykxjz.com/docs/tzgg/details.aspx?docultid=54&nid=A967CBAD-BC53-4787-88ED-CD9D9ABAA7DE] to train the U-Net network, the SegNet network and the U-Net improved network respectively, with approximately 6500 sets of fundus examination image data as training data. The experimental results show that the segmentation effect of the U-Net improved network is better than that of the U-Net network and the SegNet network. Segmentation results for part of the fundus examination image data are shown in fig. 5.
8. Experimental data
The experimental data are fundus examination image data sourced from multiple medical institutions, ophthalmic clinics, research institutions and global ophthalmic image public databases. In the retinal-vessel-segmentation training stage of the U-Net improved network, a large number of annotated images must be prepared, comprising the original fundus images and the vessel segmentation result corresponding to each image.
Decompress the fundus examination image dataset locally, create a dataset path-index file, and launch the startup program to run test training with the U-Net improved network model; when a test result is confirmed, the AUC of the ROC is computed and stored, and the model with the best performance-evaluation result on the validation dataset is saved. Performance on the test set is then assessed through test evaluation, the files are saved locally, and the corresponding visual results are drawn. As shown in fig. 6 and fig. 7, the AUC-ROC and AUC-PR test results show that the accuracy of the test results was higher than expected.
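The AUC-ROC and AUC-PR values can be computed per pixel as sketched below; scikit-learn is an assumed tooling choice, since the patent does not name the library.

import numpy as np
from sklearn.metrics import roc_auc_score, precision_recall_curve, auc

def evaluate_pixels(prob_map, gt_mask):
    # Flatten per-pixel probabilities against the ground-truth vessel mask
    y_score, y_true = prob_map.ravel(), gt_mask.ravel().astype(int)
    auc_roc = roc_auc_score(y_true, y_score)
    precision, recall, _ = precision_recall_curve(y_true, y_score)
    auc_pr = auc(recall, precision)
    return auc_roc, auc_pr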
9. Evaluation criteria
In the experiment, the Dice coefficient is adopted to evaluate the quality of the model. The Dice coefficient is a set-similarity function used to judge the degree of similarity between two samples: the more similar the two samples, the larger the Dice coefficient. It is defined as:

Dice(X, Y) = 2·|X ∩ Y| / (|X| + |Y|)

where X represents the division-point predicted value, Y represents the division-point reference value, 2·|X ∩ Y| represents the intersection or overlap between the two samples, and |X| + |Y| represents the total amount of the predicted and reference values. In the experiment, when the predicted value is exactly the same as the reference value, the Dice coefficient is 1; when the predicted value is uncorrelated with the reference value, the Dice coefficient is 0. The larger the Dice coefficient, the higher the similarity of the two images and the more accurate the model.
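A sketch of the Dice coefficient on binary masks; thresholding the probability map at 0.5 to obtain the binary prediction map is an assumption of this example.

import numpy as np

def dice_coefficient(pred_mask, gt_mask):
    # Dice = 2|X ∩ Y| / (|X| + |Y|): 1 for identical masks, 0 for no overlap
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * inter / total if total > 0 else 1.0

# Example: dice_coefficient(prob_map > 0.5, gt_mask)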
10. Experimental results and assessment
(1) Experimental results
The 6500 sets of fundus examination image data were used as the training set for the three network models: U-Net, SegNet and the improved U-Net. The three trained models were then used to predict the data in the test set, the prediction results were compared against the actual patients' pathological-myopia conditions, and the evaluation results are shown in Table 1. From Table 1 we can see that the segmentation effect of the improved U-Net network is significantly higher than that of the SegNet and U-Net networks: 15% higher than the U-Net network and 10% higher than the SegNet network. The table gives the Dice coefficients of the network models for the test results of the three different network structures:
table 1 3 comparison of training coefficients for network models
In summary, the improved U-Net network segmentation effect is significantly better than that of the U-Net network and the SegNet network.
(2) Training time assessment
The improved U-Net network introduces a self-defined residual network that deepens the U-Net, so its training parameters exceed those of the U-Net network while remaining far fewer than those of the SegNet network. Table 2 compares the training time of the three network structures; by introducing normalization processing and similar measures, the training time of the improved U-Net is kept down. From the table we can see that the training time of the improved U-Net network is slightly longer than that of the U-Net network but far shorter than that of the SegNet network, at about 1/5 of the latter.
Table 2 training time comparisons for three different networks
(3) Training process assessment
Fig. 4 shows the relationship between the loss function and the number of iterations in model training, comparing the training process of three models. After analysis, the improved U-Net network shows a quicker descending trend in the training process, and the accuracy rate of the improved U-Net network exceeds that of the original U-Net network and the SegNet network. Therefore, it can be concluded that the improved U-Net network has stronger robustness than the original U-Net network and SegNet network. Experimental results further prove that the segmentation effect of the improved U-Net network is obviously better than that of the original U-Net network and the SegNet network.
Experimental data shows that the improved U-Net network increases the depth of the network by introducing a residual network, so that the high-dimensional characteristics of the image can be captured better, and the segmentation accuracy is improved. Meanwhile, the training speed of the model is obviously improved by adding normalization processing, and the accuracy of the training process is enhanced. Research shows that the segmentation effect of the improved U-Net network is obviously improved compared with that of the original U-Net network and the SegNet network, and the training time is shorter and the training parameters are fewer. Specifically, the segmentation effect is improved by 15% compared with the original U-Net network and 10% compared with the SegNet network.
These results fully demonstrate the superiority of the improved U-Net network, which exhibits excellent performance in the image segmentation task.
Example 3
The blood vessel segmentation method based on the fundus examination image is used in actual medical diagnosis.
Pathological myopia exhibits characteristic manifestations in fundus lesions, which are classified into five grades (the classification of myopic macular lesions): normal (no maculopathy), leopard-like (tessellated) fundus, diffuse chorioretinal atrophy, patchy chorioretinal atrophy, and macular atrophy. Normal indicates no apparent myopic maculopathy, while the other grades correspond to different degrees of pathological change.
Pathological myopia is often accompanied by several complications, including lacquer cracks, choroidal neovascularization (CNV) and Fuchs spots, which have a major impact on vision; these three lesions are the "Plus" lesions. Such complications may lead to vision loss and severely affect the patient's quality of life.
The blood vessel segmentation system based on the fundus examination image currently enters a clinical function test stage, a doctor can further judge pathological myopia diagnosis conditions of a patient by combining an output eye blood vessel segmentation result, an eye axis length and a diopter fusion detection system, upload the fundus examination image data, give an evaluation test result through synchronous rapid analysis, and count clinical diagnosis schemes and feedback comments of the doctor:
table 3 clinical data evaluation feedback statistics
In summary, in clinical diagnosis the observation and analysis of fundus lesions are critical to determining a patient's pathological-myopia grade. Using the collected fundus examination image data and the fundus vessel images obtained by segmenting fundus examination images with the U-Net improved network, the subsequent evaluation reached an accuracy of 99.3% while time efficiency improved by 21%, greatly increasing doctors' diagnostic efficiency and accuracy. Combined with the diagnostic evaluation results, the corresponding reference data and treatment plans can be located quickly, helping ophthalmologists assess the condition more accurately, locate the pathological cause, and formulate corresponding treatment and management plans.

Claims (9)

1. A blood vessel segmentation method based on fundus examination images, comprising:
step one, collecting fundus examination image data; the fundus examination image data includes at least a color fundus image and an optical coherence tomography image;
step two, preprocessing the fundus examination image data; the preprocessing comprises denoising, contrast enhancement and color normalization, and is used for reducing noise and improving image quality;
step three, based on the position of the detection target with the largest mutual information in space in the image data, obtaining the optimal registration of the detection target;
extracting blood vessel features in the registered image data by using a preset image processing method, wherein the blood vessel features comprise edges, textures, colors and intensity gradients;
step five, selecting part of data from the blood vessel characteristics in the step four as a training set and a testing set, establishing a neural network model based on machine learning, training the neural network model by using the training set, testing the trained neural network model by using the testing set, and finally outputting a blood vessel segmentation result by using the neural network model;
wherein the neural network model is constructed based on a U-Net improved network comprising a contraction network and an expansion network, the contraction network comprising a shortcut layer, and L(X, Y) being used as the loss function of the model:

L(X, Y) = (1/N) · Σ_{i=1}^{N} (1 − S(X_i, Y_i))

wherein X represents the predicted value of a division point, Y represents the reference value of the division point, N represents the number of inputs, and S(X, Y) represents the degree of similarity; the similarity S(X, Y) is as follows:

S(X, Y) = (2·|X ∩ Y| + k) / (|X| + |Y| + k)

wherein k is a smoothing value, 2·|X ∩ Y| represents the intersection or overlap between the two samples, and |X| + |Y| represents the total amount of the predicted and reference values.
2. The fundus image based blood vessel segmentation method according to claim 1, wherein said image preprocessing step further comprises an image enhancement step for enhancing contrast and sharpness of fundus image data.
3. The method of claim 1, wherein step four further comprises extracting edge, texture, color and intensity-gradient features from the raw fundus examination image data.
4. The blood vessel segmentation method based on fundus examination images according to claim 1, wherein step five adopts an Adam optimization algorithm to adjust the model parameters and a residual network to increase the network depth.
5. The method of claim 1, further comprising the step of updating and optimizing the trained model based on new ocular image data.
6. A vessel segmentation device based on fundus examination images, the device comprising:
the data input interface is used for acquiring fundus examination image data; the fundus examination image data includes at least a color fundus image and an optical coherence tomography image;
the image preprocessing module is used for preprocessing the fundus examination image data, wherein the preprocessing of the fundus examination image data comprises denoising, contrast enhancement and color standardization so as to reduce noise and improve image quality;
the multi-mode image registration module obtains the optimal registration of the detection targets based on the position of the detection targets with the largest mutual information in space in the image data;
the feature extraction module is used for extracting blood vessel features in the registered image data by using a preset image processing method, wherein the blood vessel features comprise edges, textures, colors and intensity gradients;
the U-Net improved network model module is used for model training of retinal blood vessel segmentation: part of the data is selected from the blood vessel features as a training set and a testing set, a machine-learning neural network model is established, the neural network model is trained with the training set, the trained neural network model is tested with the testing set, and finally the blood vessel segmentation result is output by the neural network model; wherein the neural network model is built based on a U-Net improved network comprising two parts, a contraction network and an expansion network, the contraction network comprises a shortcut layer, and L(X, Y) is taken as the loss function of the model: L(X, Y) = (1/N) · Σ_{i=1}^{N} (1 − S(X_i, Y_i)), wherein X represents the division-point predicted value, Y represents the division-point reference value, N represents the number of inputs, and S(X, Y) represents the degree of similarity; the similarity S(X, Y) is as follows: S(X, Y) = (2·|X ∩ Y| + k) / (|X| + |Y| + k), where 2·|X ∩ Y| represents the intersection or overlap between the two samples, |X| + |Y| represents the total amount of the predicted and reference values, and k is a smoothing value.
7. The fundus-examination-image-based blood vessel segmentation device according to claim 6, wherein the image preprocessing module further comprises an image enhancement unit for enhancing contrast and sharpness of the fundus examination image data.
8. A vessel segmentation system based on fundus examination images, comprising: fundus examination apparatus, optical coherence tomography scanner, and data processing apparatus; wherein the data processing apparatus comprises a data input interface, a processor, a memory storing a computer program executable on the processor, and input and output means;
the fundus examination device and the optical coherence tomography scanner are configured to acquire fundus images and OCT images and to transmit them to the data processing device for processing via the data input interface of the data processing device; the processor of the data processing device is configured to execute the computer program to implement the steps recited in the method of claim 1, thereby recognizing the fundus images and OCT images to obtain and output a recognition result.
9. A computer readable storage medium, characterized in that it stores a computer program which, when executed by a processor, can implement the steps of the method of claim 1.
CN202311319781.6A 2023-10-12 2023-10-12 Blood vessel segmentation method, device and system based on fundus examination image Active CN117058676B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311319781.6A CN117058676B (en) 2023-10-12 2023-10-12 Blood vessel segmentation method, device and system based on fundus examination image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311319781.6A CN117058676B (en) 2023-10-12 2023-10-12 Blood vessel segmentation method, device and system based on fundus examination image

Publications (2)

Publication Number Publication Date
CN117058676A (en) 2023-11-14
CN117058676B (en) 2024-02-02

Family

ID=88661257

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311319781.6A Active CN117058676B (en) 2023-10-12 2023-10-12 Blood vessel segmentation method, device and system based on fundus examination image

Country Status (1)

Country Link
CN (1) CN117058676B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117372284B (en) * 2023-12-04 2024-02-23 江苏富翰医疗产业发展有限公司 Fundus image processing method and fundus image processing system
CN117809839B (en) * 2024-01-02 2024-05-14 珠海全一科技有限公司 Correlation analysis method for predicting hypertensive retinopathy and related factors


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3629898A4 (en) * 2017-05-30 2021-01-20 Arterys Inc. Automated lesion detection, segmentation, and longitudinal identification
US11990224B2 (en) * 2020-03-26 2024-05-21 The Regents Of The University Of California Synthetically generating medical images using deep convolutional generative adversarial networks
US11580646B2 (en) * 2021-03-26 2023-02-14 Nanjing University Of Posts And Telecommunications Medical image segmentation method based on U-Net

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106408562A (en) * 2016-09-22 2017-02-15 华南理工大学 Fundus image retinal vessel segmentation method and system based on deep learning
CN109087302A (en) * 2018-08-06 2018-12-25 北京大恒普信医疗技术有限公司 A kind of eye fundus image blood vessel segmentation method and apparatus
CN110570446A (en) * 2019-09-20 2019-12-13 河南工业大学 Fundus retina image segmentation method based on generation countermeasure network
CN114881962A (en) * 2022-04-28 2022-08-09 桂林理工大学 Retina image blood vessel segmentation method based on improved U-Net network
CN116452571A (en) * 2023-04-26 2023-07-18 四川吉利学院 Image recognition method based on deep neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
U-Net-based nodule segmentation method; Xu Feng; Zheng Bin; Guo Jinxiang; Liu Libo; Software Guide (No. 08); full text *
Research on retinal vessel image segmentation based on U-shaped networks; Zhou Shujie; China Master's Theses Full-text Database, Medicine & Health Sciences (E073-56) *
Improved U-Net-based retinal vessel image segmentation algorithm; Li Daxiang; Zhang Zhen; Acta Optica Sinica (No. 10); full text *

Also Published As

Publication number Publication date
CN117058676A (en) 2023-11-14

Similar Documents

Publication Publication Date Title
CN110197493B (en) Fundus image blood vessel segmentation method
EP3674968B1 (en) Image classification method, server and computer readable storage medium
US11636340B2 (en) Modeling method and apparatus for diagnosing ophthalmic disease based on artificial intelligence, and storage medium
CN117058676B (en) Blood vessel segmentation method, device and system based on fundus examination image
CN109325942B (en) Fundus image structure segmentation method based on full convolution neural network
CN109726743B (en) Retina OCT image classification method based on three-dimensional convolutional neural network
CN111259982A (en) Premature infant retina image classification method and device based on attention mechanism
Hassan et al. Deep learning based joint segmentation and characterization of multi-class retinal fluid lesions on OCT scans for clinical use in anti-VEGF therapy
CN112132817A (en) Retina blood vessel segmentation method for fundus image based on mixed attention mechanism
CN111862009B (en) Classifying method of fundus OCT (optical coherence tomography) images and computer readable storage medium
de Moura et al. Joint diabetic macular edema segmentation and characterization in OCT images
CN109447962A (en) A kind of eye fundus image hard exudate lesion detection method based on convolutional neural networks
CN113160226A (en) Two-way guide network-based classification segmentation method and system for AMD lesion OCT image
CN111563884A (en) Neural network-based fundus disease identification method, computer device, and medium
CN113889267A (en) Method for constructing diabetes diagnosis model based on eye image recognition and electronic equipment
CN113012163A (en) Retina blood vessel segmentation method, equipment and storage medium based on multi-scale attention network
Abbasi-Sureshjani et al. Boosted exudate segmentation in retinal images using residual nets
CN112806957B (en) Keratoconus and subclinical keratoconus detection system based on deep learning
CN108665474A (en) A kind of eye fundus image Segmentation Method of Retinal Blood Vessels based on B-COSFIRE
Pappu et al. EANet: Multiscale autoencoder based edge attention network for fluid segmentation from SD‐OCT images
Ferreira et al. Multilevel cnn for angle closure glaucoma detection using as-oct images
CN116246331B (en) Automatic keratoconus grading method, device and storage medium
CN117314935A (en) Diffusion model-based low-quality fundus image enhancement and segmentation method and system
CN116092667A (en) Disease detection method, system, device and storage medium based on multi-mode images
Thanh et al. A real-time classification of glaucoma from retinal fundus images using AI technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CB03 Change of inventor or designer information
Inventor after: Yan Chenchen; Zhou Haiying; Ji Haixia; She Haicheng
Inventor before: Yan Chenchen; Zhou Haiying; Ji Haixia; Yu Haicheng