CN117058676B - Blood vessel segmentation method, device and system based on fundus examination image - Google Patents
- Publication number
- CN117058676B (application CN202311319781.6A)
- Authority
- CN
- China
- Prior art keywords
- fundus
- image
- blood vessel
- network
- image data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/70—Labelling scene content, e.g. deriving syntactic or semantic representations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/0985—Hyperparameter optimisation; Meta-learning; Learning-to-learn
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/30—Noise filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/54—Extraction of image or video features relating to texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/14—Vascular patterns
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention relates to a blood vessel segmentation method, device and system based on fundus examination images. By combining computer vision, image processing, neural network algorithms and medical image analysis, the invention automatically segments small retinal blood vessels and capillaries from fundus photographs or OCT images, improving the efficiency and accuracy of vessel segmentation while reducing the workload of doctors and the risk of human error. The method, device and system take a retinal vessel segmentation scheme built on an improved U-Net network model as the core, work with conventional fundus examination image data, and exploit the speed of computerized processing to rapidly delineate small retinal blood vessels and capillaries.
Description
Technical Field
The invention relates to the field of machine vision and machine learning algorithms, in particular to a blood vessel segmentation method, device and system based on fundus examination images and a computer readable storage medium.
Background
Currently, the global prevalence of myopia is as high as 28.3%, and the prevalence of high myopia is expected to rise from the current 4.0% to 9.8%. High myopia is further divided into simple high myopia and pathological myopia. Pathological myopia refers to a spherical equivalent (SE) of -6.00 D or less and/or an axial length greater than 26.5 mm, with continually progressing myopia, accompanied by fundus lesions and other blinding ocular conditions that impair vision; the best corrected visual acuity (BCVA) is often below normal. Retinopathy caused by pathological myopia has become an important cause of irreversible blinding eye disease.
With advances in medical technology, automated and computer-aided diagnosis methods are gradually being applied in ophthalmology. In the prior art, when an automated system is used to identify pathological myopia, a color fundus photograph or an Optical Coherence Tomography (OCT) scan is usually acquired, small retinal blood vessels and capillaries are segmented from the fundus photograph or OCT, and whether the peripheral fundus is diseased is determined from changes in these vessels together with factors such as axial length and diopter. However, current automated systems cannot accurately segment small retinal blood vessels and capillaries in fundus photographs or OCT, so vessel delineation still requires an experienced doctor throughout the process, which consumes a great deal of labor.
Against this background, the invention provides a blood vessel segmentation method, device and system based on fundus examination images, which realize automatic segmentation of small retinal blood vessels and capillaries from fundus photographs or OCT, improve the efficiency and accuracy of blood vessel segmentation, reduce the workload of doctors and reduce the risk of human error.
Disclosure of Invention
The main object of the present invention is to provide a blood vessel segmentation method based on fundus examination image, comprising:
step one, collecting fundus examination image data through a data input interface;
step two, preprocessing the fundus examination image data, wherein the preprocessing comprises denoising, contrast enhancement and color normalization so as to reduce noise and improve image quality;
step three, obtaining the optimal registration of the detection targets based on the position of maximum mutual information in space; when the same target in the images is optimally aligned in space, the mutual information of the corresponding pixel gray levels reaches its maximum;
step four, extracting blood vessel features from the registered image data by using a preset image processing method, wherein the blood vessel features comprise edges, textures, colors and intensity gradients; the preset image processing method can be any common prior-art method for processing images;
step five, model training for retinal blood vessel segmentation: selecting part of the data derived from the blood vessel features in step four as a training set and a testing set, establishing a neural network model based on machine learning, training the neural network model with the training set, testing the trained neural network model with the testing set, and finally outputting the blood vessel segmentation result with the network model;
wherein the network model is built based on a U-Net improved network comprising a contraction network and an expansion network, the contraction network comprises a "shortcut" layer, and $L(X,Y)$ is used as the loss function of the model:

$$L(X,Y) = \frac{1}{N}\sum_{i=1}^{N}\bigl(1 - S(X_i, Y_i)\bigr)$$

wherein X represents the predicted value of a division point, Y represents the reference value of the division point, N represents the number of inputs, and $S(X,Y)$ represents the degree of similarity between the prediction and the reference.
Preferably, the fundus examination image data comprise a color fundus photograph and an Optical Coherence Tomography (OCT) image.
Preferably, the image preprocessing step further includes an image enhancement step for enhancing the contrast and sharpness of the fundus examination image data.
Preferably, in step four, features of edges, textures, colors and intensity gradients are extracted from the raw fundus examination image data.
Preferably, in step five, an Adam optimization algorithm is adopted to adjust the model parameters and a residual network is used to increase the network depth.
Preferably, the method further comprises a step six of updating and optimizing the trained model according to the new eye image data.
Preferably, the method further comprises a model updating module for updating and optimizing the trained model according to the new eye image data.
The invention also provides a blood vessel segmentation device based on fundus examination images, which comprises:
the data input interface is used for acquiring fundus examination image data;
the image preprocessing module is used for preprocessing the fundus examination image data, wherein the preprocessing of the fundus examination image data comprises denoising, contrast enhancement and color standardization so as to reduce noise and improve image quality;
the multi-mode image registration module is used for finding the optimal registration according to the position of maximum mutual information, based on the fact that when the same target in the images is optimally aligned in space, the mutual information of the corresponding pixel gray levels is maximal;
the feature extraction module is used for extracting blood vessel features by using an image processing technology, wherein the blood vessel features comprise edges, textures, colors and intensity gradients;
the U-Net improved network model module is used for model training of retinal blood vessel segmentation: part of the data is selected from the blood vessel features as a training set and a testing set, a machine-learning neural network model is established, the neural network model is trained with the training set, the trained neural network model is tested with the testing set, and finally the blood vessel segmentation result is output by the network model.
Preferably, the fundus examination image data comprises a color fundus photograph, an Optical Coherence Tomography (OCT) image.
Preferably, the image preprocessing module further comprises an image enhancement unit for enhancing the contrast and sharpness of the fundus examination image data.
Preferably, the feature extraction module is further used to extract features of edges, textures, colors and intensity gradients from the raw fundus examination image data.
Preferably, the U-Net improved network model module adopts an Adam optimization algorithm to adjust the model parameters and a residual network to increase the network depth.
Preferably, the device further comprises a model updating module for updating and optimizing the trained model according to the new eye image data.
The invention also provides a blood vessel segmentation system based on the fundus examination image, which comprises fundus examination equipment, an optical coherence tomography scanner and data processing equipment; wherein the data processing apparatus comprises a data input interface, a processor, a memory storing a computer program executable on the processor, and input and output means;
the fundus examination apparatus and the optical coherence tomography scanner are configured to acquire fundus images and OCT images and to transmit them to the data processing apparatus for processing through the data input interface of the data processing apparatus; the processor of the data processing apparatus is configured to execute the computer program to implement the steps described in the above technical solution, thereby recognizing the fundus images and OCT images to obtain a recognition result and outputting it.
Compared with the prior art, the invention has at least the following beneficial effects:
1. the trained model can be updated and optimized according to the new eye image data, so that the network model is continuously optimized, and the accuracy of blood vessel segmentation is improved;
2. the eye data and clinical image data of different types can be processed, and the data compatibility is good;
3. the method is suitable for segmenting the retinal small blood vessel and the capillary blood vessel based on the fundus image and the OCT image, improves the accuracy and the efficiency of segmentation, reduces the workload of doctors, and reduces the labor cost and the artificial segmentation error.
The blood vessel segmentation method, device and system based on the fundus examination image provided by the invention have innovation and practicability in the aspects of model construction, segmentation result output and the like. The method can accurately divide the retinal small blood vessel and the capillary vessel of the eye based on fundus examination images, provides a data basis for the subsequent eye further detection combining the factors such as the length of the eye axis, diopter and the like, and provides a more reliable and efficient tool for accurately dividing the retinal small blood vessel and the capillary vessel.
Drawings
FIG. 1 is a convolutional neural network algorithm framework;
FIG. 2 is the U-Net network model architecture;
FIG. 3 is the U-Net improved network model architecture;
FIG. 4 is a comparison of the loss function versus the number of iterations during training of the 3 models;
FIG. 5 is a comparison of the retinal image, the probability prediction map and the binary prediction map;
FIG. 6 is the ROC curve of the training/test set;
FIG. 7 is the PR curve of the training/test set;
FIG. 8 is a flowchart of an implementation of the blood vessel segmentation method based on fundus examination images.
Detailed Description
Before describing the embodiments, related terms will be explained first.
Fundus image: used to observe structural abnormalities of the optic disc, macula or peripheral retina. A common fundus camera captures a 30-45 degree field of the fundus, while an ultra-widefield fundus photography system can acquire a 200 degree fundus image in a single shot, making it easier to find peripheral fundus lesions.
Optical Coherence Tomography (OCT): OCT can clearly display the individual layers of the retina and show lesions such as posterior vitreous detachment, retinoschisis and macular schisis, macular hole and epiretinal membrane, choroidal neovascularization, and retinochoroidal atrophy; OCT angiography (OCTA) can image choroidal blood flow and helps to discover choroidal neovascularization.
Example 1
1. Machine learning neural network algorithm and implementation
The basic three-layer neural network architecture is shown in FIG. 1, in which the input layer has three units $x_1, x_2, x_3$ (a bias unit $x_0$ is added and is usually set to 1).

Hidden layer:

$$a_i^{(2)} = g\bigl(\Theta_{i0}^{(1)}x_0 + \Theta_{i1}^{(1)}x_1 + \Theta_{i2}^{(1)}x_2 + \Theta_{i3}^{(1)}x_3\bigr), \quad i = 1, 2, 3$$

wherein $a_i^{(j)}$ represents the i-th activation (also referred to as a unit) of the j-th layer, and $\Theta^{(j)}$ is the weight matrix mapping the j-th layer to the (j+1)-th layer, i.e. the weight of each edge;

Output layer:

$$h_\Theta(x) = a_1^{(3)} = g\bigl(\Theta_{10}^{(2)}a_0^{(2)} + \Theta_{11}^{(2)}a_1^{(2)} + \Theta_{12}^{(2)}a_2^{(2)} + \Theta_{13}^{(2)}a_3^{(2)}\bigr)$$

where $h_\Theta(x)$ denotes the output and $a_i^{(j)}$ the i-th activation of the j-th layer;

The sigmoid function $g(z) = \frac{1}{1+e^{-z}}$ is also known as the activation (excitation) function.

It can be seen that $\Theta^{(1)}$ is a 3x4 matrix and $\Theta^{(2)}$ is a 1x4 matrix, i.e. the dimension of $\Theta^{(j)}$ is (number of units in layer j+1) x (number of units in layer j + 1).
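As an illustrative aside (not part of the original disclosure), the forward pass just described can be written in a few lines of Python/NumPy; the layer sizes follow the 3x4 and 1x4 weight matrices noted above, and the weight values are random placeholders.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, theta1, theta2):
    a1 = np.concatenate(([1.0], x))    # add the bias unit x0 = 1 to the input layer
    a2 = sigmoid(theta1 @ a1)          # hidden layer activations, shape (3,)
    a2 = np.concatenate(([1.0], a2))   # add the bias unit a0 = 1
    return sigmoid(theta2 @ a2)        # output layer, shape (1,)

theta1 = np.random.randn(3, 4) * 0.1   # placeholder weights, layer 1 -> 2
theta2 = np.random.randn(1, 4) * 0.1   # placeholder weights, layer 2 -> 3
print(forward(np.array([0.5, -0.2, 0.1]), theta1, theta2))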
2. Cost function
Assuming the final output $h_\Theta(x) \in \mathbb{R}^K$, i.e. the output layer has K units, the cost function is:

$$J(\Theta) = -\frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{K}\Bigl[y_k^{(i)}\log\bigl(h_\Theta(x^{(i)})\bigr)_k + \bigl(1-y_k^{(i)}\bigr)\log\Bigl(1-\bigl(h_\Theta(x^{(i)})\bigr)_k\Bigr)\Bigr]$$

wherein the bracketed term is the logistic-regression cost of the k-th output unit for the i-th example, and $\sum_{k=1}^{K}$ accumulates this cost over each of the K outputs.
3. Regularization

L denotes the total number of layers and $s_l$ denotes the number of units in layer l (not counting the bias unit). The regularized cost function is:

$$J(\Theta) = -\frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{K}\Bigl[y_k^{(i)}\log\bigl(h_\Theta(x^{(i)})\bigr)_k + \bigl(1-y_k^{(i)}\bigr)\log\Bigl(1-\bigl(h_\Theta(x^{(i)})\bigr)_k\Bigr)\Bigr] + \frac{\lambda}{2m}\sum_{l=1}^{L-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}}\bigl(\Theta_{ji}^{(l)}\bigr)^2$$

wherein the regularization term runs over the L-1 weight matrices, accumulating the squared entries of the $\Theta$ matrix of each layer.
4. Back propagation BP
$J(\Theta)$ can be computed from the regularized expression above; to use gradient descent we also need its gradient, and the purpose of back propagation (BP) is to compute this gradient of the cost function. Assume a 4-layer neural network and let $\delta_j^{(l)}$ denote the error of the j-th unit of layer l. For the output layer:

$$\delta_j^{(4)} = a_j^{(4)} - y_j$$

There is no $\delta^{(1)}$, since the input carries no error. The derivative of the sigmoid function is $g'(z) = g(z)\bigl(1-g(z)\bigr)$, therefore

$$\delta^{(3)} = \bigl(\Theta^{(3)}\bigr)^{T}\delta^{(4)} \odot g'\bigl(z^{(3)}\bigr) \quad \text{and} \quad \delta^{(2)} = \bigl(\Theta^{(2)}\bigr)^{T}\delta^{(3)} \odot g'\bigl(z^{(2)}\bigr)$$

where $\odot$ denotes the element-wise product. The procedure for computing the gradient by back propagation (using the activations obtained in forward propagation) is:

set $\Delta_{ij}^{(l)} = 0$ for all l, i, j;

forward propagation computes $a^{(l)}$ (l = 2, 3, 4 ... L);

backward computation gives $\delta^{(L)}, \delta^{(L-1)}, \ldots$;

$\Delta_{ij}^{(l)} := \Delta_{ij}^{(l)} + a_j^{(l)}\delta_i^{(l+1)}$;

The gradient of the cost function is:

$$D_{ij}^{(l)} = \frac{1}{m}\Delta_{ij}^{(l)} + \lambda\Theta_{ij}^{(l)} \;\; (j \neq 0), \qquad D_{ij}^{(l)} = \frac{1}{m}\Delta_{ij}^{(l)} \;\; (j = 0)$$

Finally $\frac{\partial}{\partial \Theta_{ij}^{(l)}} J(\Theta) = D_{ij}^{(l)}$, which gives the gradient of the cost function.
5. BP gradient determination
The chain rule is used because each layer takes the output of the previous layer as its input; ultimately we want the prediction function to be as close as possible to the known y, and moving along the gradient direction of the error minimizes the cost function. This can be compared with the gradient procedure above. In more detail, for a single output unit with the logistic-regression cost $J = -\bigl[y\log a + (1-y)\log(1-a)\bigr]$, where $a = g(z)$ is the unit's output value and y is a known quantity,

$$\frac{\partial J}{\partial z} = \frac{\partial J}{\partial a}\cdot\frac{\partial a}{\partial z} = \Bigl(-\frac{y}{a} + \frac{1-y}{1-a}\Bigr)a(1-a) = a - y$$

which is exactly the output-layer error $\delta^{(L)} = a^{(L)} - y$ used above.
6. Gradient checking

To check whether the gradient found by BP is correct, the definition of the derivative is used:

$$\frac{\partial}{\partial \Theta_j} J(\Theta) \approx \frac{J(\Theta_1, \ldots, \Theta_j + \varepsilon, \ldots, \Theta_n) - J(\Theta_1, \ldots, \Theta_j - \varepsilon, \ldots, \Theta_n)}{2\varepsilon}$$

The numerical gradient obtained in this way should be very close to the gradient obtained by BP. Once BP has been verified to be correct, the gradient-checking algorithm need not be executed again during training.
Example 2
1. Network architecture improvement scheme based on U-Net
The conventional U-Net network, shown in FIG. 2, is slightly lacking in depth, while the SegNet network suffers from a large number of training parameters and a long training time. To address these problems, the invention proposes an innovative network structure, the U-Net improved network. The network combines the characteristics of U-Net and a residual network: it incorporates a self-defined residual network and introduces the concept of "shortcut" connections, which deepens the U-Net network while keeping the training time under control.

By adding "shortcut" connections to the U-Net network structure, we successfully increase the depth of the network without causing an excessive increase in training time. Compared with the traditional U-Net, the improved U-Net network has a more complex structure and more training parameters, so the training time increases slightly, while the segmentation effect improves markedly.

This improvement not only fuses the advantages of U-Net and the residual network in one structure, as shown in FIG. 3, but also effectively addresses the insufficient depth of the traditional U-Net network. The U-Net improved network performs better in the image segmentation task and provides a more reliable and efficient tool for accurately segmenting small retinal blood vessels and capillaries.
2. U-Net improved network model
U-Net improved networks are divided into two parts, a contracted network and an expanded network.
2.1 Contraction network
The contraction network is similar to that of a conventional U-Net, but introduces some changes. Before the result of each layer is passed on, normalization is applied and an activation function is connected. Each downsampling step consists of two 3x3 convolutional layers, each convolution followed by a linear rectification unit and a 1x1 "shortcut" connection, and a 2x2 max-pooling layer with a stride of 2 for downsampling. In each downsampling step, the image size is halved while the number of feature channels is doubled.
2.2 Expansion network
The expansion network is also similar to that of a conventional U-Net. Each step in the expansion path includes upsampling of the feature map: each upsampling uses a 2x2 convolution layer that halves the number of feature channels, concatenates the correspondingly cropped feature map from the contraction path, and is followed by two 3x3 convolution layers and a linear rectification unit with a 1x1 "shortcut" connection. The corresponding contraction-network result must be merged at every upsampling step. As in the contraction network, the output of each layer of the expansion network is normalized and then passed through an activation function. Finally, a 1x1 convolution layer is added to obtain the final feature map.
2.3 Network sampling details
The encoder of the U-Net improved network downsamples 4 times, for an overall downsampling factor of 16; symmetrically, its decoder upsamples 4 times, restoring the high-level semantic feature map produced by the encoder to the resolution of the original picture. As a result, the U-Net improved network stays relatively small and runs fast.
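A minimal PyTorch sketch of one contraction step and one expansion step in the style described above (illustrative only; the channel counts and the exact placement of batch normalization are our assumptions, not the patented architecture):

import torch
from torch import nn

class DownStep(nn.Module):
    # one contraction step: two 3x3 convolutions with normalization and ReLU, then 2x2 max pooling
    def __init__(self, ch_in, ch_out):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(ch_in, ch_out, kernel_size=3, padding=1),
            nn.BatchNorm2d(ch_out), nn.ReLU(inplace=True),
            nn.Conv2d(ch_out, ch_out, kernel_size=3, padding=1),
            nn.BatchNorm2d(ch_out), nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool2d(2, stride=2)   # halves the image size

    def forward(self, x):
        skip = self.block(x)                    # kept for the expansion path
        return self.pool(skip), skip

class UpStep(nn.Module):
    # one expansion step: upsample, halve the channels, concatenate the skip feature map, convolve
    def __init__(self, ch_in, ch_out):
        super().__init__()
        self.up = nn.ConvTranspose2d(ch_in, ch_out, kernel_size=2, stride=2)
        self.block = nn.Sequential(
            nn.Conv2d(ch_out * 2, ch_out, kernel_size=3, padding=1),
            nn.BatchNorm2d(ch_out), nn.ReLU(inplace=True),
            nn.Conv2d(ch_out, ch_out, kernel_size=3, padding=1),
            nn.BatchNorm2d(ch_out), nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        x = self.up(x)
        return self.block(torch.cat([skip, x], dim=1))

# example: pooled, skip = DownStep(64, 128)(torch.randn(1, 64, 128, 128))
#          y = UpStep(128, 128)(pooled, skip)   # restores the 128x128 resolution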
2.4 Residual network
The following is a self-defined residual network, defining a realization mode of the residual network, and in the original U-Net network, overlapping the tuning mode of the residual network to perform data output calculation.
The residual network is composed of a series of residual blocks; a single residual block is given by:

$$x_{l+1} = h(x_l) + F(x_l, W_l)$$

wherein the residual block is divided into two parts, a direct-mapping part and a residual part: $h(x_l) = x_l$ is the direct (identity) mapping and $F(x_l, W_l)$ is the residual part.
The custom residual network code is implemented as follows:
import torch
from torch import nn
from torch.nn import functional
class CustomBlk(nn.Module):
    # a single residual block: two 3x3 convolutions with batch normalization, plus a
    # "shortcut" branch that matches channels and stride with a 1x1 convolution when needed
    def __init__(self, ch_in, ch_out, stride=1):
        super(CustomBlk, self).__init__()
        self.conv1 = nn.Conv2d(ch_in, ch_out, kernel_size=3, stride=stride, padding=1)
        self.bn1 = nn.BatchNorm2d(ch_out)
        self.conv2 = nn.Conv2d(ch_out, ch_out, kernel_size=3, stride=1, padding=1)
        self.bn2 = nn.BatchNorm2d(ch_out)
        self.extra = nn.Sequential()
        if ch_out != ch_in:
            self.extra = nn.Sequential(
                nn.Conv2d(ch_in, ch_out, kernel_size=1, stride=stride),
                nn.BatchNorm2d(ch_out)
            )

    def forward(self, x):
        out = functional.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = self.extra(x) + out          # shortcut: add the (projected) input to the residual
        out = functional.relu(out)
        return out

class CustomResNet(nn.Module):
    # a stack of residual blocks followed by a fully connected classification layer
    def __init__(self, num_class):
        super(CustomResNet, self).__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=3, padding=0),
            nn.BatchNorm2d(16)
        )
        self.blk1 = CustomBlk(16, 32, stride=3)
        self.blk2 = CustomBlk(32, 64, stride=3)
        self.blk3 = CustomBlk(64, 128, stride=2)
        self.blk4 = CustomBlk(128, 256, stride=2)
        self.outlayer = nn.Linear(256 * 3 * 3, num_class)

    def forward(self, x):
        x = functional.relu(self.conv1(x))
        x = self.blk1(x)
        x = self.blk2(x)
        x = self.blk3(x)
        x = self.blk4(x)
        x = x.view(x.size(0), -1)          # flatten before the fully connected layer
        x = self.outlayer(x)
        return x
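As a quick sanity check (an illustrative addition; the 224x224 input size is an assumption chosen so that the flattened feature map matches the 256*3*3 input dimension of the final linear layer), the custom residual network can be exercised as follows:

model = CustomResNet(num_class=2)
x = torch.randn(1, 3, 224, 224)        # one RGB image of 224x224 pixels
y = model(x)
print(y.shape)                         # torch.Size([1, 2])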
Compared with the traditional U-Net, the U-Net improved network normalizes the output of each layer and introduces the self-defined residual network above. Structurally, as shown in FIG. 3, the arrows represent "shortcut" connections, the first block represents the result after the "shortcut" connection, and the second block represents the replenishment of boundary information during the upsampling process.
The U-Net improved network has a deeper network hierarchy and more training parameters, which alleviates the insufficient depth of the traditional U-Net network to a certain extent. Meanwhile, owing to the properties of the residual network, it also mitigates the performance degradation seen in extremely deep convolutional neural networks.

These changes and improvements enable our U-Net improved network to achieve stronger performance in the image segmentation task.
3. Optimizing model stability using batch normalization
Batch normalization (Batch Normalization) is a commonly used deep neural network optimization technique that helps to speed up convergence of network training and improve model stability and generalization ability. The following are the specific implementation steps of batch normalization:
a) Add the batch normalization layer after a convolution layer or fully connected layer, which is where it typically operates. Normalizing the output of each layer reduces the gradient vanishing and gradient explosion problems, thereby speeding up the training of the network.
b) The mean and variance are calculated, and for each batch of training data, the mean and variance of the features are calculated in the channel dimension. This can be obtained by calculating the mean and variance of each channel and then averaging over the whole batch.
c) Apply normalization: normalize the features using the computed mean and variance. For an input x with mean $\mu$ and variance $\sigma^2$, the batch normalization operation can be expressed as:

$$\hat{x} = \frac{x - \mu}{\sqrt{\sigma^2 + \epsilon}}$$

wherein $\epsilon$ is a small constant that prevents the denominator from being 0.

d) Scale and shift the normalized features so that the network can learn a transformation appropriate for the task. Two learnable parameters $\gamma$ and $\beta$ are introduced to scale and shift the normalized features:

$$y = \gamma\hat{x} + \beta$$

where y is the final output feature.
e) Update during training: the parameters $\gamma$ and $\beta$ are updated during training, and the running mean and variance are updated by exponential moving averages to maintain stability.

f) Through batch normalization, the output of each layer is normalized by its mean and variance and then rescaled and shifted by the learnable parameters, which reduces internal covariate shift and thereby improves the stability and generalization ability of the network. A minimal sketch of these steps follows this list.
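The following Python/NumPy sketch (illustrative only) implements the training-time statistics of steps b) through d) for a 4-D batch of feature maps; gamma and beta are treated as plain arrays rather than learned parameters:

import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # x has shape (batch, channels, height, width); statistics are taken per channel
    mu = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)      # step c): normalize
    return gamma * x_hat + beta                # step d): scale and shift

x = np.random.randn(8, 16, 32, 32)
gamma = np.ones((1, 16, 1, 1))
beta = np.zeros((1, 16, 1, 1))
y = batch_norm(x, gamma, beta)
print(y.mean(), y.std())                       # roughly 0 and 1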
4. "shortcut" layer
The U-Net improved network introduces a "shortcut" layer; its basic network structure is shown in FIG. 3, and in the present invention it is formulated as:

$$Y = K(\omega X + bX)$$

where Y and X denote the output and input of the layer, $\omega$ represents a weight, K represents an activation function, and b is an adjustable parameter, set to 1 by default in this experiment. One "shortcut" layer may contain multiple convolution layers; writing $F(X)$ for the mapping realized by these convolution layers, the formulation becomes:

$$Y = K\bigl(F(X) + bX\bigr)$$

The introduction of the "shortcut" layer allows the U-Net network structure to be made deeper while avoiding excessively long training time, excessive training parameters and overfitting.
5. Loss function
The Loss Function is used to evaluate the degree of inconsistency between the predicted value and the reference value (ground truth); the smaller the loss function, the better the robustness of the model. In this experiment we use $L(X,Y)$ as the loss function of the model, where $L(X,Y)$ is as follows:

$$L(X,Y) = 1 - S(X,Y)$$

wherein X represents the division-point predicted value, Y represents the division-point reference value, and $S(X,Y)$ represents the degree of similarity between the two; the similarity $S(X,Y)$ is given by:

$$S(X,Y) = \frac{2\left|X \cap Y\right| + k}{\left|X\right| + \left|Y\right| + k}$$

wherein k is a smoothing value, $2\left|X \cap Y\right|$ represents the intersection (overlap) between the two samples, and $\left|X\right| + \left|Y\right|$ represents the total of the predicted and reference values. Because some images contain no vascular region and the corresponding label map is blank, the smoothing value k is added to keep the function well defined. The smaller the loss function, the better the robustness of the model.
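A small PyTorch sketch of this smoothed Dice-style loss (illustrative; it treats the prediction and the reference as masks of equal shape and uses the loss form L = 1 - S reconstructed above):

import torch

def dice_similarity(x, y, k=1.0):
    # x: predicted mask probabilities, y: reference mask, both of the same shape
    intersection = (x * y).sum()
    return (2.0 * intersection + k) / (x.sum() + y.sum() + k)

def dice_loss(x, y, k=1.0):
    return 1.0 - dice_similarity(x, y, k)

pred = torch.rand(1, 1, 64, 64)                    # placeholder prediction
ref = (torch.rand(1, 1, 64, 64) > 0.5).float()     # placeholder reference mask
print(dice_loss(pred, ref))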
6. Optimizing functions
The optimization function helps the model adjust its weights during training so that the weights become optimal and the loss function is minimized. The Adam optimization function is adopted here, which has the advantages of high computational efficiency, low memory footprint and good handling of non-stationary objectives.
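A brief sketch of wiring these pieces together with the Adam optimizer (illustrative; the tiny convolutional model, the learning rate and the reuse of the dice_loss sketch above are placeholder assumptions):

import torch
from torch import nn

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, kernel_size=1), nn.Sigmoid(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.randn(2, 3, 64, 64)                 # placeholder batch
masks = (torch.rand(2, 1, 64, 64) > 0.5).float()   # placeholder labels
optimizer.zero_grad()
loss = dice_loss(model(images), masks)             # dice_loss from the sketch above
loss.backward()
optimizer.step()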
7. Experimental analysis
We used fundus examination image data from cooperating medical institutions and a public global ophthalmic image database [http://www.ykxjz.com/docs/tzgg/details.Aspxdocultid=54&nid=A967CBAD-BC53-4787-88ED-CD9D9ABAA7DE] to train the U-Net network, the SegNet network and the U-Net improved network, with approximately 6500 sets of fundus examination image data as training data. The experimental results show that the segmentation effect of the U-Net improved network is higher than that of the U-Net network and the SegNet network. The segmentation results on part of the fundus examination image data are shown in FIG. 5.
8. Experimental data
The experimental data are fundus examination image data sourced from several medical institutions, ophthalmic clinics, research institutions and public global ophthalmic image databases. In the retinal vessel segmentation training stage of the U-Net improved network, a large number of annotated images must be prepared, comprising the original fundus images and the vessel segmentation result corresponding to each image.

The fundus examination image dataset is decompressed locally, a dataset path index file is created, and the startup program is launched; the U-Net improved network model is configured for test training, the AUC of the ROC curve is stored when testing is enabled, and the model with the best validation-set performance is saved. The test set is then evaluated, the results are saved to a local file, and the corresponding visualizations are drawn. As shown in FIG. 6 and FIG. 7, the AUC-ROC and AUC-PR results indicate that the accuracy of the test results is higher than expected.
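A short sketch of this evaluation step (illustrative; it assumes the per-pixel vessel probabilities and ground-truth labels have already been flattened into 1-D arrays, and uses scikit-learn's metric functions):

import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

# placeholder per-pixel vessel probabilities and ground-truth labels
y_true = np.random.randint(0, 2, size=10000)
y_prob = np.clip(y_true * 0.7 + np.random.rand(10000) * 0.3, 0, 1)

print("AUC-ROC:", roc_auc_score(y_true, y_prob))
print("AUC-PR:", average_precision_score(y_true, y_prob))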
9. Evaluation criteria
In the experiment, the Dice coefficient is used to evaluate the quality of the model. The Dice coefficient is a set-similarity measure used to judge the degree of similarity between two samples; the more similar the two samples, the larger the Dice coefficient. The Dice coefficient is defined as:

$$\mathrm{Dice}(X,Y) = \frac{2\left|X \cap Y\right|}{\left|X\right| + \left|Y\right|}$$

wherein X represents the division-point predicted value, Y represents the division-point reference value, $2\left|X \cap Y\right|$ represents the intersection (overlap) between the two samples, and $\left|X\right| + \left|Y\right|$ represents the total of the predicted and reference values. In the experiment, when the predicted value is exactly the same as the reference value, the Dice coefficient is 1; when the predicted value is completely uncorrelated with the reference value, the Dice coefficient is 0. The larger the Dice coefficient, the higher the similarity of the two images and the more accurate the model.
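A compact NumPy sketch of this evaluation on binary masks (illustrative; the masks are 0/1 arrays of equal shape):

import numpy as np

def dice_coefficient(pred, ref):
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum())

mask = np.random.randint(0, 2, size=(64, 64))
print(dice_coefficient(mask, mask))        # identical masks give 1.0
print(dice_coefficient(mask, 1 - mask))    # disjoint masks give 0.0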
10. Experimental results and assessment
(1) Experimental results
The 6500 groups of fundus examination image data were used as the training set for three networks: U-Net, SegNet and the improved U-Net. The three trained network models were then used to predict the data in the test set, the prediction results were matched against the pathological-myopia condition of the actual patients, and the evaluation results are shown in Table 1. From Table 1 we can see that the segmentation effect of the improved U-Net network is significantly higher than that of the SegNet and U-Net networks: compared with the U-Net network the segmentation effect improves by 15%, and compared with the SegNet network it improves by 10%. The table lists the Dice coefficients obtained on the test results by the three different network structures:
table 1 3 comparison of training coefficients for network models
In summary, the improved U-Net network segmentation effect is significantly better than that of the U-Net network and the SegNet network.
(2) Training time assessment
The improved U-Net network introduces a self-defined residual network that deepens the U-Net, so its number of training parameters is higher than that of the U-Net network but far smaller than that of the SegNet network. Table 2 compares the training time of the three network structures; by introducing measures such as normalization, the training time of the improved U-Net is kept down. From the table we can see that the training time of the improved U-Net network is slightly longer than that of the U-Net network but far shorter than that of the SegNet network, roughly 1/5 of the latter.
Table 2 training time comparisons for three different networks
(3) Training process assessment
FIG. 4 shows the relationship between the loss function and the number of iterations during model training, comparing the training processes of the three models. The analysis shows that the improved U-Net network exhibits a faster downward trend during training, and its accuracy exceeds that of the original U-Net network and the SegNet network. It can therefore be concluded that the improved U-Net network is more robust than the original U-Net network and the SegNet network. The experimental results further confirm that the segmentation effect of the improved U-Net network is clearly better than that of the original U-Net network and the SegNet network.
The experimental data show that the improved U-Net network increases the depth of the network by introducing a residual network, so that high-dimensional image features can be captured better and the segmentation accuracy is improved. Meanwhile, the added normalization processing markedly improves the training speed of the model and enhances the accuracy of the training process. The study shows that the segmentation effect of the improved U-Net network is clearly better than that of the original U-Net network and the SegNet network, while, compared with SegNet, the training time is shorter and the training parameters are fewer. Specifically, the segmentation effect improves by 15% relative to the original U-Net network and by 10% relative to the SegNet network.
These results fully demonstrate the superiority of the improved U-Net network, which exhibits excellent performance in the image segmentation task.
Example 3
The blood vessel segmentation method based on the fundus examination image is used in actual medical diagnosis.
Pathological myopia exhibits characteristic manifestations in the fundus, and the fundus lesions (following the classification of myopic macular lesions) are divided into five grades: normal (no maculopathy), tessellated (leopard-like) fundus, diffuse chorioretinal atrophy, patchy chorioretinal atrophy, and macular atrophy. Normal indicates no apparent myopic maculopathy, while the other grades correspond to different degrees of pathological change.
Pathological myopia is often accompanied by several complications, including lacquer cracks, choroidal neovascularization (CNV) and Fuchs spots, which have a major impact on vision. These three lesions are the "Plus" lesions. Such complications may lead to vision loss and severely affect the patient's quality of life.
The blood vessel segmentation system based on fundus examination images has now entered the stage of clinical functional testing. A doctor can combine the output eye-vessel segmentation result with the axial length and the diopter in a fused detection system to further judge the patient's pathological-myopia diagnosis; the fundus examination image data are uploaded, evaluation results are produced through synchronous rapid analysis, and the doctors' clinical diagnosis plans and feedback comments are collected:
table 3 clinical data evaluation feedback statistics
In summary, in clinical diagnosis, observation and analysis of fundus lesions are critical to determining a patient's pathological-myopia grade. Using the collected fundus examination image data and the fundus vessel images obtained after vessel segmentation of the fundus examination images by the U-Net improved network, the subsequent evaluation reached an accuracy of 99.3% while improving time efficiency by 21%, greatly improving doctors' diagnostic efficiency and accuracy. Combined with the diagnostic evaluation results, the corresponding reference data and treatment plans can be located quickly, helping ophthalmologists to assess the condition more accurately, locate the cause of the pathology, and formulate corresponding treatment and management plans.
Claims (9)
1. A blood vessel segmentation method based on fundus examination images, comprising:
step one, collecting fundus examination image data; the fundus examination image data includes at least a color fundus image and an optical coherence tomography image;
step two, preprocessing the fundus examination image data; the preprocessing comprises denoising, contrast enhancement and color normalization, and is used for reducing noise and improving image quality;
step three, based on the position of the detection target with the largest mutual information in space in the image data, obtaining the optimal registration of the detection target;
extracting blood vessel features in the registered image data by using a preset image processing method, wherein the blood vessel features comprise edges, textures, colors and intensity gradients;
step five, selecting part of data from the blood vessel characteristics in the step four as a training set and a testing set, establishing a neural network model based on machine learning, training the neural network model by using the training set, testing the trained neural network model by using the testing set, and finally outputting a blood vessel segmentation result by using the neural network model;
wherein the neural network model is constructed based on a U-Net improved network comprising a contraction network and an expansion network, the contraction network comprises a shortcut layer, and $L(X,Y)$ is used as the loss function of the model:

$$L(X,Y) = \frac{1}{N}\sum_{i=1}^{N}\bigl(1 - S(X_i, Y_i)\bigr)$$

wherein X represents a predicted value of a division point, Y represents a reference value of the division point, N represents the number of inputs, and $S(X,Y)$ represents the degree of similarity between the prediction and the reference; the similarity $S(X,Y)$ is given by:

$$S(X,Y) = \frac{2\left|X \cap Y\right| + k}{\left|X\right| + \left|Y\right| + k}$$

wherein k is a smoothing value, $2\left|X \cap Y\right|$ represents the intersection (overlap) between the two samples, and $\left|X\right| + \left|Y\right|$ represents the total of the predicted and reference values.
2. The fundus image based blood vessel segmentation method according to claim 1, wherein said image preprocessing step further comprises an image enhancement step for enhancing contrast and sharpness of fundus image data.
3. The method of claim 1, wherein the fourth step further comprises extracting features on edges, textures, colors, and intensity gradients from the raw data of the fundus image data.
4. The blood vessel segmentation method based on fundus examination image according to claim 1, wherein, in step five, an Adam optimization algorithm is adopted to adjust the model parameters and a residual network is used to increase the network depth.
5. The method of claim 1, further comprising the step of updating and optimizing the trained model based on new ocular image data.
6. A vessel segmentation device based on fundus examination images, the device comprising:
the data input interface is used for acquiring fundus examination image data; the fundus examination image data includes at least a color fundus image and an optical coherence tomography image;
the image preprocessing module is used for preprocessing the fundus examination image data, wherein the preprocessing of the fundus examination image data comprises denoising, contrast enhancement and color standardization so as to reduce noise and improve image quality;
the multi-mode image registration module obtains the optimal registration of the detection targets based on the position of the detection targets with the largest mutual information in space in the image data;
the feature extraction module is used for extracting blood vessel features in the registered image data by using a preset image processing method, wherein the blood vessel features comprise edges, textures, colors and intensity gradients;
the U-Net improved network model module is used for model training of retinal blood vessel segmentation: part of the data is selected from the blood vessel features as a training set and a testing set, a machine-learning neural network model is established, the neural network model is trained with the training set, the trained neural network model is tested with the testing set, and finally the blood vessel segmentation result is output by the neural network model; wherein the neural network model is built based on a U-Net improved network comprising a contraction network and an expansion network, the contraction network comprises a shortcut layer, and L(X, Y) is taken as the loss function of the model: $L(X,Y) = \frac{1}{N}\sum_{i=1}^{N}\bigl(1 - S(X_i, Y_i)\bigr)$, wherein X represents a division-point predicted value, Y represents a division-point reference value, N represents the number of inputs, and S(X, Y) represents the degree of similarity; wherein the similarity S(X, Y) is as follows: $S(X,Y) = \frac{2\left|X \cap Y\right| + k}{\left|X\right| + \left|Y\right| + k}$, where $2\left|X \cap Y\right|$ represents the intersection or overlap between the two samples, $\left|X\right| + \left|Y\right|$ represents the total of the predicted and reference values, and k is a smoothing value.
7. The fundus-examination-image-based blood vessel segmentation device according to claim 6, wherein the image preprocessing module further comprises an image enhancement unit for enhancing contrast and sharpness of the fundus examination image data.
8. A vessel segmentation system based on fundus examination images, comprising: fundus examination apparatus, optical coherence tomography scanner, and data processing apparatus; wherein the data processing apparatus comprises a data input interface, a processor, a memory storing a computer program executable on the processor, and input and output means;
the fundus examination device and the optical coherence tomography scanner are configured to acquire fundus images and OCT images and to transmit them to the data processing device for processing via a data input interface of the data processing device; a processor of the data processing device is configured to execute the computer program to implement the steps recited in the method of claim 1, thereby recognizing the fundus images and OCT images to obtain a recognition result and outputting it.
9. A computer readable storage medium, characterized in that it stores a computer program which, when executed by a processor, can implement the steps of the method of claim 1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311319781.6A CN117058676B (en) | 2023-10-12 | 2023-10-12 | Blood vessel segmentation method, device and system based on fundus examination image |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311319781.6A CN117058676B (en) | 2023-10-12 | 2023-10-12 | Blood vessel segmentation method, device and system based on fundus examination image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117058676A CN117058676A (en) | 2023-11-14 |
CN117058676B true CN117058676B (en) | 2024-02-02 |
Family
ID=88661257
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311319781.6A Active CN117058676B (en) | 2023-10-12 | 2023-10-12 | Blood vessel segmentation method, device and system based on fundus examination image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117058676B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117372284B (en) * | 2023-12-04 | 2024-02-23 | 江苏富翰医疗产业发展有限公司 | Fundus image processing method and fundus image processing system |
CN117809839B (en) * | 2024-01-02 | 2024-05-14 | 珠海全一科技有限公司 | Correlation analysis method for predicting hypertensive retinopathy and related factors |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106408562A (en) * | 2016-09-22 | 2017-02-15 | 华南理工大学 | Fundus image retinal vessel segmentation method and system based on deep learning |
CN109087302A (en) * | 2018-08-06 | 2018-12-25 | 北京大恒普信医疗技术有限公司 | A kind of eye fundus image blood vessel segmentation method and apparatus |
CN110570446A (en) * | 2019-09-20 | 2019-12-13 | 河南工业大学 | Fundus retina image segmentation method based on generation countermeasure network |
CN114881962A (en) * | 2022-04-28 | 2022-08-09 | 桂林理工大学 | Retina image blood vessel segmentation method based on improved U-Net network |
CN116452571A (en) * | 2023-04-26 | 2023-07-18 | 四川吉利学院 | Image recognition method based on deep neural network |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3629898A4 (en) * | 2017-05-30 | 2021-01-20 | Arterys Inc. | Automated lesion detection, segmentation, and longitudinal identification |
US11990224B2 (en) * | 2020-03-26 | 2024-05-21 | The Regents Of The University Of California | Synthetically generating medical images using deep convolutional generative adversarial networks |
US11580646B2 (en) * | 2021-03-26 | 2023-02-14 | Nanjing University Of Posts And Telecommunications | Medical image segmentation method based on U-Net |
-
2023
- 2023-10-12 CN CN202311319781.6A patent/CN117058676B/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106408562A (en) * | 2016-09-22 | 2017-02-15 | 华南理工大学 | Fundus image retinal vessel segmentation method and system based on deep learning |
CN109087302A (en) * | 2018-08-06 | 2018-12-25 | 北京大恒普信医疗技术有限公司 | A kind of eye fundus image blood vessel segmentation method and apparatus |
CN110570446A (en) * | 2019-09-20 | 2019-12-13 | 河南工业大学 | Fundus retina image segmentation method based on generation countermeasure network |
CN114881962A (en) * | 2022-04-28 | 2022-08-09 | 桂林理工大学 | Retina image blood vessel segmentation method based on improved U-Net network |
CN116452571A (en) * | 2023-04-26 | 2023-07-18 | 四川吉利学院 | Image recognition method based on deep neural network |
Non-Patent Citations (3)
Title |
---|
U-Net-based nodule segmentation method; Xu Feng, Zheng Bin, Guo Jinxiang, Liu Libo; Software Guide (Issue 08); full text *
Research on retinal vessel image segmentation based on a U-shaped network; Zhou Shujie; China Master's Theses Full-text Database, Medicine & Health Sciences; E073-56 *
Improved U-Net-based retinal vessel image segmentation algorithm; Li Daxiang, Zhang Zhen; Acta Optica Sinica (Issue 10); full text *
Also Published As
Publication number | Publication date |
---|---|
CN117058676A (en) | 2023-11-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110197493B (en) | Fundus image blood vessel segmentation method | |
EP3674968B1 (en) | Image classification method, server and computer readable storage medium | |
US11636340B2 (en) | Modeling method and apparatus for diagnosing ophthalmic disease based on artificial intelligence, and storage medium | |
CN117058676B (en) | Blood vessel segmentation method, device and system based on fundus examination image | |
CN109325942B (en) | Fundus image structure segmentation method based on full convolution neural network | |
CN109726743B (en) | Retina OCT image classification method based on three-dimensional convolutional neural network | |
CN111259982A (en) | Premature infant retina image classification method and device based on attention mechanism | |
Hassan et al. | Deep learning based joint segmentation and characterization of multi-class retinal fluid lesions on OCT scans for clinical use in anti-VEGF therapy | |
CN112132817A (en) | Retina blood vessel segmentation method for fundus image based on mixed attention mechanism | |
CN111862009B (en) | Classifying method of fundus OCT (optical coherence tomography) images and computer readable storage medium | |
de Moura et al. | Joint diabetic macular edema segmentation and characterization in OCT images | |
CN109447962A (en) | A kind of eye fundus image hard exudate lesion detection method based on convolutional neural networks | |
CN113160226A (en) | Two-way guide network-based classification segmentation method and system for AMD lesion OCT image | |
CN111563884A (en) | Neural network-based fundus disease identification method, computer device, and medium | |
CN113889267A (en) | Method for constructing diabetes diagnosis model based on eye image recognition and electronic equipment | |
CN113012163A (en) | Retina blood vessel segmentation method, equipment and storage medium based on multi-scale attention network | |
Abbasi-Sureshjani et al. | Boosted exudate segmentation in retinal images using residual nets | |
CN112806957B (en) | Keratoconus and subclinical keratoconus detection system based on deep learning | |
CN108665474A (en) | A kind of eye fundus image Segmentation Method of Retinal Blood Vessels based on B-COSFIRE | |
Pappu et al. | EANet: Multiscale autoencoder based edge attention network for fluid segmentation from SD‐OCT images | |
Ferreira et al. | Multilevel cnn for angle closure glaucoma detection using as-oct images | |
CN116246331B (en) | Automatic keratoconus grading method, device and storage medium | |
CN117314935A (en) | Diffusion model-based low-quality fundus image enhancement and segmentation method and system | |
CN116092667A (en) | Disease detection method, system, device and storage medium based on multi-mode images | |
Thanh et al. | A real-time classification of glaucoma from retinal fundus images using AI technology |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CB03 | Change of inventor or designer information |
Inventor after: Yan Chenchen Inventor after: Zhou Haiying Inventor after: Ji Haixia Inventor after: She Haicheng Inventor before: Yan Chenchen Inventor before: Zhou Haiying Inventor before: Ji Haixia Inventor before: Yu Haicheng |
|
CB03 | Change of inventor or designer information |