CN113012093B - Training method and training system for glaucoma image feature extraction - Google Patents
- Publication number: CN113012093B
- Application number: CN202010702643A
- Authority
- CN
- China
- Prior art keywords: image, weight, blood vessel, disc, region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06V10/82 — Image or video recognition using pattern recognition or machine learning using neural networks
- G06N3/0464 — Convolutional networks [CNN, ConvNet]
- G06N3/08 — Neural network learning methods
- G06T7/0012 — Biomedical image inspection
- G06T7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
- G06V10/267 — Segmentation of patterns by performing operations on regions, e.g. growing, shrinking or watersheds
- G06V10/40 — Extraction of image or video features
- G06V10/764 — Recognition using classification, e.g. of video objects
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/20132 — Image cropping
- G06T2207/30041 — Eye; Retina; Ophthalmic
Abstract
The present disclosure describes a training method for glaucoma image feature extraction based on an artificial neural network, comprising: preparing a fundus image and labeling images, wherein the labeling images comprise an optic disc labeling image in which the optic disc region is labeled and an optic cup labeling image in which the optic cup region is labeled; preprocessing the fundus image to obtain a preprocessed fundus image, and performing blood vessel detection to obtain a blood vessel image; performing mixed weighting on the optic disc labeling image and the blood vessel image to generate a mixed weight distribution map; and training the artificial neural network on the preprocessed fundus image and the labeling images using the mixed weight distribution map. The accuracy with which the artificial neural network recognizes glaucoma image features can thereby be improved.
Description
Technical Field
The disclosure relates to a training method and training system for glaucoma image feature extraction.
Background
Glaucoma has become the second leading blinding eye disease worldwide. The number of primary glaucoma patients globally runs into the millions, and a considerable proportion of them may go blind in both eyes. Glaucoma can cause irreversible blindness if it is not diagnosed early, so early glaucoma screening is of great importance.
The main pathology of glaucoma is the death of retinal ganglion cells and the loss of their axons, resulting in defects of the nerve fibers around the optic disc, which in turn cause morphological changes in the optic disc, such as enlargement and deepening of the optic disc cupping. Clinical research shows that the cup-to-disc ratio (CDR) of fundus images is a reliable index for measuring optic disc cupping, so glaucoma can be identified from the cup-to-disc ratio of a fundus image. Existing identification methods in clinical medicine process features in fundus images with artificial intelligence techniques to recognize the optic disc or optic cup and thereby identify lesions in the fundus.
However, because the optic disc region generally contains only 4-6 pairs of first- or second-order retinal arteries and veins, the artery and vein information is easily ignored when fundus images are recognized with artificial intelligence techniques, and the course of small blood vessels cannot be learned accurately, so that the optic disc or the optic cup cannot be recognized accurately.
Disclosure of Invention
The present disclosure has been made in view of the above-described circumstances, and an object thereof is to provide a training method and a training system for glaucoma image feature extraction by an artificial neural network that can improve the accuracy with which the artificial neural network extracts glaucoma image features.
To this end, a first aspect of the present disclosure provides a training method for glaucoma image feature extraction based on an artificial neural network, comprising: preparing a fundus image and labeling images, wherein the labeling images comprise an optic disc labeling image in which the optic disc region is labeled and an optic cup labeling image in which the optic cup region is labeled; preprocessing the fundus image to obtain a preprocessed fundus image, and generating a blood vessel image containing the blood vessel region from a blood vessel detection result; performing mixed weighting on the optic disc labeling image and the blood vessel image to generate a mixed weight distribution map; and training an artificial neural network based on the preprocessed fundus image, the labeling images and the mixed weight distribution map, wherein, in the mixed weighting, the weight of the optic disc region is made larger than that of the non-optic-disc region and the weight of the blood vessel region is made larger than that of the non-blood-vessel region. Because the artificial neural network is trained on the preprocessed fundus image, the labeling images and the mixed weight distribution map, training can take both the blood vessel region and the optic disc region into account, the learning of the course of small blood vessels is improved, and the imbalance between positive and negative samples is suppressed, so that the accuracy of the artificial neural network in extracting glaucoma image features can be improved.
In addition, in the training method for glaucoma image feature extraction based on an artificial neural network according to the first aspect of the present disclosure, optionally, the optic disc region in the optic disc labeling image is dilated to form an optic disc dilation image, and the blood vessel region in the blood vessel image is dilated to form a blood vessel dilation image. In this case, the dilation processing yields an optic disc dilation image that includes the region near the optic disc and a blood vessel dilation image that includes the blood vessel boundaries.
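As an illustration, the dilation of a binary optic disc or blood vessel mask can be sketched as follows (a minimal example; the disc-shaped structuring element and its radius are assumptions, since the patent does not fix them):

```python
import numpy as np
from scipy.ndimage import binary_dilation

def dilate_mask(mask, radius=3):
    """Dilate a binary mask with a disc-shaped structuring element,
    extending the region outward by `radius` pixels."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    selem = x ** 2 + y ** 2 <= radius ** 2
    return binary_dilation(mask, structure=selem)

# toy optic-disc mask: a small square region in a 20x20 field
disc_mask = np.zeros((20, 20), dtype=bool)
disc_mask[8:12, 8:12] = True
disc_dilated = dilate_mask(disc_mask)
```

The dilated mask covers the original region plus a band around it, which is how the optic disc dilation image comes to include the region near the optic disc.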
In addition, in the training method for glaucoma image feature extraction based on an artificial neural network according to the first aspect of the present disclosure, the optic disc dilation image and the blood vessel dilation image may optionally be mixed-weighted to generate the mixed weight distribution map. Thus, the accuracy of the artificial neural network in glaucoma image feature extraction can be further improved.
In addition, in the training method for glaucoma image feature extraction based on an artificial neural network according to the first aspect of the present disclosure, optionally, during training, the coefficients of the loss function of the artificial neural network are obtained from the mixed weight distribution map, and the artificial neural network is trained based on that loss function. In this case, the artificial neural network is optimized with loss-function coefficients derived from the mixed weight distribution map, and the imbalance between positive and negative samples can be suppressed, whereby the accuracy of the artificial neural network in extracting glaucoma image features can be further improved.
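The role the mixed weight distribution map plays in the loss can be sketched as a per-pixel weighted cross-entropy (a hypothetical minimal form; the patent does not commit to a particular loss function, and the example values are invented):

```python
import numpy as np

def weighted_bce(pred, target, weight_map, eps=1e-7):
    """Binary cross-entropy in which each pixel's term is scaled by the
    mixed weight map, so disc and vessel pixels dominate the gradient
    and the positive/negative sample imbalance is damped."""
    pred = np.clip(pred, eps, 1 - eps)
    ce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    return (weight_map * ce).sum() / (weight_map.sum() + eps)

# toy 2x2 case: the map emphasises one pixel and zeroes the background
pred = np.array([[0.9, 0.5], [0.5, 0.1]])
target = np.array([[1.0, 1.0], [0.0, 0.0]])
weights = np.array([[4.0, 1.0], [1.0, 0.0]])  # zero = background region
loss = weighted_bce(pred, target, weights)
```

Pixels with zero weight (the background) contribute nothing to the loss, and heavily weighted pixels (disc, vessels) dominate it.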
In addition, in the training method for glaucoma image feature extraction based on an artificial neural network according to the first aspect of the present disclosure, optionally, the mixed weight distribution map includes a fundus region and a background region, and the weight of the background region is set to zero. Thus, interference of the background region with the glaucoma image features recognized by the artificial neural network can be reduced.
In addition, in the training method for glaucoma image feature extraction based on an artificial neural network according to the first aspect of the present disclosure, optionally, the preprocessing includes cropping and normalization of the fundus image. In this case, cropping converts the fundus image into an image of a fixed standard form, and normalization overcomes the variability between different fundus images, making it easier for the artificial neural network to extract glaucoma image features.
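A minimal sketch of such preprocessing, assuming a center crop, a fixed output size, and per-image zero-mean/unit-variance normalization (none of which is fixed by the patent):

```python
import numpy as np

def preprocess(fundus, size=256):
    """Center-crop a grayscale fundus image to a square, resize it to a
    fixed standard form, and normalize it to zero mean and unit variance."""
    h, w = fundus.shape[:2]
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    crop = fundus[top:top + s, left:left + s]
    # nearest-neighbour resampling to size x size
    idx = (np.arange(size) * s / size).astype(int)
    resized = crop[idx][:, idx]
    return (resized - resized.mean()) / (resized.std() + 1e-8)

# example: a 600x800 synthetic grayscale image
rng = np.random.default_rng(0)
out = preprocess(rng.random((600, 800)))
```

After this step every image has the same shape and comparable intensity statistics, regardless of camera or exposure.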
In addition, in the training method for glaucoma image feature extraction based on an artificial neural network according to the first aspect of the present disclosure, optionally, the mixed weight distribution map includes an intra-optic-disc blood vessel region, an intra-optic-disc non-blood vessel region, an extra-optic-disc blood vessel region, and an extra-optic-disc non-blood vessel region. Thus, the artificial neural network can more accurately extract the glaucoma image characteristics of each region in the fundus image.
In addition, in the training method for glaucoma image feature extraction based on an artificial neural network according to the first aspect of the present disclosure, optionally, the weight of the optic disc region is a first weight, the weight of the non-optic-disc region is a second weight, the weight of the blood vessel region is a third weight, and the weight of the non-blood-vessel region is a fourth weight; the weight of the intra-optic-disc blood vessel region is the first weight multiplied by the third weight, the weight of the intra-optic-disc non-blood-vessel region is the first weight multiplied by the fourth weight, the weight of the extra-optic-disc blood vessel region is the second weight multiplied by the third weight, and the weight of the extra-optic-disc non-blood-vessel region is the second weight multiplied by the fourth weight. Thus, the weights of the intra-optic-disc blood vessel region, the intra-optic-disc non-blood-vessel region, the extra-optic-disc blood vessel region, and the extra-optic-disc non-blood-vessel region can be obtained from the weights of the optic disc region, the non-optic-disc region, the blood vessel region, and the non-blood-vessel region.
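The multiplicative scheme above can be sketched directly (the concrete weight values are assumptions chosen for illustration):

```python
import numpy as np

# binary masks for a toy 4x4 fundus image (1 = inside the region)
disc = np.array([[0, 0, 0, 0],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])
vessel = np.array([[0, 1, 0, 0],
                   [0, 1, 0, 0],
                   [0, 1, 0, 0],
                   [0, 1, 0, 0]])

w1, w2 = 10.0, 1.0  # first weight (disc) > second weight (non-disc)
w3, w4 = 5.0, 1.0   # third weight (vessel) > fourth weight (non-vessel)

disc_w = np.where(disc == 1, w1, w2)
vessel_w = np.where(vessel == 1, w3, w4)
mixed = disc_w * vessel_w  # element-wise product yields all four region weights
```

Each pixel then carries one of four values: w1*w3 for intra-disc vessels, w1*w4 for intra-disc non-vessels, w2*w3 for extra-disc vessels, and w2*w4 elsewhere.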
In addition, in the training method for glaucoma image feature extraction based on an artificial neural network according to the first aspect of the present disclosure, optionally, blood vessel region detection is performed based on Frangi filtering to form the blood vessel image. Thus, the blood vessel region can be identified automatically, which facilitates its recognition and processing by the subsequent artificial neural network.
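To illustrate the idea behind Frangi filtering, below is a condensed single-scale vesselness filter built from the eigenvalues of the image Hessian (a sketch under assumed parameters, not the patent's implementation; in practice a multi-scale routine such as `skimage.filters.frangi` would typically be used):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness(img, sigma=2.0, beta=0.5):
    """Single-scale Frangi-style vesselness for dark tubular structures."""
    # Hessian of the image via Gaussian second derivatives
    Hrr = gaussian_filter(img, sigma, order=(2, 0))
    Hcc = gaussian_filter(img, sigma, order=(0, 2))
    Hrc = gaussian_filter(img, sigma, order=(1, 1))
    # eigenvalues of the symmetric 2x2 Hessian at every pixel
    tmp = np.sqrt((Hrr - Hcc) ** 2 + 4 * Hrc ** 2)
    mu1 = 0.5 * (Hrr + Hcc + tmp)
    mu2 = 0.5 * (Hrr + Hcc - tmp)
    swap = np.abs(mu1) > np.abs(mu2)
    lam1 = np.where(swap, mu2, mu1)   # |lam1| <= |lam2|
    lam2 = np.where(swap, mu1, mu2)
    Rb2 = (lam1 / (lam2 + 1e-12)) ** 2       # "blobness" ratio
    S2 = lam1 ** 2 + lam2 ** 2               # "structureness"
    c = 0.5 * np.sqrt(S2.max()) + 1e-12      # scale suggested by Frangi
    v = np.exp(-Rb2 / (2 * beta ** 2)) * (1 - np.exp(-S2 / (2 * c ** 2)))
    v[lam2 < 0] = 0   # keep dark-on-bright ridges (fundus vessels are dark)
    return v

# dark horizontal "vessel" on a bright background
img = np.ones((40, 40))
img[20, :] = 0.0
v = vesselness(img)
```

The response is high along the line (small |lam1|, large lam2) and near zero in flat regions, which is what makes the filter a vessel detector.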
A second aspect of the present disclosure provides a training system for glaucoma image feature extraction based on an artificial neural network, comprising: an acquisition module that acquires a fundus image and labeling images, the labeling images comprising an optic disc labeling image in which the optic disc region is labeled and an optic cup labeling image in which the optic cup region is labeled; an image preprocessing module that preprocesses the fundus image to obtain a preprocessed fundus image; a blood vessel region detection module that performs blood vessel region detection on the preprocessed fundus image to form a blood vessel image; a mixing weight generation module that performs mixed weighting on the optic disc labeling image and the blood vessel image to generate a mixed weight distribution map; and a model training module that trains the artificial neural network based on the preprocessed fundus image, the labeling images and the mixed weight distribution map, wherein, in the mixed weighting, the weight of the optic disc region is made larger than that of the non-optic-disc region and the weight of the blood vessel region is made larger than that of the non-blood-vessel region. In this system, the artificial neural network is trained on the preprocessed fundus image, the labeling images and the mixed weight distribution map, so that training can take both the blood vessel region and the optic disc region into account, the learning of the course of small blood vessels is improved, and the imbalance between positive and negative samples is suppressed, thereby improving the accuracy of the artificial neural network in extracting glaucoma image features.
Additionally, in the training system for artificial neural network-based glaucoma image feature extraction according to the second aspect of the present disclosure, optionally, the optic disc region in the optic disc labeling image is dilated to form an optic disc dilation image, and the blood vessel region in the blood vessel image is dilated to form a blood vessel dilation image. In this case, the dilation processing yields an optic disc dilation image that includes the region near the optic disc and a blood vessel dilation image that includes the blood vessel boundaries.
In addition, in the training system for glaucoma image feature extraction based on an artificial neural network according to the second aspect of the present disclosure, the optic disc dilation image and the blood vessel dilation image may optionally be mixed-weighted to generate the mixed weight distribution map. Thus, the accuracy of the artificial neural network in glaucoma image feature extraction can be further improved.
In addition, in the training system for glaucoma image feature extraction based on an artificial neural network according to the second aspect of the present disclosure, optionally, during training, the coefficients of the loss function of the artificial neural network are obtained from the mixed weight distribution map, and the artificial neural network is trained based on that loss function. In this case, the artificial neural network is optimized with loss-function coefficients derived from the mixed weight distribution map, and the imbalance between positive and negative samples can be suppressed, whereby the accuracy of the artificial neural network in extracting glaucoma image features can be further improved.
In addition, in the training system for glaucoma image feature extraction based on an artificial neural network according to the second aspect of the present disclosure, optionally, the mixed weight distribution map includes a fundus region and a background region, and the weight of the background region is set to zero. Thus, interference of the background region with the glaucoma image features recognized by the artificial neural network can be reduced.
In addition, in the training system for glaucoma image feature extraction based on an artificial neural network according to the second aspect of the present disclosure, optionally, blood vessel region detection is performed based on Frangi filtering to form the blood vessel image. Thus, the blood vessel region can be identified automatically, which facilitates its recognition and processing by the subsequent artificial neural network.
In addition, in the training system for glaucoma image feature extraction based on an artificial neural network according to the second aspect of the present disclosure, optionally, the mixed weight distribution map includes an intra-optic-disc blood vessel region, an intra-optic-disc non-blood-vessel region, an extra-optic-disc blood vessel region, and an extra-optic-disc non-blood-vessel region. Thus, the artificial neural network can more accurately extract the glaucoma image features of each region of the fundus image.
In the training system for glaucoma image feature extraction based on an artificial neural network according to the second aspect of the present disclosure, the weight of the optic disc region may be a first weight, the weight of the non-optic disc region may be a second weight, the weight of the blood vessel region may be a third weight, the weight of the non-blood vessel region may be a fourth weight, the weight of the intra-optic disc blood vessel region may be the first weight multiplied by the third weight, the weight of the intra-optic disc non-blood vessel region may be the first weight multiplied by the fourth weight, the weight of the extra-optic disc blood vessel region may be the second weight multiplied by the third weight, and the weight of the extra-optic disc non-blood vessel region may be the second weight multiplied by the fourth weight. Thus, the weights of the in-disc blood vessel region, the in-disc non-blood vessel region, the out-of-disc blood vessel region, and the out-of-disc non-blood vessel region can be obtained from the weights of the disc region, the non-disc region, the blood vessel region, and the non-blood vessel region, respectively.
According to the present disclosure, it is possible to provide a training method and a training system for glaucoma image feature extraction based on an artificial neural network that can improve the accuracy with which the artificial neural network extracts glaucoma image features.
Drawings
Embodiments of the present disclosure will now be explained in further detail by way of example only with reference to the accompanying drawings, in which:
fig. 1 shows a schematic diagram of an electronic device according to an embodiment of the present disclosure.
Fig. 2 illustrates a block diagram of a training system for artificial neural network-based glaucoma image feature extraction in accordance with embodiments of the present disclosure.
Fig. 3 shows a schematic diagram of labeling fundus images to form a labeling image according to an embodiment of the present disclosure.
Fig. 4 shows a schematic diagram of a blood vessel image formed by blood vessel region detection on a preprocessed fundus image according to an embodiment of the present disclosure.
Fig. 5 shows the formation of a first mixed weight distribution map according to an embodiment of the present disclosure.
Fig. 6 shows a block diagram of a hybrid weight generation module of a training system according to an embodiment of the present disclosure.
Fig. 7 shows a schematic diagram of an optic disc dilation image according to an embodiment of the present disclosure.
Fig. 8 shows a schematic diagram of a blood vessel dilation image in accordance with an embodiment of the present disclosure.
Fig. 9 shows the formation of a second mixed weight distribution map according to an embodiment of the present disclosure.
Fig. 10 shows a flowchart of a training method of artificial neural network-based glaucoma image feature extraction according to an embodiment of the present disclosure.
Symbol description:
1 electronic device, 1a host, 1b display device, 1c input device, 10 training system, 100 acquisition module, 200 image preprocessing module, 300 blood vessel region detection module, 400 mixing weight generation module, 410 optic disc dilation module, 420 blood vessel dilation module, 500 model training module, P1 fundus image, P20 optic disc labeling image, P30 optic cup labeling image, P10 preprocessed fundus image, P40 blood vessel image, P21 weighted optic disc labeling image, P41 weighted blood vessel image, P3 first mixed weight distribution map, P23 optic disc dilation image, P43 blood vessel dilation image, P4 second mixed weight distribution map, A1 optic disc region, A1' non-optic-disc region, A2 optic cup region, A3 optic disc dilation region, A3' non-optic-disc dilation region, A4 blood vessel region, A4' non-blood-vessel region, A5 blood vessel dilation region, A5' non-blood-vessel dilation region, A30 first intra-optic-disc blood vessel region, A30' first intra-optic-disc non-blood-vessel region, A31 first extra-optic-disc blood vessel region, A31' first extra-optic-disc non-blood-vessel region, A40 second intra-optic-disc blood vessel region, A40' second intra-optic-disc non-blood-vessel region, A41 second extra-optic-disc blood vessel region, A41' second extra-optic-disc non-blood-vessel region.
Detailed Description
Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. In the following description, the same members are denoted by the same reference numerals, and duplicate descriptions are omitted. In addition, the drawings are schematic, and the ratio of the sizes of the components to each other, the shapes of the components, and the like may be different from actual ones.
Fig. 1 shows a schematic diagram of an electronic device according to an embodiment of the present disclosure. Fig. 2 illustrates a block diagram of a training system for artificial neural network-based glaucoma image feature extraction in accordance with embodiments of the present disclosure.
In some examples, referring to fig. 1 and 2, the training system 10 for artificial neural network-based glaucoma image feature extraction (hereinafter simply the "training system") according to the present disclosure may be implemented by means of an electronic device 1 (such as a computer). In some examples, as shown in fig. 1, the electronic device 1 may include a host 1a, a display device 1b, and an input device 1c (e.g., a mouse or a keyboard). The host 1a may comprise one or more processors, a memory, and a computer program stored in the memory; in this case, the training system 10 may be stored as a computer program in the memory.
In some examples, the glaucoma image features may be feature information related to glaucoma. The main pathological process of glaucoma is a defect of the nerve fibers at the optic disc margin caused by the death of retinal ganglion cells and the loss of their axons, which leads to morphological changes in the optic disc, such as enlargement and deepening of the optic disc cupping. Clinical research shows that the cup-to-disc ratio (CDR) of fundus images is a reliable indicator for measuring optic disc cupping. The glaucoma image features may thus be the optic cup and the optic disc. In other examples, the glaucoma image feature may be the cup-to-disc ratio (CDR).
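As an illustration of how the cup-to-disc ratio can be computed from segmentation results, below is a sketch of the vertical CDR from binary cup and disc masks (the vertical-diameter convention is a common one assumed here, not stated in the patent):

```python
import numpy as np

def vertical_cdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio: the ratio of the vertical extents of the
    optic cup and optic disc in binary segmentation masks."""
    def vertical_diameter(mask):
        rows = np.where(mask.any(axis=1))[0]
        return 0 if rows.size == 0 else rows[-1] - rows[0] + 1
    return vertical_diameter(cup_mask) / vertical_diameter(disc_mask)

# toy masks: the disc spans rows 10-29, the cup rows 15-24
disc = np.zeros((40, 40), dtype=bool)
disc[10:30, 10:30] = True
cup = np.zeros((40, 40), dtype=bool)
cup[15:25, 15:25] = True
cdr = vertical_cdr(cup, disc)
```

A larger CDR corresponds to a larger optic cup relative to the disc, the morphological change associated with glaucoma above.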
In some examples, the one or more processors may include a central processing unit, an image processing unit, and any other electronic component capable of processing input data. For example, the processor may execute instructions and programs stored on the memory.
As described above, training system 10 may be implemented by program instructions and algorithms encoded in a computer program. Additionally, training system 10 may also be stored in the memory of a cloud server. In some examples, the cloud server may be leased, which reduces server maintenance costs. In other examples, the cloud server may be self-built; in this case, the memory can be located in the self-built server, ensuring data confidentiality and preventing leakage of client or patient data.
In some examples, training system 10 may use one or more artificial neural networks to extract and learn glaucoma image features in fundus images. In some examples, the artificial neural network may be implemented by one or more processors (e.g., microprocessors, integrated circuits, field programmable gate arrays, etc.). The training system 10 may receive a plurality of input fundus images and train on them, with the artificial neural network parameter values determined iteratively from a training data set composed of a plurality of fundus images. In some examples, when training the artificial neural network, the pre-processed fundus image and the annotation image are used as inputs, and the output of the artificial neural network is continuously optimized by manually setting loss-function weights for each region (i.e., each pixel point) of the pre-processed fundus image.
In this embodiment, the training system 10 may include an acquisition module 100, an image preprocessing module 200, a blood vessel region detection module 300, a mixing weight generation module 400, and a model training module 500 (see fig. 2).
In some examples, the acquisition module 100 may be used to acquire fundus images and annotation images. The image preprocessing module 200 may be configured to preprocess the fundus image to obtain a pre-processed fundus image. The blood vessel region detection module 300 may be used to perform blood vessel region detection on the pre-processed fundus image to form a blood vessel image. The blending weight generation module 400 may be configured to apply mixed weighting to the disc labeling image and the vessel image to generate a mixed weight distribution map. The model training module 500 may train the artificial neural network based on the pre-processed fundus image, the annotation image, and the mixed weight distribution map. In the above example, training system 10 derives the mixed weight distribution map from the optic disc labeling image and the blood vessel image. In this case, by training the artificial neural network on the pre-processed fundus image, the annotation image, and the mixed weight distribution map, the network can balance the blood vessel region and the optic disc region during training, optimize the learning of small blood vessel courses, and suppress the imbalance of positive and negative samples, thereby improving the accuracy of glaucoma image feature extraction.
Fig. 3 shows a schematic diagram of labeling fundus images to form a labeling image according to an embodiment of the present disclosure, in which fig. 3 (a) shows a fundus image P1, fig. 3 (b) shows a disc labeling image P20, and fig. 3 (c) shows a cup labeling image P30.
In some examples, as described above, training system 10 may include acquisition module 100 (see fig. 2). The acquisition module 100 may be used to acquire fundus images and annotation images. The annotation image may be an image obtained after labeling the fundus image.
In some examples, the acquisition module 100 may be used to acquire fundus images. A fundus image may be an image of the fundus taken by a fundus camera or other fundus imaging apparatus. As an example, fig. 3 (a) shows a fundus image P1 photographed by a fundus camera. In some examples, the fundus image may include the areas of the optic disc and the optic cup, but the present embodiment is not limited thereto; in some examples, the fundus image may include only the optic disc area.
In some examples, the plurality of fundus images may constitute a training dataset. The training dataset may include a training set and a test set. For example, 50,000 to 200,000 fundus images from partner hospitals (with patient information removed) can be selected as the training set, and, for example, 5,000 to 20,000 fundus images as the test set.
In some examples, the fundus image may be a color fundus image. A color fundus image can clearly present rich fundus information such as the optic disc, optic cup, macula, and blood vessels. In addition, the fundus image may be an image in RGB mode, CMYK mode, Lab mode, grayscale mode, or the like.
In some examples, the acquisition module 100 may be used to acquire annotation images. The annotation image may include a disc annotation image and a cup annotation image. Medically, the optic disc and the optic cup have well-defined anatomical definitions: the optic disc is defined as the edge of the posterior scleral aperture, bounded by the inner edge of the scleral ring; the optic cup is defined as the range from the scleral lamina to the retinal plane, and the course of small blood vessels is an important basis for identifying the optic cup region.
In other examples, the annotation image obtained by the acquisition module 100 may include only the optic disc annotation image.
In some examples, the optic disc labeling image or the cup labeling image may be used as a truth value for artificial neural network training. In other examples, the optic disc annotation image and the cup annotation image may be combined into one annotation image as a truth value for artificial neural network training.
In some examples, as described above, the annotation image may be an image obtained after annotating the fundus image. In this case, the disc labeling image may be an image obtained after labeling the optic disc in the fundus image, and the cup labeling image may be an image obtained after labeling the optic cup in the fundus image.
Specifically, the disc region in the fundus image P1 may be manually labeled, thereby obtaining a disc labeled image P20 (see fig. 3 (b)). In some examples, manual labeling may be performed by an experienced physician, thereby enabling increased accuracy in labeling of optic disc regions. The cup region in the fundus image P1 may be manually labeled to obtain a cup labeled image P30 (see fig. 3 (c)). In some examples, manual labeling may be performed by an experienced physician, thereby enabling increased accuracy in labeling of the optic cup region.
In this embodiment, the image preprocessing module 200 may be configured to perform preprocessing on the fundus image to obtain a preprocessed fundus image. Specifically, the image preprocessing module 200 may acquire the fundus image output by the acquisition module 100, and perform preprocessing on the fundus image to obtain a preprocessed fundus image.
In some examples, the image preprocessing module 200 may crop the fundus image. In general, since the fundus images acquired by the acquisition module 100 may vary in image format, size, and the like, it is necessary to crop each fundus image so that it is converted into a fixed standard form, i.e., the same format and a uniform size. For example, in some examples, the pre-processed fundus images may be unified to 512×512 or 1024×1024 pixels.
In some examples, the image preprocessing module 200 may normalize the fundus image. In some examples, the normalization process may include coordinate centering, scaling normalization, and the like. This can overcome the differences between fundus images and improve the performance of the artificial neural network.
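The cropping and normalization steps above can be sketched as follows. This is a minimal illustration with hypothetical function and parameter names (the actual module may differ), using a simple nearest-neighbour resize as a stand-in for a production-quality one:

```python
import numpy as np

def preprocess_fundus(image, out_size=512):
    """Center-crop a fundus image (H, W, 3) to a square, resize it to
    out_size x out_size, and standardize intensities per channel.
    A minimal sketch; real preprocessing would also denoise, etc."""
    h, w = image.shape[:2]
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    cropped = image[top:top + s, left:left + s]
    # nearest-neighbour resize to out_size x out_size (illustrative only)
    idx = (np.arange(out_size) * s / out_size).astype(int)
    resized = cropped[idx][:, idx]
    # scale intensities to [0, 1], then zero-center and unit-scale per channel
    x = resized.astype(np.float32) / 255.0
    x = (x - x.mean(axis=(0, 1))) / (x.std(axis=(0, 1)) + 1e-8)
    return x
```

The same routine can be applied to the annotation images so that their size always matches the pre-processed fundus image.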
In addition, in some examples, the image preprocessing module 200 may include performing noise reduction, graying processing, and the like on the fundus image. Thus, the characteristics of the glaucoma image can be highlighted.
Additionally, in some examples, the image preprocessing module 200 may zoom, flip, translate, etc. the fundus image. In this case, the amount of data for training the artificial neural network can be increased, and thus the generalization ability of the artificial neural network can be improved.
In some examples, fundus images may also be used directly for artificial neural network training without image preprocessing.
In other examples, the fundus image may be preprocessed and then labeled.
Additionally, in some examples, the image preprocessing module 200 may acquire the annotation image output by the acquisition module 100. Preprocessing the fundus image may also include preprocessing the annotation image. Therefore, the size of the marked image and the size of the preprocessed fundus image can be kept consistent all the time, and further the artificial neural network training is facilitated.
Fig. 4 shows a schematic diagram of a blood vessel image formed by blood vessel region detection on a pre-processed fundus image according to an embodiment of the present disclosure, in which fig. 4 (a) shows a pre-processed fundus image P10 and fig. 4 (b) shows a blood vessel image P40.
In this embodiment, as described above, the training system 10 may include a vascular region detection module 300. The blood vessel region detection module 300 may be used to perform blood vessel region detection on the pre-processed fundus image to form a blood vessel image.
As an example of the blood vessel image formation, as shown in fig. 4, blood vessel region detection is performed on the pre-processed fundus image P10 to form a blood vessel image P40 containing a blood vessel region A4.
In some examples, the pre-processed fundus image P10 may be subjected to vessel region detection based on Frangi (multi-scale vesselness) filtering to form the vessel image P40. Specifically, Frangi filtering is a vessel enhancement filtering algorithm constructed based on the Hessian matrix.
In Frangi filtering, the pre-processed fundus image P10 is first converted into a grayscale image and denoised using Gaussian filtering. Next, the Hessian matrix is calculated. The Hessian matrix is the square matrix of second partial derivatives of a scalar function and describes the local curvature of a multi-variable function. Its basic form is shown in the following formula (1):

$$H = \begin{bmatrix} f_{xx} & f_{xy} \\ f_{yx} & f_{yy} \end{bmatrix} \quad (1)$$
where the second partial derivative in the x direction is $f_{xx} = \partial^2 f/\partial x^2$, the second partial derivative in the y direction is $f_{yy} = \partial^2 f/\partial y^2$, and the mixed partial derivative in the x, y directions is $f_{xy} = \partial^2 f/\partial x\,\partial y$. Since $f_{xy} = f_{yx}$, H is a real symmetric matrix, and its two eigenvalues $\lambda_1$, $\lambda_2$ can be used to construct the enhancement filter. In the two-dimensional pre-processed fundus image, the eigenvalues $\lambda_1$, $\lambda_2$ can be calculated by:

$$\lambda_{1,2} = \frac{(f_{xx} + f_{yy}) \pm \sqrt{(f_{xx} - f_{yy})^2 + 4 f_{xy}^2}}{2}$$
Since the second partial derivatives are relatively sensitive to noise, Gaussian smoothing is applied first when solving the Hessian matrix. The vessel response function of a pixel point p of the pre-processed fundus image P10 is V(σ, p):

$$V(\sigma, p) = \begin{cases} 0, & \lambda_2 > 0 \\ \exp\!\left(-\dfrac{R_B^2}{2\beta^2}\right)\left(1 - \exp\!\left(-\dfrac{S^2}{2c^2}\right)\right), & \text{otherwise} \end{cases}$$

where σ is the scale factor, i.e., the standard deviation of the Gaussian smoothing used when solving the Hessian matrix; β may be set to 0.5 and serves to distinguish line-like from blob-like objects; c is a parameter controlling the overall smoothness of the line response; and $R_B = \lambda_1/\lambda_2$ and $S = \sqrt{\lambda_1^2 + \lambda_2^2}$ are defined by the eigenvalues $\lambda_1$, $\lambda_2$.
The output of the filter is maximized when the scale factor σ is closest to the actual width of the vessel. The maximum response of each pixel point p in the pre-processed fundus image P10 over the different scale factors can therefore be taken as the final vessel response, as shown in the following formula (9):

$$V(p) = \max_{\sigma_{\min} \le \sigma \le \sigma_{\max}} V(\sigma, p) \quad (9)$$

where $\sigma_{\min}$ is the minimum value of the scale factor σ and $\sigma_{\max}$ is its maximum value.
Finally, the detected blood vessel area A4 may be obtained by setting a threshold T and taking the positions where the vessel response is greater than T (see fig. 4 (b)).
In the present disclosure, the Frangi algorithm is used to automatically detect the blood vessel region in the pre-processed fundus image, yielding a blood vessel image containing the blood vessel region. In this case, the blood vessel region in the blood vessel image is more salient than in the pre-processed fundus image, which facilitates subsequent identification and processing of the vascular region.
Examples of the present disclosure are not limited thereto; in other examples, vessel region detection on the pre-processed fundus image may be implemented using a matched filtering algorithm, an adaptive contrast enhancement algorithm, a two-dimensional Gabor filtering algorithm, another type of artificial neural network, etc. A suitable algorithm or artificial neural network can thus be selected according to different requirements.
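A minimal Hessian-based vesselness filter in the spirit of the Frangi procedure of formulas (1)-(9) can be sketched as follows. The parameter values (`beta`, `c`, the scale set) and the assumption of dark vessels on a bright background are illustrative choices, not taken from the original disclosure:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness(image, sigmas=(1, 2, 3), beta=0.5, c=15.0):
    """Minimal 2-D Frangi-style vesselness, following formulas (1)-(9).
    Assumes dark vessels on a bright background (grayscale float image)."""
    img = image.astype(np.float64)
    best = np.zeros_like(img)
    for sigma in sigmas:
        # Hessian from Gaussian-smoothed second derivatives, scale-normalized
        smoothed = gaussian_filter(img, sigma)
        fx, fy = np.gradient(smoothed)
        fxx, fxy = np.gradient(fx)
        _, fyy = np.gradient(fy)
        fxx, fxy, fyy = sigma**2 * fxx, sigma**2 * fxy, sigma**2 * fyy
        # eigenvalues of the 2x2 Hessian; order so that |lam1| <= |lam2|
        tmp = np.sqrt((fxx - fyy) ** 2 + 4 * fxy**2)
        mu1, mu2 = (fxx + fyy + tmp) / 2, (fxx + fyy - tmp) / 2
        lam1 = np.where(np.abs(mu1) <= np.abs(mu2), mu1, mu2)
        lam2 = np.where(np.abs(mu1) > np.abs(mu2), mu1, mu2)
        rb2 = (lam1 / (lam2 + 1e-10)) ** 2      # blob-ness measure R_B^2
        s2 = lam1**2 + lam2**2                  # structure-ness S^2
        v = np.exp(-rb2 / (2 * beta**2)) * (1 - np.exp(-s2 / (2 * c**2)))
        v[lam2 < 0] = 0.0       # dark ridges have lam2 > 0; suppress others
        best = np.maximum(best, v)  # max response over scales, formula (9)
    return best
```

Thresholding `vesselness(...)` at some T then gives a binary vessel mask, as described above for the blood vessel area A4.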
Fig. 5 illustrates a schematic formation of a first mixing weight profile according to an embodiment of the present disclosure. Fig. 5 (a) shows a schematic view of the weighted disc labeling image P21, fig. 5 (b) shows a schematic view of the weighted blood vessel image P41, and fig. 5 (c) shows a schematic view of generating the first mixture weight distribution map P3 based on the weighted disc labeling image P21 and the weighted blood vessel image P41.
In this embodiment, as described above, training system 10 may include a mixing weight generation module 400. The blending weight generation module 400 may be configured to blend-weight the disc labeling image and the vessel image to generate a blended weight distribution map.
In some examples, as shown in fig. 5, the mixed weight distribution map may be a first mixed weight distribution map P3, obtained based on the disc labeling image P20 (see fig. 3 (b)) and the blood vessel image P40 (see fig. 4 (b)). For example, the disc labeling image P20 may be weighted to generate a weighted disc labeling image P21, the blood vessel image P40 may be weighted to generate a weighted blood vessel image P41, and the first mixed weight distribution map P3 may be generated based on the weighted disc labeling image P21 and the weighted blood vessel image P41. In this case, training the artificial neural network with the mixed weight distribution map allows it to balance the blood vessel region and the optic disc region during training, optimize the learning of small blood vessel courses, and suppress the imbalance of positive and negative samples.
In some examples, the mixed weighting process is as follows: let the weight of the optic disc region be a first weight, the weight of the non-optic-disc region a second weight, the weight of the vessel region a third weight, and the weight of the non-vessel region a fourth weight; then the weight of the vessel region within the optic disc is the first weight multiplied by the third weight, the weight of the non-vessel region within the optic disc is the first weight multiplied by the fourth weight, the weight of the vessel region outside the optic disc is the second weight multiplied by the third weight, and the weight of the non-vessel region outside the optic disc is the second weight multiplied by the fourth weight. The mixed weighting process is described in detail below with reference to the drawings.
In some examples, as shown in fig. 5, during the mixed weighting process the weight of the optic disc area A1 is set to the first weight w1, and the weight of the non-optic-disc area A1' to the second weight w2 (see fig. 5 (a)). The weight of the blood vessel area A4 is the third weight v1, and that of the non-vascular region A4' is the fourth weight v2 (see fig. 5 (b)).
In some examples, when performing the mixed weighting, the weight of the optic disc region may be made greater than that of the non-optic-disc region, and the weight of the vascular region greater than that of the non-vascular region. For example, as shown in fig. 5, the weight of the disc area A1 may be made larger than the weight of the non-disc area A1', and the weight of the blood vessel area A4 larger than the weight of the non-vascular area A4'; that is, w1 > w2 and v1 > v2. In this disclosure, the optic disc region may also be referred to as the intra-optic-disc region, and the non-optic-disc region as the extra-optic-disc region.
In some examples, the first mixed weight distribution map P3 obtained based on the disc labeling image P20 and the blood vessel image P40 includes four mixed regions: a blood vessel region A30 within the first optic disc, a non-vascular region A30' within the first optic disc, a blood vessel region A31 outside the first optic disc, and a non-vascular region A31' outside the first optic disc (see fig. 5 (c)). In this case, for example, the weight of the blood vessel region A30 within the first optic disc may be obtained by multiplying the weight of the disc region A1 by the weight of the blood vessel region A4. Thus, glaucoma image features of each region in the fundus image can be identified more accurately.
In some examples, as described above, the weight of the optic disc area A1 is the first weight w1, the weight of the non-optic-disc area A1' is the second weight w2, the weight of the blood vessel area A4 is the third weight v1, and the weight of the non-vascular region A4' is the fourth weight v2. The weight of the vessel region A30 within the first optic disc is then the first weight multiplied by the third weight, i.e., w1·v1; the weight of the non-vascular region A30' within the first optic disc is the first weight multiplied by the fourth weight, i.e., w1·v2; the weight of the vessel region A31 outside the first optic disc is the second weight multiplied by the third weight, i.e., w2·v1; and the weight of the non-vascular region A31' outside the first optic disc is the second weight multiplied by the fourth weight, i.e., w2·v2 (see fig. 5 (c)). Thus, the weights of the four mixed regions can be obtained from the weights of the disc region, the non-disc region, the vascular region, and the non-vascular region.
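The mixed weighting just described reduces to a per-pixel product of two weight maps. The following sketch illustrates this with hypothetical weight values (w1 = v1 = 2, w2 = v2 = 1) and an optional eyeball-contour mask whose outside (the background region) is given zero weight:

```python
import numpy as np

def mixed_weight_map(disc_mask, vessel_mask, w1=2.0, w2=1.0, v1=2.0, v2=1.0,
                     fundus_mask=None):
    """Build a per-pixel weight map from a binary optic-disc mask and a
    binary vessel mask: each pixel's weight is the product of its disc
    weight (w1 inside / w2 outside) and its vessel weight (v1 inside /
    v2 outside). The example weight values are illustrative only."""
    disc_w = np.where(disc_mask, w1, w2)
    vessel_w = np.where(vessel_mask, v1, v2)
    weights = disc_w * vessel_w
    if fundus_mask is not None:
        # zero out the background region outside the eyeball contour
        weights = np.where(fundus_mask, weights, 0.0)
    return weights
```

With w1 > w2 and v1 > v2, the vessel region within the optic disc receives the largest weight and the non-vascular region outside the disc the smallest, matching the four mixed regions described above.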
In some examples, the blending weighting profile may be partitioned based on the eye contours as the blending weighting process is performed. In this case, the mixed weight profile may include a fundus region and a background region. Wherein the fundus region may be a region within the outline of the eyeball. The fundus region may include four blend regions of a vascular region within the first optic disc, a non-vascular region within the first optic disc, a vascular region outside the first optic disc, and a non-vascular region outside the first optic disc of the first blend weight profile. The background area may be an area outside the outline of the eyeball. The background region may be a partial region of the non-vascular region outside the first optic disc. When training the artificial neural network, the weight of the background area can be zero. Therefore, interference of the background area on extraction of glaucoma image features of the artificial neural network can be reduced in the training process.
In some examples, as described above, the optic disc region of the mixed weight distribution map carries a weight; since the optic disc region contains the optic cup region, the optic cup region of the mixed weight distribution map therefore also carries a weight.
In other examples, the blending weight generation module 400 may weight based on the disc-labeling image alone.
In other examples, the hybrid weight generation module 400 may weight based on the vessel image alone.
Fig. 6 shows a block diagram of a hybrid weight generation module of a training system according to an embodiment of the present disclosure. Fig. 7 shows a schematic diagram of a disc expansion image according to an embodiment of the present disclosure, in which fig. 7 (a) shows a disc labeling image P20 and fig. 7 (b) shows a disc expansion image P23.
In some examples, the mixing weight generation module 400 may include an optic disc expansion module 410 (see fig. 6).
In some examples, in the disc expansion module 410, a disc region in the disc labeling image may be expanded to form a disc expanded image. The disc-expansion image includes a disc-expansion area. The optic disc expansion area may include a optic disc area and a optic disc vicinity area.
For example, as shown in fig. 7 (a) and 7 (b), the disc area in the disc labeling image P20 is expanded to form a disc expansion image P23. The disc expansion image P23 includes a disc expansion area A3, which corresponds to the disc area A1 of the disc labeling image P20. In fig. 7 (b), A3' is the non-disc-expansion area. The optic disc expansion area may include the optic disc area and the area near the optic disc. Since the area near the optic disc affects segmentation of the optic cup and optic disc, and thus the extraction of glaucoma image features, performing subsequent processing based on the optic disc expansion image obtained through dilation can improve the accuracy of glaucoma image feature extraction.
Fig. 8 shows a schematic diagram of a blood vessel expansion image according to an embodiment of the present disclosure, wherein fig. 8 (a) shows a blood vessel image P40 and fig. 8 (b) shows a blood vessel expansion image P43.
In some examples, the mixing weight generation module 400 may include a vessel dilation module 420 (see fig. 6).
In some examples, in the vessel dilation module 420, the blood vessel region in a blood vessel image may be dilated to form a vessel dilation image. The vessel dilation image includes a vessel dilation region, which may include the blood vessel region and the area near the blood vessels. For example, as shown in fig. 8 (a) and 8 (b), the blood vessel region A4 in the blood vessel image P40 may be dilated to form a vessel dilation image P43. The vessel dilation image P43 includes a vessel dilation area A5, which corresponds to the blood vessel region A4 of the blood vessel image P40. In fig. 8 (b), A5' is the non-vessel-dilation region. In this case, errors in the vessel boundaries produced by the blood vessel detection algorithm can be mitigated by the dilation process.
Fig. 9 shows a schematic diagram of forming a second mixture weight distribution map according to an embodiment of the present disclosure, in which fig. 9 (a) shows a schematic diagram of weighting a disc dilation image P23, fig. 9 (b) shows a schematic diagram of weighting a vessel dilation image P43, and fig. 9 (c) shows a schematic diagram of generating a second mixture weight distribution map P4 based on the weighted disc dilation image P23 and the weighted vessel dilation image P43.
In other examples, the blending weight generation module 400 may be configured to blend-weight the optic disc dilation image and the vessel dilation image to generate a blending weight distribution map. That is, the blended weight profile may be generated from blending weighting of the optic disc dilation image and the vessel dilation image. In this case, training the artificial neural network based on the mixed weight distribution map generated from the expanded optic disc image and the expanded blood vessel image can reduce the error of the blood vessel detection algorithm on the blood vessel boundary.
For example, in some examples, as shown in fig. 9, the mixing weight profile may be a second mixing weight profile P4. Specifically, the disc dilation image P23 and the vessel dilation image P43 may be weighted separately (see fig. 9 (a) and 9 (b)), and the weighted disc dilation image P23 and the weighted vessel dilation image P43 may be mixed weighted to generate the second mixed weight distribution map P4.
In some examples, as shown in fig. 9, based on the four basic regions of the disc dilation region A3, the non-disc-dilation region A3', the vessel dilation region A5, and the non-vessel-dilation region A5', the second mixed weight distribution map P4 may include a blood vessel region A40 within the second optic disc, a non-vascular region A40' within the second optic disc, a blood vessel region A41 outside the second optic disc, and a non-vascular region A41' outside the second optic disc.
In some examples, the weight of the disc dilation area A3 may be a first weight w1', the weight of the non-disc-dilation area A3' a second weight w2', the weight of the vessel dilation area A5 a third weight v1', and the weight of the non-vessel-dilation region A5' a fourth weight v2'. The weight of the vessel region A40 within the second optic disc is then the first weight multiplied by the third weight, i.e., w1'·v1'; the weight of the non-vascular region A40' within the second optic disc is w1'·v2'; the weight of the vessel region A41 outside the second optic disc is w2'·v1'; and the weight of the non-vascular region A41' outside the second optic disc is w2'·v2' (see fig. 9 (c)).
In the present disclosure, the second mixed weight distribution map P4 differs from the first mixed weight distribution map P3 in that the first mixed weight distribution map P3 is generated based on the disc labeling image and the blood vessel image, whereas the second mixed weight distribution map P4 is generated based on the disc dilation image and the vessel dilation image. Therefore, for the mixed weighting process of the second mixed weight distribution map P4, reference may be made to that of the first mixed weight distribution map P3, which will not be described again.
Examples of the present disclosure are not limited thereto, and for example, the mixed weight distribution map may be generated based on the disc labeling image and the blood vessel inflation image, or the mixed weight distribution map may be generated based on the disc inflation image and the blood vessel image.
In this embodiment, as described above, training system 10 may include model training module 500. Model training module 500 may include an artificial neural network. The model training module 500 may train the artificial neural network based on the pre-processed fundus image, the annotation image, and the mixed weight profile. Wherein the pre-processed fundus image may be generated by the image pre-processing module 200. The annotation image may be generated by the acquisition module 100. The mixing weight profile may be generated by the mixing weight generation module 400.
Specifically, the disc labeling image and/or the cup labeling image included in the labeling image may be used as a true value to predict each pixel point of the preprocessed fundus image, and the loss function weight (i.e., the coefficient of the loss function) of each pixel point in the preprocessed fundus image may be assigned by means of the mixed weight distribution map. Training the artificial neural network based on the loss function, and optimizing the output of the artificial neural network to obtain an optimal model of the artificial neural network. In this case, the artificial neural network has a good segmentation accuracy and generalization capability and can automatically extract glaucoma image features.
In some examples, as described above, the glaucoma image feature may be a cup-to-disc ratio (CDR), in which case the cup-to-disc ratio of the fundus image may be predicted based on an optimal model, and glaucoma lesions that may exist in the fundus image may be accurately identified.
Examples of the present disclosure are not limited thereto, and the artificial neural network may be replaced with another image feature extraction model. Preferably, UNet or a variant thereof may be employed as the artificial neural network for glaucoma image feature extraction.
In the present disclosure, a loss function may be used to calculate the loss and measure the quality of the model's predictions. The difference between the predicted value and the true value of the artificial-neural-network-based model on a single sample is referred to as the loss; the smaller the loss, the better the model. A single sample here refers to each pixel point in the pre-processed fundus image.
In some examples, the loss function may use a predefined loss function, which in some examples may be a cross entropy loss function, a Dice loss function, or the like. Wherein the cross entropy loss function is a function that measures the difference between the true distribution and the predicted distribution, and the Dice loss function is a set similarity measure function.
Specifically, taking the cross entropy loss function as an example, the loss function of each pixel point in the pre-processed fundus image is:

$$\mathrm{loss}_{i,j} = -\sum_{c} w_c \left[ y^c_{i,j} \log \hat{y}^c_{i,j} + \left(1 - y^c_{i,j}\right) \log\left(1 - \hat{y}^c_{i,j}\right) \right] \quad (10)$$

where c denotes the predicted category of each pixel point of the pre-processed fundus image, the two categories being optic cup and optic disc; (i, j) denotes the coordinates of a pixel point in the pre-processed fundus image; $y^c_{i,j}$ denotes the value of the pixel point with coordinates (i, j) in the cup labeling image or the disc labeling image, serving as the true value of the pixel point with coordinates (i, j); $\hat{y}^c_{i,j}$ denotes the predicted value of the pixel point with coordinates (i, j); and $w_c$ is the weight of each category.
In some examples, the weights of the loss functions of the individual pixel points in the pre-processed fundus image may be assigned using the mixed weight distribution map. As described above, the mixed weight distribution map may include a fundus region and a background region: the fundus region is the region within the eyeball contour and comprises the four mixed regions of the first mixed weight distribution map (the vascular region within the first optic disc, the non-vascular region within the first optic disc, the vascular region outside the first optic disc, and the non-vascular region outside the first optic disc), while the background region is the region outside the eyeball contour and is a partial region of the non-vascular region outside the first optic disc. In some examples, the disc region may be weighted with the first weight, the non-disc region with the second weight, the vessel region with the third weight, the non-vessel region with the fourth weight, and the background region with zero weight (i.e., the part of the non-vascular region outside the first optic disc belonging to the background region has zero weight). The value of each pixel in the mixed weight distribution map (i.e., the weight of the loss function of each pixel point in the pre-processed fundus image) is then given by the following formula (11):

$$w_{i,j} = \begin{cases} w_1 v_1, & p_{i,j} \in R_1 \\ w_1 v_2, & p_{i,j} \in R_2 \\ w_2 v_1, & p_{i,j} \in R_3 \\ w_2 v_2, & p_{i,j} \in R_4 \\ 0, & p_{i,j} \in R_5 \end{cases} \quad (11)$$

where $w_{i,j}$ is the weight of the pixel point with coordinates (i, j), $p_{i,j}$ is that pixel point, $w_1$ is the weight of pixel points within the optic disc (first weight), $w_2$ the weight of pixel points outside the optic disc (second weight), $v_1$ the weight of pixel points in the vascular region (third weight), $v_2$ the weight of pixel points in the non-vascular region (fourth weight), $R_1$ the set of pixel points of the vascular region within the optic disc, $R_2$ the set of the non-vascular region within the optic disc, $R_3$ the set of the vascular region outside the optic disc, $R_4$ the set of the non-vascular region outside the optic disc, and $R_5$ the set of pixel points of the background region.
In some examples, the loss function L of the artificial neural network may be obtained based on the weights of the loss functions of the individual pixel points:
L = ∑_{i,j} (w_{i,j} · loss_{i,j}) …… (12)
where w_{i,j} is the weight of the pixel point with coordinates (i, j) and loss_{i,j} is the loss function of the pixel point with coordinates (i, j). The artificial neural network can therefore be trained based on this loss function so as to optimize its output and obtain an optimal model.
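In code form, equation (12) is a single weighted sum over the pixel grid. The per-pixel loss is kept abstract here; any pixel-wise loss (e.g., cross-entropy) could be plugged in:

```python
import numpy as np

def weighted_total_loss(per_pixel_loss, weight_map):
    # Equation (12): L = sum over (i, j) of w_{i,j} * loss_{i,j}
    return float(np.sum(weight_map * per_pixel_loss))
```

Because background pixels carry zero weight, they contribute nothing to the total, which is how the weight map suppresses background interference during training.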
In some examples, the parameters of the artificial neural network may be optimized using gradient descent, adjusting the parameters along the direction in which the loss function decreases most rapidly. In this way, the training system 10 can be optimized through the coefficients in the loss function. In other examples, parameter optimization may be performed using stochastic gradient descent.
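The steepest-descent update described above can be illustrated on a toy one-parameter problem; the quadratic objective below is purely illustrative and not part of the patent:

```python
def gradient_descent(grad, w0, lr=0.1, steps=100):
    # repeatedly move opposite the gradient, i.e. along the
    # direction in which the loss decreases most rapidly
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

# minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3)
w_star = gradient_descent(lambda w: 2.0 * (w - 3.0), w0=0.0)
```

Each step shrinks the distance to the minimizer by a constant factor, so the iterate converges to w = 3; stochastic gradient descent applies the same update using gradients estimated from mini-batches.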
In some examples, the model training module 500 may extract glaucoma image features from the fundus image using the optimal model, thereby predicting lesions that may be present in the fundus image. In some examples, the fundus images in the test set may be identified using the trained artificial neural network, yielding an average identification accuracy of, for example, 90% or more. Thus, the training system 10 according to the present embodiment can achieve improved glaucoma lesion determination accuracy while taking fundus clinical conditions into account.
The training method of the present disclosure for artificial neural network-based glaucoma image feature extraction is described in detail below in conjunction with fig. 10. The training method for glaucoma image feature extraction based on the artificial neural network according to the present disclosure may be simply referred to as a training method. The training method according to the present disclosure is applied to the training system 10 described above. Fig. 10 shows a flowchart of a training method of artificial neural network-based glaucoma image feature extraction according to an embodiment of the present disclosure.
In this embodiment, the training method for artificial neural network-based glaucoma image feature extraction may include the following steps: preparing a fundus image and a labeling image (step S100), preprocessing the fundus image to form a preprocessed fundus image (step S200), detecting the blood vessel region to form a blood vessel image (step S300), forming a mixed weight distribution map based on the labeling image and the blood vessel image (step S400), and training an artificial neural network based on the preprocessed fundus image, the labeling image, and the mixed weight distribution map (step S500). In this case, because the artificial neural network is trained on the preprocessed fundus image, the labeling image, and the mixed weight distribution map, it can attend to both the blood vessel region and the optic disc region during training, optimize the learning of fine blood vessel courses, and suppress the imbalance between positive and negative samples. The accuracy of the artificial neural network in glaucoma image feature extraction can therefore be improved.
In step S100, a fundus image may be prepared. The fundus image may be an image of the fundus captured by a fundus camera or other fundus photographing apparatus; it may be a color fundus image in RGB, CMYK, Lab, grayscale, or a similar mode. For a specific description, reference may be made to the acquisition module 100, which is not repeated here.
In step S100, a labeling image may be prepared. In some examples, the labeling image may include an optic disc labeling image and an optic cup labeling image. The optic disc region and the optic cup region in the fundus image can be manually annotated by an experienced doctor to obtain the optic disc labeling image and the optic cup labeling image. In this way, the accuracy of labeling the optic disc region and the optic cup region can be improved. For a specific description, reference may be made to the acquisition module 100, which is not repeated here.
In step S200, the fundus image may be preprocessed to obtain a preprocessed fundus image. In some examples, the fundus image may be cropped, normalized, and so on during preprocessing. The fundus image can thereby be converted into a fixed standard form, overcoming the differences between fundus images and improving the performance of the artificial neural network. In some examples, the fundus image may be noise-reduced and grayscaled during preprocessing, so that the glaucoma image features are highlighted. In some examples, scaling, flipping, and translation may be applied to the fundus image during preprocessing, which increases the amount of training data and improves the generalization capability of the artificial neural network. For a specific description, reference may be made to the image preprocessing module 200, which is not repeated here.
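A minimal sketch of the cropping, normalization, and flip-augmentation steps is shown below. The center-square crop and [0, 1] intensity scaling are one plausible "fixed standard form"; the patent does not prescribe exact parameters:

```python
import numpy as np

def preprocess(img):
    """Crop the centered square region and scale intensities to [0, 1]."""
    h, w = img.shape[:2]
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    crop = img[top:top + s, left:left + s].astype(np.float32)
    rng = float(crop.max() - crop.min())
    return (crop - crop.min()) / rng if rng > 0 else crop

def augment(img):
    """Simple augmentation: the original plus its horizontal flip."""
    return [img, img[:, ::-1]]
```

Flipping and similar geometric transforms enlarge the effective training set without new annotations, which is the generalization benefit mentioned above.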
In step S300, blood vessel region detection may be performed on the preprocessed fundus image to form a blood vessel image containing the blood vessel region. In some examples, blood vessel region detection may be performed based on Frangi filtering to form the blood vessel image. In this case, the blood vessel region in the blood vessel image is more salient than in the preprocessed fundus image, which facilitates later identification and processing of the vascular region. In other examples, blood vessel region detection may also be implemented using a matched filtering algorithm, an adaptive contrast enhancement algorithm, a two-dimensional Gabor filtering algorithm, or another type of artificial neural network. A suitable algorithm or artificial neural network can thus be selected for blood vessel region detection according to different requirements. For details, reference may be made to the blood vessel region detection module 300, which is not repeated here.
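To give a flavor of Hessian-based vesselness filtering in the spirit of Frangi's method, here is a deliberately simplified single-scale NumPy sketch. The real Frangi filter builds a Gaussian scale space and takes the maximum response over scales; here the Hessian comes from raw finite differences, and the parameters beta and c are illustrative:

```python
import numpy as np

def vesselness(img, beta=0.5, c=0.25):
    """Single-scale Frangi-style vesselness for dark tubular structures.

    Illustrative sketch only: no Gaussian smoothing / multi-scale search.
    """
    gy, gx = np.gradient(img.astype(np.float64))
    hyy, hyx = np.gradient(gy)   # second derivatives of the image
    hxy, hxx = np.gradient(gx)
    # eigenvalues of the symmetric 2x2 Hessian at every pixel
    tmp = np.sqrt(((hxx - hyy) / 2.0) ** 2 + hxy ** 2)
    mean = (hxx + hyy) / 2.0
    l1, l2 = mean - tmp, mean + tmp
    # order by magnitude so that |lam1| <= |lam2|
    swap = np.abs(l1) > np.abs(l2)
    lam1 = np.where(swap, l2, l1)
    lam2 = np.where(swap, l1, l2)
    lam2_safe = np.where(lam2 == 0, 1.0, lam2)
    rb = lam1 / lam2_safe                 # "blobness" ratio
    s2 = lam1 ** 2 + lam2 ** 2            # structureness
    v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-s2 / (2 * c ** 2)))
    return np.where(lam2 > 0, v, 0.0)     # dark vessels: lam2 > 0
```

The response is high along elongated dark structures (one small and one large-magnitude eigenvalue) and near zero in flat background, which is why a thresholded vesselness map can serve as the blood vessel image.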
In step S400, the optic disc labeling image obtained in step S100 and the blood vessel image obtained in step S300 may be mixed-weighted to generate a mixed weight distribution map. In this case, training the artificial neural network with the mixed weight distribution map allows it to attend to both the blood vessel region and the optic disc region during training, optimize the learning of fine blood vessel courses, and suppress the imbalance between positive and negative samples.
In step S400, the mixed weight distribution map may include the four mixed regions of the intra-optic-disc blood vessel region, the intra-optic-disc non-blood-vessel region, the extra-optic-disc blood vessel region, and the extra-optic-disc non-blood-vessel region. In some examples, the mixed weight distribution map may include a fundus region and a background region, and the weight of the background region may be set to zero, thereby reducing the interference of the background region with the extraction of glaucoma image features by the artificial neural network during training. For details, reference may be made to the mixing weight generation module 400, which is not repeated here.
In step S400, in some examples, the disc region may be weighted with a first weight, the non-disc region with a second weight, the vessel region with a third weight, and the non-vessel region with a fourth weight. The weight of the vessel region within the disc is the first weight multiplied by the third weight. The weights of the non-vascular regions within the optic disc are the first weight multiplied by the fourth weight. The weights of the vessel regions outside the optic disc are the second weight multiplied by the third weight. The weights of the non-vascular regions outside the optic disc are the second weight multiplied by the fourth weight. Thus, the weights of the in-disc blood vessel region, the in-disc non-blood vessel region, the out-of-disc blood vessel region, and the out-of-disc non-blood vessel region can be obtained from the weights of the disc region, the non-disc region, the blood vessel region, and the non-blood vessel region, respectively. In some examples, the weight of the optic disc region may be made greater than the weight of the non-optic disc region, and the weight of the vascular region may be made greater than the weight of the non-vascular region. For details, reference may be made to the mixing weight generation module 400, which is not described herein.
In step S400, in other examples, the optic disc region in the optic disc labeling image may be dilated to form an optic disc expanded image containing the optic disc region. Because the area near the optic disc affects the segmentation of the optic cup and optic disc, and in turn the extraction of glaucoma image features, the optic disc expanded image is obtained through dilation so that subsequent processing can be based on it, thereby improving the accuracy of glaucoma image feature extraction. For details, reference may be made to the disc expansion module 410, which is not repeated here.
In step S400, in other examples, a blood vessel region in the blood vessel image may be dilated to form a blood vessel dilation image including the blood vessel region. In this case, an error in detecting the boundary of the blood vessel based on the blood vessel detection algorithm can be reduced by the expansion process. Details may be found in the vessel expansion module 420, and will not be described here.
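The dilation used for both the optic disc mask and the vessel mask can be sketched with a full 3×3 structuring element in plain NumPy (morphology libraries such as scipy.ndimage provide the same operation; this is just an explicit version):

```python
import numpy as np

def dilate3x3(mask, iterations=1):
    """Binary dilation with a full 3x3 structuring element."""
    m = mask.astype(bool)
    for _ in range(iterations):
        p = np.pad(m, 1)  # pad with False so the border dilates correctly
        m = (p[1:-1, 1:-1]
             | p[:-2, 1:-1] | p[2:, 1:-1]   # up, down
             | p[1:-1, :-2] | p[1:-1, 2:]   # left, right
             | p[:-2, :-2] | p[:-2, 2:]     # diagonals
             | p[2:, :-2] | p[2:, 2:])
    return m
```

Growing the vessel mask by one or two pixels in this way absorbs small boundary errors of the vessel detection algorithm, as described above.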
In step S400, in other examples, the above-described disc dilation image and vessel dilation image may be hybrid weighted to generate a hybrid weight distribution map. In this case, using the hybrid weight distribution map generated based on the expanded optic disc image and the expanded blood vessel image to train the artificial neural network can reduce the error of the blood vessel detection algorithm on the blood vessel boundary. For details, reference may be made to the mixing weight generation module 400, which is not described herein.
In step S500, the artificial neural network may be trained based on the preprocessed fundus image, the labeling image, and the mixed weight distribution map. The preprocessed fundus image may be generated by step S200, the labeling image by step S100, and the mixed weight distribution map by step S400. In some examples, the loss-function weights of the individual pixel points in the preprocessed fundus image, i.e., the coefficients of the loss function, may be assigned by means of the mixed weight distribution map. The artificial neural network is then trained based on this loss function, and its output is optimized to obtain an optimal model of the artificial neural network. In this case, the artificial neural network has good segmentation accuracy and generalization capability and can automatically extract glaucoma image features. For details, see the model training module 500, which is not repeated here.
While the disclosure has been described in detail in connection with the drawings and embodiments, it should be understood that the foregoing description is not intended to limit the disclosure in any way. Modifications and variations of the present disclosure may be made as desired by those skilled in the art without departing from the true spirit and scope of the disclosure, and such modifications and variations fall within the scope of the disclosure.
Claims (8)
1. A training method for glaucoma image feature extraction based on an artificial neural network is characterized in that,
comprising the following steps:
a fundus image and a labeling image are prepared,
the labeling image comprises an optic disc labeling image labeling an optic disc region and an optic cup labeling image labeling an optic cup region;
preprocessing the fundus image to obtain a preprocessed fundus image, and generating a blood vessel image containing a blood vessel region according to a blood vessel detection result;
performing mixed weighting on the optic disc labeling image and the blood vessel image to generate a mixed weight distribution map; and
Training an artificial neural network based on the pre-processed fundus image, the annotation image and the mixed weight profile,
when the mixed weighting is carried out, the weight of the optic disc region is greater than the weight of the non-optic-disc region, and the weight of the blood vessel region is greater than the weight of the non-blood-vessel region; the mixed weight distribution map comprises an intra-optic-disc blood vessel region, an intra-optic-disc non-blood-vessel region, an extra-optic-disc blood vessel region, and an extra-optic-disc non-blood-vessel region;
the weight of the optic disc region is made a first weight, the weight of the non-optic-disc region a second weight, the weight of the blood vessel region a third weight, and the weight of the non-blood-vessel region a fourth weight; the weight of the blood vessel region within the optic disc is the first weight multiplied by the third weight, the weight of the non-blood-vessel region within the optic disc is the first weight multiplied by the fourth weight, the weight of the blood vessel region outside the optic disc is the second weight multiplied by the third weight, and the weight of the non-blood-vessel region outside the optic disc is the second weight multiplied by the fourth weight.
2. The training method of claim 1, wherein,
expanding the optic disc region in the optic disc labeling image to form an optic disc expanded image;
expanding the vessel region in the vessel image to form a vessel expanded image;
and performing mixed weighting on the optic disc expanded image and the blood vessel expanded image to generate the mixed weight distribution map.
3. The training method of claim 1, wherein,
and in the training process, acquiring coefficients of a loss function in the artificial neural network based on the mixed weight distribution map, and training the artificial neural network based on the loss function.
4. The training method of claim 1, wherein,
vessel region detection is performed based on Frangi filtering to form the vessel image.
5. A training system for glaucoma image feature extraction based on an artificial neural network is characterized in that,
comprising the following steps:
an acquisition module, which acquires a fundus image and a labeling image, wherein the labeling image comprises an optic disc labeling image labeling an optic disc region and an optic cup labeling image labeling an optic cup region;
An image preprocessing module for preprocessing the fundus image to obtain a preprocessed fundus image;
a blood vessel region detection module that performs blood vessel region detection on the pre-processed fundus image to form a blood vessel image;
a mixing weight generation module that performs mixed weighting on the optic disc labeling image and the blood vessel image to generate a mixed weight distribution map; and
a model training module for training the artificial neural network based on the preprocessed fundus image, the labeling image and the mixed weight distribution diagram,
when the mixed weighting is carried out, the weight of the optic disc region is greater than the weight of the non-optic-disc region, and the weight of the blood vessel region is greater than the weight of the non-blood-vessel region; the mixed weight distribution map comprises an intra-optic-disc blood vessel region, an intra-optic-disc non-blood-vessel region, an extra-optic-disc blood vessel region, and an extra-optic-disc non-blood-vessel region;
the weight of the optic disc region is made a first weight, the weight of the non-optic-disc region a second weight, the weight of the blood vessel region a third weight, and the weight of the non-blood-vessel region a fourth weight; the weight of the blood vessel region within the optic disc is the first weight multiplied by the third weight, the weight of the non-blood-vessel region within the optic disc is the first weight multiplied by the fourth weight, the weight of the blood vessel region outside the optic disc is the second weight multiplied by the third weight, and the weight of the non-blood-vessel region outside the optic disc is the second weight multiplied by the fourth weight.
6. The training system of claim 5, wherein,
expanding the optic disc area in the optic disc labeling image to form an optic disc expanded image;
expanding the vessel region in the vessel image to form a vessel expanded image;
and performing mixed weighting on the optic disc expanded image and the blood vessel expanded image to generate the mixed weight distribution map.
7. The training system of claim 5, wherein,
and in the training process, acquiring coefficients of a loss function in the artificial neural network based on the mixed weight distribution map, and training the artificial neural network based on the loss function.
8. The training system of claim 5, wherein,
vessel region detection is performed based on Frangi filtering to form the vessel image.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311806582.8A CN117788407A (en) | 2019-12-04 | 2020-07-18 | Training method for glaucoma image feature extraction based on artificial neural network |
CN202311798837.0A CN117764957A (en) | 2019-12-04 | 2020-07-18 | Glaucoma image feature extraction training system based on artificial neural network |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2019112287255 | 2019-12-04 | ||
CN201911228725 | 2019-12-04 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311806582.8A Division CN117788407A (en) | 2019-12-04 | 2020-07-18 | Training method for glaucoma image feature extraction based on artificial neural network |
CN202311798837.0A Division CN117764957A (en) | 2019-12-04 | 2020-07-18 | Glaucoma image feature extraction training system based on artificial neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113012093A CN113012093A (en) | 2021-06-22 |
CN113012093B true CN113012093B (en) | 2023-12-12 |
Family
ID=76383107
Family Applications (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010701373.7A Active CN113011450B (en) | 2019-12-04 | 2020-07-18 | Training method, training device, recognition method and recognition system for glaucoma recognition |
CN202311798837.0A Pending CN117764957A (en) | 2019-12-04 | 2020-07-18 | Glaucoma image feature extraction training system based on artificial neural network |
CN202310297714.2A Pending CN116824203A (en) | 2019-12-04 | 2020-07-18 | Glaucoma recognition device and recognition method based on neural network |
CN202311806582.8A Pending CN117788407A (en) | 2019-12-04 | 2020-07-18 | Training method for glaucoma image feature extraction based on artificial neural network |
CN202010702643.6A Active CN113012093B (en) | 2019-12-04 | 2020-07-18 | Training method and training system for glaucoma image feature extraction |
CN202310321384.6A Pending CN116343008A (en) | 2019-12-04 | 2020-07-18 | Glaucoma recognition training method and training device based on multiple features |
Family Applications Before (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010701373.7A Active CN113011450B (en) | 2019-12-04 | 2020-07-18 | Training method, training device, recognition method and recognition system for glaucoma recognition |
CN202311798837.0A Pending CN117764957A (en) | 2019-12-04 | 2020-07-18 | Glaucoma image feature extraction training system based on artificial neural network |
CN202310297714.2A Pending CN116824203A (en) | 2019-12-04 | 2020-07-18 | Glaucoma recognition device and recognition method based on neural network |
CN202311806582.8A Pending CN117788407A (en) | 2019-12-04 | 2020-07-18 | Training method for glaucoma image feature extraction based on artificial neural network |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310321384.6A Pending CN116343008A (en) | 2019-12-04 | 2020-07-18 | Glaucoma recognition training method and training device based on multiple features |
Country Status (1)
Country | Link |
---|---|
CN (6) | CN113011450B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113768460B (en) * | 2021-09-10 | 2023-11-14 | 北京鹰瞳科技发展股份有限公司 | Fundus image analysis system, fundus image analysis method and electronic equipment |
CN115578577A (en) * | 2021-10-11 | 2023-01-06 | 深圳硅基智能科技有限公司 | Eye ground image recognition device and method based on tight frame marks |
US11941809B1 (en) * | 2023-07-07 | 2024-03-26 | Healthscreen Inc. | Glaucoma detection and early diagnosis by combined machine learning based risk score generation and feature optimization |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101909141A (en) * | 2009-06-03 | 2010-12-08 | 晨星软件研发(深圳)有限公司 | Method and device for adjusting television image |
CN106408564A (en) * | 2016-10-10 | 2017-02-15 | 北京新皓然软件技术有限责任公司 | Depth-learning-based eye-fundus image processing method, device and system |
CN106651888A (en) * | 2016-09-28 | 2017-05-10 | 天津工业大学 | Color fundus image optic cup segmentation method based on multi-feature fusion |
CN106725295A (en) * | 2016-11-29 | 2017-05-31 | 瑞达昇科技(大连)有限公司 | A kind of miniature check-up equipment, device and its application method |
CN108520522A (en) * | 2017-12-31 | 2018-09-11 | 南京航空航天大学 | Retinal fundus images dividing method based on the full convolutional neural networks of depth |
WO2018215855A1 (en) * | 2017-05-23 | 2018-11-29 | Indian Institute Of Science | Automated fundus image processing techniques for glaucoma prescreening |
CN108921227A (en) * | 2018-07-11 | 2018-11-30 | 广东技术师范学院 | A kind of glaucoma medical image classification method based on capsule theory |
CN109658423A (en) * | 2018-12-07 | 2019-04-19 | 中南大学 | A kind of optic disk optic cup automatic division method of colour eyeground figure |
CN109829877A (en) * | 2018-09-20 | 2019-05-31 | 中南大学 | A kind of retinal fundus images cup disc ratio automatic evaluation method |
CN109919938A (en) * | 2019-03-25 | 2019-06-21 | 中南大学 | The optic disk of glaucoma divides map acquisition methods |
CN110110782A (en) * | 2019-04-30 | 2019-08-09 | 南京星程智能科技有限公司 | Retinal fundus images optic disk localization method based on deep learning |
CN110473188A (en) * | 2019-08-08 | 2019-11-19 | 福州大学 | A kind of eye fundus image blood vessel segmentation method based on Frangi enhancing and attention mechanism UNet |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9107617B2 (en) * | 2009-11-16 | 2015-08-18 | Agency For Science, Technology And Research | Obtaining data for automatic glaucoma screening, and screening and diagnostic techniques and systems using the data |
EP2888718B1 (en) * | 2012-08-24 | 2018-01-17 | Agency For Science, Technology And Research | Methods and systems for automatic location of optic structures in an image of an eye, and for automatic retina cup-to-disc ratio computation |
CN108122236B (en) * | 2017-12-18 | 2020-07-31 | 上海交通大学 | Iterative fundus image blood vessel segmentation method based on distance modulation loss |
CN109215039B (en) * | 2018-11-09 | 2022-02-01 | 浙江大学常州工业技术研究院 | Method for processing fundus picture based on neural network |
CN109658395B (en) * | 2018-12-06 | 2022-09-09 | 代黎明 | Optic disc tracking method and system and eye fundus collection device |
-
2020
- 2020-07-18 CN CN202010701373.7A patent/CN113011450B/en active Active
- 2020-07-18 CN CN202311798837.0A patent/CN117764957A/en active Pending
- 2020-07-18 CN CN202310297714.2A patent/CN116824203A/en active Pending
- 2020-07-18 CN CN202311806582.8A patent/CN117788407A/en active Pending
- 2020-07-18 CN CN202010702643.6A patent/CN113012093B/en active Active
- 2020-07-18 CN CN202310321384.6A patent/CN116343008A/en active Pending
Patent Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101909141A (en) * | 2009-06-03 | 2010-12-08 | 晨星软件研发(深圳)有限公司 | Method and device for adjusting television image |
CN106651888A (en) * | 2016-09-28 | 2017-05-10 | 天津工业大学 | Color fundus image optic cup segmentation method based on multi-feature fusion |
CN106408564A (en) * | 2016-10-10 | 2017-02-15 | 北京新皓然软件技术有限责任公司 | Depth-learning-based eye-fundus image processing method, device and system |
CN106725295A (en) * | 2016-11-29 | 2017-05-31 | 瑞达昇科技(大连)有限公司 | A kind of miniature check-up equipment, device and its application method |
WO2018215855A1 (en) * | 2017-05-23 | 2018-11-29 | Indian Institute Of Science | Automated fundus image processing techniques for glaucoma prescreening |
CN109598733A (en) * | 2017-12-31 | 2019-04-09 | 南京航空航天大学 | Retinal fundus images dividing method based on the full convolutional neural networks of depth |
CN108520522A (en) * | 2017-12-31 | 2018-09-11 | 南京航空航天大学 | Retinal fundus images dividing method based on the full convolutional neural networks of depth |
CN108921227A (en) * | 2018-07-11 | 2018-11-30 | 广东技术师范学院 | A kind of glaucoma medical image classification method based on capsule theory |
CN109829877A (en) * | 2018-09-20 | 2019-05-31 | 中南大学 | A kind of retinal fundus images cup disc ratio automatic evaluation method |
CN109658423A (en) * | 2018-12-07 | 2019-04-19 | 中南大学 | A kind of optic disk optic cup automatic division method of colour eyeground figure |
CN109919938A (en) * | 2019-03-25 | 2019-06-21 | 中南大学 | The optic disk of glaucoma divides map acquisition methods |
CN110110782A (en) * | 2019-04-30 | 2019-08-09 | 南京星程智能科技有限公司 | Retinal fundus images optic disk localization method based on deep learning |
CN110473188A (en) * | 2019-08-08 | 2019-11-19 | 福州大学 | A kind of eye fundus image blood vessel segmentation method based on Frangi enhancing and attention mechanism UNet |
Non-Patent Citations (2)
Title |
---|
Pardha Saradhi Mittapalli et al. Segmentation of optic disk and optic cup from digital fundus images for the assessment of glaucoma. Biomedical Signal Processing and Control. 2016, pp. 34-46. *
Li Yixuan. Research on automatic recognition of glaucoma morphological features based on deep learning. China Masters' Theses Full-text Database, Medicine and Health Sciences. 2019, E073-24. *
Also Published As
Publication number | Publication date |
---|---|
CN113011450A (en) | 2021-06-22 |
CN113012093A (en) | 2021-06-22 |
CN113011450B (en) | 2023-04-07 |
CN116343008A (en) | 2023-06-27 |
CN117764957A (en) | 2024-03-26 |
CN116824203A (en) | 2023-09-29 |
CN117788407A (en) | 2024-03-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Salazar-Gonzalez et al. | Segmentation of the blood vessels and optic disk in retinal images | |
CN113012093B (en) | Training method and training system for glaucoma image feature extraction | |
KR20200004841A (en) | System and method for guiding a user to take a selfie | |
Sinthanayothin | Image analysis for automatic diagnosis of diabetic retinopathy | |
David et al. | Retinal Blood Vessels and Optic Disc Segmentation Using U‐Net | |
Calimeri et al. | Optic disc detection using fine tuned convolutional neural networks | |
Li et al. | Vessel recognition of retinal fundus images based on fully convolutional network | |
Sharma et al. | Machine learning approach for detection of diabetic retinopathy with improved pre-processing | |
Sakthivel et al. | An automated detection of glaucoma using histogram features | |
Manchalwar et al. | Detection of cataract and conjunctivitis disease using histogram of oriented gradient | |
Shaik et al. | Glaucoma identification based on segmentation and fusion techniques | |
Gupta et al. | Comparative study of different machine learning models for automatic diabetic retinopathy detection using fundus image | |
Verma et al. | Machine learning classifiers for detection of glaucoma | |
Mookiah et al. | Computer aided diagnosis of diabetic retinopathy using multi-resolution analysis and feature ranking frame work | |
Li et al. | A deep-learning-enabled monitoring system for ocular redness assessment | |
KR102282334B1 (en) | Method for optic disc classification | |
Hussein et al. | Convolutional Neural Network in Classifying Three Stages of Age-Related Macula Degeneration | |
Taş et al. | Detection of retinal diseases from ophthalmological images based on convolutional neural network architecture. | |
Kumari et al. | Automated process for retinal image segmentation and classification via deep learning based cnn model | |
Mathias et al. | Categorization of diabetic retinopathy and identification of characteristics to assist effective diagnosis | |
Hussein et al. | Automatic classification of AMD in retinal images | |
Rajanna et al. | Neural networks with manifold learning for diabetic retinopathy detection | |
Akshita et al. | Diabetic retinopathy classification using deep convolutional neural network | |
Lavanya et al. | Retinal vessel feature extraction from fundus image using image processing techniques | |
Hubert et al. | Advances in Early Detection and Monitoring of Retinopathy in Preterm Infants Using CNN and MLP Models |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||