CN110110782A - Retinal fundus images optic disk localization method based on deep learning - Google Patents

Retinal fundus images optic disk localization method based on deep learning

Info

Publication number
CN110110782A
CN110110782A
Authority
CN
China
Prior art keywords
optic disk
fundus images
model
retinal fundus
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910359261.5A
Other languages
Chinese (zh)
Inventor
万程
俞秋丽
游齐靖
彭琦
徐佩园
华骁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Starway Intelligent Technology Co Ltd
Original Assignee
Nanjing Starway Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Starway Intelligent Technology Co Ltd filed Critical Nanjing Starway Intelligent Technology Co Ltd
Priority to CN201910359261.5A priority Critical patent/CN110110782A/en
Publication of CN110110782A publication Critical patent/CN110110782A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems

Abstract

The present invention discloses a retinal fundus image optic disk localization method based on deep learning, comprising: selecting a data set, preprocessing the data, selecting a model, and inputting the preprocessed fundus images together with the corresponding optic disk labels into the designed model to obtain an optic disk probability map of the same size as the original image. Pixels with larger values in the probability map are selected by thresholding as the candidate region of the optic disk, and the center of gravity of these points is computed and taken as the center of the optic disk, thereby localizing the optic disk. The invention achieves automatic localization of the optic disk in retinal fundus images with high accuracy and fast speed, and obtains good results when tested on multiple data sets.

Description

Retinal fundus images optic disk localization method based on deep learning
Technical field
The present invention relates to a retinal fundus image optic disk localization method based on deep learning, and belongs to the field of medical image processing.
Background technique
In fundus images, the optic disk is an important parameter in the diagnosis of many diseases. For example, in glaucoma patients the ratio of the optic cup to the optic disk is significantly larger than in normal subjects, and the simplest way to assess glaucoma is to measure the cup-to-disk ratio, which requires the segmentation of the optic disk to be as accurate as possible. In addition, because the optic disk is easy to identify, optic disk localization is often performed as a preliminary stage of disease detection algorithms to help locate other structures. However, fundus image acquisition is easily affected by various factors, leading to noise, blurred edges, occlusion and uneven illumination; together with the large differences between individuals, this increases the difficulty of accurately localizing and segmenting the optic disk.
In recent years, scholars at home and abroad have carried out a large amount of research on optic disk localization and segmentation. Optic disk localization methods can be roughly divided into two classes: those based on the appearance characteristics of the optic disk and those based on vessel characteristics. Appearance-based methods fall into two further categories: one exploits the fact that the optic disk region is brighter than other regions and pale yellow in color, while the other exploits the fact that the optic disk is approximately circular. These algorithms are generally simple and run very quickly, but their robustness is poor; they are strongly affected by factors such as lesions, illumination and noise, and their accuracy is low on complicated fundus images. Vessel-based detection methods exploit the fact that the optic disk is the origin from which the retinal vessels extend outward: the vessels there are thick and dense, and the center of the optic disk is often located at their intersection. Such methods are more accurate and robust, but the algorithms are comparatively complex and the time needed to process an image is long, so they cannot reach the running speed required in practice.
Summary of the invention
Object of the invention: in view of the deficiencies of the prior art, the present invention aims to provide a retinal fundus image optic disk localization method based on deep learning that localizes the optic disk in retinal fundus images accurately and quickly.
Technical solution: the retinal fundus image optic disk localization method based on deep learning of the present invention is characterized by comprising the following steps:
(1) selecting a retinal fundus image data set, dividing it into a training set and a test set, and applying the corresponding preprocessing operations to the data of the training set and the test set;
(2) constructing a convolutional neural network model based on the existing VGG network model, removing the fully connected layers of the VGG network model so that a two-dimensional feature map is used directly as the output; when training the model, all network parameters are given initial values by random initialization;
(3) inputting the images of the training set together with the corresponding optic disk labels into the convolutional neural network model for training, and then using the trained model to predict on the test set, obtaining a probability map of the optic disk position;
(4) selecting the regions with larger probability as candidate regions of the optic disk position by thresholding, and computing the center of gravity of these regions to finally determine the position of the optic disk.
The preprocessed fundus images and the corresponding optic disk labels are input into the designed end-to-end convolutional neural network, which outputs an optic disk probability map of the same size as the original image. The value of each pixel in this map indicates the probability that the corresponding position in the original image belongs to the optic disk; the larger the value, the more likely the model considers it to be part of the optic disk. Finally, the pixels with larger values in the probability map are selected by thresholding as the candidate region of the optic disk, and the center of gravity of these points is computed and taken as the center of the optic disk, thereby localizing the optic disk.
In a further improvement of the above scheme, the retinal fundus image data set comprises the public data sets ORIGA, MESSIDOR and STARE; half of the ORIGA images and half of the MESSIDOR images are used as the training set, and the remaining halves together with all the STARE images are used as the test set.
Further, in order to reduce the model design difficulty caused by data sets of different dimensions, reduce the computation cost, and speed up model training, the preprocessing operations in step (1) include subtracting the corresponding mean from every picture on each channel, scaling the pictures to a common size, and applying random rotation and translation to the images assigned to the training set to augment the image data.
Further, step (3) comprises: using the preprocessed training set images as the input of the convolutional neural network model, computing the loss between the forward-propagation output and the optic disk labels, and updating the parameters of the model by backpropagation, repeating the updates until the parameters of the convolutional neural network model can represent the features of the input images.
Further, the backbone of the convolutional neural network consists of 5 identical components connected by 2 × 2 max-pooling layers; each component consists of several convolutional layers with 3 × 3 kernels. The outputs of the three deepest components are selected and passed through corresponding deconvolution operations so that their size matches the size of the input; the deconvolved outputs are then concatenated along the fourth dimension to form the final output. The loss function of the convolutional neural network model is the cross entropy; the benefit of using cross entropy is that the weight updates are larger when the error is large and slower when the error is small. The cross entropy is defined as
L(θ) = −Σ_i [ y_i · log h_θ(x_i) + (1 − y_i) · log(1 − h_θ(x_i)) ],
where x denotes the input image, y denotes the label of the corresponding image, and θ denotes the parameters of the network; here the hypothesis function is expressed as h_θ(x) = 1 / (1 + exp(−θ·x)).
Further, in step (4) the threshold is set to T = 0.9, and the center of gravity of the candidate points is computed by the following formulas:
x_c = Σ_i p_i · x_i / Σ_i p_i,  y_c = Σ_i p_i · y_i / Σ_i p_i,
where p_i denotes the probability value at the corresponding coordinates and (x_i, y_i) are the coordinates of the corresponding pixel.
Beneficial effects: the invention discloses a retinal fundus image optic disk localization method based on deep learning. For the input images, an end-to-end, image-to-image deep learning network structure based on pixel-wise classification is designed and trained. Compared with traditional optic disk localization methods, the present invention balances accuracy and running speed, obtains good results on different data sets, and can provide preliminary processing and diagnostic assistance for the pathological diagnosis of retinal fundus images. It achieves automatic localization of the optic disk in retinal fundus images with high accuracy and fast speed, and obtains good results when tested on multiple data sets.
Detailed description of the invention
Fig. 1 is a structural schematic diagram of the optic disk localization network.
Fig. 2 is an original input image.
Fig. 3 shows samples of accurate localization from the test data set.
Fig. 4 shows samples of localization failures.
Specific embodiment
The technical solution of the present invention is described in detail below with reference to the accompanying drawings, but the protection scope of the present invention is not limited to the embodiments.
Embodiment 1: in the retinal fundus image optic disk localization method based on deep learning provided by the present invention, preprocessing operations are first applied to the fundus images; the preprocessed images are then used as the input of a deep fully convolutional neural network, which is trained to learn the distribution map of the optic disk position; finally, the possible optic disk points are selected by thresholding and their center of gravity is computed to obtain the final position of the optic disk. The algorithm flow is shown in Fig. 1.
The main flow includes selecting the data set, preprocessing the data, selecting and training the model, and determining the final localization result. First, the corresponding mean is subtracted from each of the RGB channels of the training pictures, and the images are then resized to a uniform size of 400 × 600, in order to reduce the model design difficulty caused by data sets of different dimensions, reduce the computation cost, and speed up model training. Then, the preprocessed fundus images and the corresponding optic disk labels are input into the designed end-to-end convolutional neural network to obtain an optic disk probability map of the same size as the original image. The value of each pixel in this map indicates the probability that the corresponding position in the original image belongs to the optic disk; the larger the value, the more likely the model considers it to be part of the optic disk. Finally, the pixels with larger values in the probability map are selected by thresholding as the candidate region of the optic disk, and the center of gravity of these points is computed and taken as the center of the optic disk, thereby localizing the optic disk. The invention achieves automatic localization of the optic disk in retinal fundus images with high accuracy and fast speed, and obtains good results when tested on multiple data sets.
The method of the present invention and its technical effects are illustrated below with a specific example.
Step 1: the training and test sets are selected from the three public data sets ORIGA, MESSIDOR and STARE. ORIGA contains a total of 650 images with a resolution of 3072 × 2048, all acquired with a high-resolution fundus camera to guarantee their quality. The MESSIDOR data set contains 1200 color fundus images, of which 540 are normal and 660 show disease, in three resolutions: 1440 × 960, 2240 × 1488 and 2304 × 1536. Both ORIGA and MESSIDOR contain corresponding optic disk labels; each label is a binary matrix of the same size as the original image that records whether each pixel belongs to the optic disk, saved in .mat format. The resolution of STARE is 700 × 605, and most of its images contain lesions or are of poor quality. When dividing the training and test sets, half of ORIGA and half of MESSIDOR are used for training, 880 images in total; the remaining halves together with all the STARE images, 1240 images in total, are used for testing.
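For concreteness, the split described above could be assembled as in the following sketch. The directory layout, file extensions, and the key name inside the .mat label files are illustrative assumptions and are not taken from the patent.

```python
# Minimal sketch of the train/test split: training halves of ORIGA and MESSIDOR,
# remaining halves plus all STARE images for testing.
import glob
import random

import numpy as np
from scipy.io import loadmat  # reads the .mat optic-disk labels of ORIGA / MESSIDOR

def load_pairs(image_dir, label_dir):
    """Pair every fundus image with its optic-disk label matrix (same size as the image)."""
    pairs = []
    for img_path in sorted(glob.glob(f"{image_dir}/*.jpg")):
        mat = loadmat(img_path.replace(image_dir, label_dir).replace(".jpg", ".mat"))
        mask = mat["mask"].astype(np.uint8)   # binary matrix: 1 = optic-disk pixel (key name assumed)
        pairs.append((img_path, mask))
    return pairs

random.seed(0)
origa = load_pairs("ORIGA/images", "ORIGA/labels")
messidor = load_pairs("MESSIDOR/images", "MESSIDOR/labels")
random.shuffle(origa)
random.shuffle(messidor)

train = origa[: len(origa) // 2] + messidor[: len(messidor) // 2]   # training halves
test = origa[len(origa) // 2:] + messidor[len(messidor) // 2:]      # remaining halves
stare_test_images = sorted(glob.glob("STARE/images/*.ppm"))         # STARE is used for testing only
```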
Step 2: preprocessing operations such as mean subtraction, resizing, translation and flipping are applied to the training data. First, the corresponding mean is subtracted from each channel of every picture, and the pictures are then resized to 400 × 600; the training set is then randomly flipped and translated to augment the data.
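A minimal sketch of this preprocessing step is given below, assuming (height, width) = (400, 600) and a translation range of ±20 pixels; neither of these details is fixed by the patent.

```python
# Preprocessing sketch: per-channel mean subtraction, resizing to 400 x 600,
# and random flip/translation for training-set augmentation.
import numpy as np
from PIL import Image

TARGET_H, TARGET_W = 400, 600  # assumed interpretation of "400 x 600"

def preprocess(path, channel_mean, augment=False, rng=np.random.default_rng()):
    img = Image.open(path).convert("RGB").resize((TARGET_W, TARGET_H), Image.BILINEAR)
    x = np.asarray(img, dtype=np.float32)
    x -= np.asarray(channel_mean, dtype=np.float32)    # subtract the per-channel mean
    if augment:
        if rng.random() < 0.5:                          # random horizontal flip
            x = x[:, ::-1, :]
        dx, dy = rng.integers(-20, 21, size=2)          # random translation (range assumed)
        x = np.roll(x, shift=(dy, dx), axis=(0, 1))
    return x
```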
Step 3: during model training, the preprocessed images are used as the input of the deep fully convolutional neural network; the loss between the forward-propagation output and the labels is computed, and the parameters of the model are updated by backpropagation, repeating the updates until the parameters of the model represent the features of the input images well.
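The training procedure of Step 3 could look like the following sketch. The patent reports using Caffe; PyTorch and the SGD settings shown here are illustrative assumptions only.

```python
# Sketch of the training loop: forward pass, pixel-wise cross-entropy loss, backpropagation.
import torch
import torch.nn as nn

def train(model, loader, epochs=50, lr=1e-3, device="cuda"):
    model.to(device)
    criterion = nn.BCEWithLogitsLoss()                  # pixel-wise cross entropy
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for epoch in range(epochs):
        for images, masks in loader:                    # masks: (N, 1, H, W) binary optic-disk labels
            images, masks = images.to(device), masks.to(device)
            logits = model(images)                      # forward propagation
            loss = criterion(logits, masks.float())     # loss against the optic-disk labels
            optimizer.zero_grad()
            loss.backward()                             # backpropagation
            optimizer.step()                            # parameter update
```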
Step 4: the trained model is used to predict on the samples of the test set, yielding a probability map indicating whether each pixel of the corresponding image belongs to the optic disk; finally, the probability map output by the deep network is thresholded and the center of gravity is computed to obtain the final optic disk position, thereby localizing the optic disk.
The backbone network consists of 5 identical components connected by 2 × 2 max-pooling layers. Each component consists of several convolutional layers with 3 × 3 kernels. Considering that the learned features become more abstract as the network gets deeper, the outputs of the three deepest components are selected and passed through corresponding deconvolution operations so that their size matches the size of the input. The deconvolved outputs are then concatenated along the fourth dimension to obtain the final output. The loss function of the network is the cross entropy; the benefit of using the cross-entropy function is that the weight updates are large when the error is large and slow when the error is small. The cross entropy is defined as
L(θ) = −Σ_i [ y_i · log h_θ(x_i) + (1 − y_i) · log(1 − h_θ(x_i)) ],
where x denotes the input image, y denotes the label of the corresponding image, and θ denotes the parameters of the network; here the hypothesis function can be expressed as h_θ(x) = 1 / (1 + exp(−θ·x)).
In addition, by adjusting the corresponding coefficients in the loss function, the difference in pixel counts between classes can be balanced, so the loss function can also be applied to class-imbalanced samples.
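The backbone and loss described above could be sketched as follows. The channel widths, the number of convolutions per block, the 32-channel deconvolution outputs, and the final 1 × 1 prediction layer are assumptions; the patent fixes only the overall structure (five blocks of 3 × 3 convolutions, 2 × 2 max pooling, deconvolution of the three deepest outputs, and channel-wise concatenation).

```python
# Sketch of a VGG-style fully convolutional backbone for optic-disk probability maps.
import torch
import torch.nn as nn
import torch.nn.functional as F

def vgg_block(in_ch, out_ch, n_convs=2):
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

class OpticDiskFCN(nn.Module):
    def __init__(self):
        super().__init__()
        widths = [64, 128, 256, 512, 512]                    # assumed VGG-like channel widths
        self.blocks = nn.ModuleList(
            [vgg_block(3 if i == 0 else widths[i - 1], w) for i, w in enumerate(widths)])
        self.pool = nn.MaxPool2d(2, 2)                       # 2 x 2 max pooling between blocks
        # one deconvolution per deep block (blocks 3, 4, 5), back towards the input resolution
        self.deconvs = nn.ModuleList(
            [nn.ConvTranspose2d(widths[i], 32, kernel_size=2 * f, stride=f, padding=f // 2)
             for i, f in zip((2, 3, 4), (4, 8, 16))])
        self.classify = nn.Conv2d(3 * 32, 1, kernel_size=1)  # pixel-wise logits

    def forward(self, x):
        feats, h = [], x
        for i, block in enumerate(self.blocks):
            if i > 0:
                h = self.pool(h)
            h = block(h)
            feats.append(h)
        ups = []
        for deconv, idx in zip(self.deconvs, (2, 3, 4)):
            u = deconv(feats[idx])
            # resize to the exact input size (handles inputs not divisible by 16)
            ups.append(F.interpolate(u, size=x.shape[2:], mode="bilinear", align_corners=False))
        fused = torch.cat(ups, dim=1)                        # concatenate along the channel axis
        return self.classify(fused)                          # logits; apply sigmoid for the probability map
```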
In step 4, the purpose of setting a threshold is to remove the points that the model judges inaccurately and to retain only the pixels most likely to belong to the optic disk position as the candidate region. Here the threshold is set to T = 0.9. The center of gravity of these candidate points is then computed by the following formulas:
x_c = Σ_i p_i · x_i / Σ_i p_i,  y_c = Σ_i p_i · y_i / Σ_i p_i,
where p_i denotes the probability value at the corresponding coordinates and (x_i, y_i) are the coordinates of the corresponding pixel.
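In code, the thresholding and center-of-gravity computation of Step 4 reduce to a few lines, for example:

```python
# Threshold the probability map at T = 0.9 and take the probability-weighted
# center of gravity of the surviving pixels as the optic-disk center.
import numpy as np

def locate_optic_disk(prob_map, threshold=0.9):
    """prob_map: 2-D array of per-pixel optic-disk probabilities in [0, 1]."""
    ys, xs = np.nonzero(prob_map > threshold)            # candidate pixels above the threshold
    if len(xs) == 0:
        return None                                      # no confident optic-disk pixels found
    weights = prob_map[ys, xs]
    cx = float(np.sum(weights * xs) / np.sum(weights))   # probability-weighted center of gravity
    cy = float(np.sum(weights * ys) / np.sum(weights))
    return cx, cy
```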
A multi-step learning rate strategy is used in the present invention, with the learning rate gradually reduced as the number of iterations increases. When the maximum number of iterations is reached or the loss value levels off, training of the deep fully convolutional neural network is stopped and the parameters of the deep segmentation network model are obtained.
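Such a schedule corresponds to a multi-step learning rate decay; a sketch using PyTorch's MultiStepLR is shown below, where the milestones and decay factor are assumptions.

```python
# Multi-step learning rate decay: the rate is reduced step-wise as training progresses.
import torch

optimizer = torch.optim.SGD([torch.zeros(1, requires_grad=True)], lr=1e-3, momentum=0.9)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[20, 35, 45], gamma=0.1)

for epoch in range(50):
    # ... run one epoch of training here ...
    optimizer.step()     # placeholder step so the scheduler example is self-contained
    scheduler.step()     # decay the learning rate at epochs 20, 35 and 45
```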
Experimental hardware: the central processing unit is a 2.8 GHz Intel Xeon E5-1603, and the graphics processor is an NVIDIA GTX 1080 with 8 GB of video memory. Experimental software: the operating system is Ubuntu 14.04 LTS, and the deep learning tool is Caffe.
Each layer of the trained deep fully convolutional network model contains weight and bias parameters. The method of the present invention was verified on multiple data sets; taking both localization speed and localization accuracy into account, its performance is better than that of other methods. The results are shown in Fig. 3 and Fig. 4.
As described above, although the present invention has been shown and described with reference to specific preferred embodiments, this must not be construed as a limitation of the invention itself. Various changes in form and detail may be made without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (6)

1. A retinal fundus image optic disk localization method based on deep learning, characterized by comprising the following steps:
(1) selecting a retinal fundus image data set, dividing it into a training set and a test set, and applying the corresponding preprocessing operations to the data of the training set and the test set;
(2) constructing a convolutional neural network model based on the existing VGG network model, removing the fully connected layers of the VGG network model so that a two-dimensional feature map is used directly as the output; when training the model, all network parameters are given initial values by random initialization;
(3) inputting the images of the training set together with the corresponding optic disk labels into the convolutional neural network model for training, and then using the trained model to predict on the test set, obtaining a probability map of the optic disk position;
(4) selecting the regions with larger probability as candidate regions of the optic disk position by thresholding, and computing the center of gravity of these regions to finally determine the position of the optic disk.
2. The retinal fundus image optic disk localization method based on deep learning according to claim 1, characterized in that: the retinal fundus image data set comprises the public data sets ORIGA, MESSIDOR and STARE; half of the ORIGA images and half of the MESSIDOR images are used as the training set, and the remaining halves together with all the STARE images are used as the test set.
3. The retinal fundus image optic disk localization method based on deep learning according to claim 1, characterized in that: the preprocessing operations in step (1) include subtracting the corresponding mean from every picture on each channel, scaling the pictures to a common size, and applying random rotation and translation to the images assigned to the training set to augment the image data.
4. The retinal fundus image optic disk localization method based on deep learning according to claim 1, characterized in that: step (3) comprises: using the preprocessed training set images as the input of the convolutional neural network model, computing the loss between the forward-propagation output and the optic disk labels, and updating the parameters of the model by backpropagation, repeating the updates until the parameters of the convolutional neural network model can represent the features of the input images.
5. The retinal fundus image optic disk localization method based on deep learning according to claim 4, characterized in that: the backbone of the convolutional neural network consists of 5 identical components connected by 2 × 2 max-pooling layers; each component consists of several convolutional layers with 3 × 3 kernels; the outputs of the three deepest components are selected and passed through corresponding deconvolution operations so that their size matches the size of the input; the deconvolved outputs are then concatenated along the fourth dimension to obtain the final output; the loss function of the convolutional neural network model is the cross entropy, whose benefit is that the weight updates are larger when the error is large and slower when the error is small; the cross entropy is defined as L(θ) = −Σ_i [ y_i · log h_θ(x_i) + (1 − y_i) · log(1 − h_θ(x_i)) ], where x denotes the input image, y denotes the label of the corresponding image, and θ denotes the parameters of the network; here the hypothesis function is expressed as h_θ(x) = 1 / (1 + exp(−θ·x)).
6. The retinal fundus image optic disk localization method based on deep learning according to claim 1, characterized in that: in step (4) the threshold is set to T = 0.9, and the center of gravity of the candidate points is computed by the following formulas: x_c = Σ_i p_i · x_i / Σ_i p_i and y_c = Σ_i p_i · y_i / Σ_i p_i, where p_i denotes the probability value at the corresponding coordinates and (x_i, y_i) are the coordinates of the corresponding pixel.
CN201910359261.5A 2019-04-30 2019-04-30 Retinal fundus images optic disk localization method based on deep learning Pending CN110110782A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910359261.5A CN110110782A (en) 2019-04-30 2019-04-30 Retinal fundus images optic disk localization method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910359261.5A CN110110782A (en) 2019-04-30 2019-04-30 Retinal fundus images optic disk localization method based on deep learning

Publications (1)

Publication Number Publication Date
CN110110782A true CN110110782A (en) 2019-08-09

Family

ID=67487704

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910359261.5A Pending CN110110782A (en) 2019-04-30 2019-04-30 Retinal fundus images optic disk localization method based on deep learning

Country Status (1)

Country Link
CN (1) CN110110782A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110992309A (en) * 2019-11-07 2020-04-10 吉林大学 Fundus image segmentation method based on deep information transfer network
CN111079858A (en) * 2019-12-31 2020-04-28 杭州迪普科技股份有限公司 Encrypted data processing method and device
CN111583256A (en) * 2020-05-21 2020-08-25 北京航空航天大学 Dermatoscope image classification method based on rotating mean value operation
CN111667490A (en) * 2020-05-07 2020-09-15 清华大学深圳国际研究生院 Eye fundus picture cup optic disk segmentation method
CN113012093A (en) * 2019-12-04 2021-06-22 深圳硅基智能科技有限公司 Training method and training system for glaucoma image feature extraction

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107644418A (en) * 2017-09-26 2018-01-30 山东大学 Optic disk detection method and system based on convolutional neural networks
CN108614286A (en) * 2018-05-14 2018-10-02 中国科学院高能物理研究所 A kind of flash detection method with three-dimensional position resolution capability
CN108717569A (en) * 2018-05-16 2018-10-30 中国人民解放军陆军工程大学 It is a kind of to expand full convolutional neural networks and its construction method
CN109598733A (en) * 2017-12-31 2019-04-09 南京航空航天大学 Retinal fundus images dividing method based on the full convolutional neural networks of depth

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107644418A (en) * 2017-09-26 2018-01-30 山东大学 Optic disk detection method and system based on convolutional neural networks
CN109598733A (en) * 2017-12-31 2019-04-09 南京航空航天大学 Retinal fundus images dividing method based on the full convolutional neural networks of depth
CN108614286A (en) * 2018-05-14 2018-10-02 中国科学院高能物理研究所 A kind of flash detection method with three-dimensional position resolution capability
CN108717569A (en) * 2018-05-16 2018-10-30 中国人民解放军陆军工程大学 It is a kind of to expand full convolutional neural networks and its construction method

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110992309A (en) * 2019-11-07 2020-04-10 吉林大学 Fundus image segmentation method based on deep information transfer network
CN110992309B (en) * 2019-11-07 2023-08-18 吉林大学 Fundus image segmentation method based on deep information transfer network
CN113012093A (en) * 2019-12-04 2021-06-22 深圳硅基智能科技有限公司 Training method and training system for glaucoma image feature extraction
CN113012093B (en) * 2019-12-04 2023-12-12 深圳硅基智能科技有限公司 Training method and training system for glaucoma image feature extraction
CN111079858A (en) * 2019-12-31 2020-04-28 杭州迪普科技股份有限公司 Encrypted data processing method and device
CN111667490A (en) * 2020-05-07 2020-09-15 清华大学深圳国际研究生院 Eye fundus picture cup optic disk segmentation method
CN111583256A (en) * 2020-05-21 2020-08-25 北京航空航天大学 Dermatoscope image classification method based on rotating mean value operation
CN111583256B (en) * 2020-05-21 2022-11-04 北京航空航天大学 Dermatoscope image classification method based on rotating mean value operation

Similar Documents

Publication Publication Date Title
CN110110782A (en) Retinal fundus images optic disk localization method based on deep learning
CN110197493B (en) Fundus image blood vessel segmentation method
CN106920227B (en) The Segmentation Method of Retinal Blood Vessels combined based on deep learning with conventional method
CN108198184B (en) Method and system for vessel segmentation in contrast images
CN104463140B (en) A kind of colored eye fundus image optic disk automatic positioning method
CN111815574B (en) Fundus retina blood vessel image segmentation method based on rough set neural network
CN108510473A (en) The FCN retinal images blood vessel segmentations of convolution and channel weighting are separated in conjunction with depth
CN108520522A (en) Retinal fundus images dividing method based on the full convolutional neural networks of depth
CN106530283A (en) SVM (support vector machine)-based medical image blood vessel recognition method
CN106408564A (en) Depth-learning-based eye-fundus image processing method, device and system
CN110264424A (en) A kind of fuzzy retinal fundus images Enhancement Method based on generation confrontation network
CN109829877A (en) A kind of retinal fundus images cup disc ratio automatic evaluation method
CN107330449A (en) A kind of BDR sign detection method and device
CN106683080B (en) A kind of retinal fundus images preprocess method
CN110110709A (en) A kind of red white corpuscle differential counting method, system and equipment based on image procossing
CN109166095A (en) A kind of ophthalmoscopic image cup disk dividing method based on generation confrontation mechanism
CN108764342B (en) Semantic segmentation method for optic discs and optic cups in fundus image
CN107180421A (en) A kind of eye fundus image lesion detection method and device
CN110786824B (en) Coarse marking fundus oculi illumination bleeding lesion detection method and system based on bounding box correction network
CN110555845A (en) Fundus OCT image identification method and equipment
CN102567734B (en) Specific value based retina thin blood vessel segmentation method
CN104102899B (en) Retinal vessel recognition methods and device
CN106934816A (en) A kind of eye fundus image Segmentation Method of Retinal Blood Vessels based on ELM
JP2019192215A (en) 3d quantitative analysis of retinal layers with deep learning
CN109697719A (en) A kind of image quality measure method, apparatus and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190809