CN109829877A - Automatic cup-to-disc ratio evaluation method for retinal fundus images - Google Patents

Automatic cup-to-disc ratio evaluation method for retinal fundus images

Info

Publication number
CN109829877A
CN109829877A
Authority
CN
China
Prior art keywords
image
optic
cup
optic disk
fundus images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811099755.6A
Other languages
Chinese (zh)
Inventor
郭璠
赵鑫
谢斌
赵于前
廖胜辉
梁毅雄
邹北骥
麦宇翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central South University
Original Assignee
Central South University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central South University
Priority to CN201811099755.6A
Publication of CN109829877A
Legal status: Pending (current)


Abstract

The invention discloses an automatic cup-to-disc ratio evaluation method for retinal fundus images, comprising the following steps. Step A: extract an optic disk region image from the retinal fundus image. Step B: build and train an optic disk and optic cup segmentation network based on a deep convolutional neural network. Step C: obtain the optic disk region image of the retinal fundus image under test through step A, then input this optic disk region image into the optic disk and optic cup segmentation network to output the optic disk segmentation mask image and the optic cup segmentation mask image under test. Step D: calculate the cup-to-disc ratio of the retinal fundus image from the optic disk and optic cup segmentation mask images under test. The method runs fast, performs well, requires no manual intervention, costs little, and is highly versatile; it can be widely applied to assisted glaucoma screening.

Description

Automatic cup-to-disc ratio evaluation method for retinal fundus images
Technical field
The invention belongs to the field of image information processing, and in particular relates to an automatic cup-to-disc ratio evaluation method for retinal fundus images based on image processing and deep convolutional neural networks.
Background art
Glaucoma is the second leading blinding eye disease worldwide and the most serious of the irreversible blinding eye diseases. Although glaucoma cannot yet be completely cured, early detection and treatment can effectively reduce the possibility of blindness. Ophthalmologists often use Heidelberg retina tomography (HRT) and optical coherence tomography (OCT) to detect glaucoma, but these examinations are time-consuming and operating the instruments requires professional skill. Digital fundus imaging, by contrast, is widely used in glaucoma detection and is more economical and accurate. Compared with OCT and HRT, digital fundus cameras are easier to use for basic ophthalmic diagnosis. Moreover, manual inspection is unsuitable for large-scale glaucoma screening, so designing a reliable early glaucoma detection system is extremely important.
Generally speaking, apart from elevated intraocular pressure, an enlarged vertical cup-to-disc ratio (CDR) is an important indicator for assessing glaucoma. To calculate the CDR accurately, precise segmentation of the optic disk and optic cup becomes a very important task.
At present, optic disk segmentation is generally divided into two steps: optic disk localization and optic disk segmentation. Li et al. proposed first clustering the brightest pixels of the retinal fundus image to generate optic disk candidate regions, then applying principal component analysis to the candidate regions; the location where the distance between the original fundus image and its projection onto the optic disk space is minimal is taken as the optic disk center (Proceedings of International Conference on Image Processing, 2001, volume 2). This algorithm is not robust in localizing fundus images captured under different imaging conditions, and the localization computation is very time-consuming. Zhun Fan et al. proposed training a model by structure learning to obtain the optic disk edge, with threshold-based post-processing and the Hough circle transform used to refine the optic disk edge (IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2018, volume 22). Although this approach can achieve good optic disk segmentation results, it relies too heavily on hand-crafted image features.
Compared with the optic disk, the blurred edge of the optic cup makes optic cup segmentation more difficult. Wong et al. segment the optic cup according to vascular distribution and morphological features: a coarse cup boundary is first obtained with a level set, blood vessel boundaries are then identified by combining the edge information extracted with the wavelet transform, and the cup boundary is finally determined from the bending angles of the extracted vessels. For an introduction to this method, see the paper "Automated detection of kinks from blood vessels for optic cup segmentation in retinal images" (International Society for Optics and Photonics, 2009). Such methods rely too heavily on extracting the bending angles of the vessels near the optic cup, and the algorithm is not robust under low contrast.
Among patents on cup-to-disc ratio calculation, Jin Xiaoliang et al. (patent publication No. CN106214120A) extract the optic disk and optic cup by thresholding and propose calculating the cup-to-disc ratio as the ratio of the diameters of the minimum circumscribed circles of the optic cup and optic disk; but since the optic disk and optic cup are ellipse-like structures, the minimum circumscribed circle inevitably introduces error into the cup-to-disc ratio calculation. Liu Jiang et al. extract the optic disk and cup boundaries by thresholding and level-set techniques respectively, and feed the smoothed optic disk and optic cup data into an adaptive model to calculate the cup-to-disc ratio (patent publication No. CN102112044A). This way of obtaining the cup-to-disc ratio is very similar to that of Li Yixuan et al. (patent publication No. CN107704886A), who extract the cup-to-disc ratio directly with a convolutional neural network; both infer the cup-to-disc ratio through a trainable model, differing only slightly in how the optic disk and optic cup are segmented. The main problem with these methods is that predicting the cup-to-disc ratio by training an additional model is not only computationally complex but, because the optic cup and optic disk segmentation contains errors, the prediction model accumulates those errors and makes the cup-to-disc ratio calculation less reliable.
In this context, it is particularly important to develop a robust, accurate method that can automatically calculate the cup-to-disc ratio of retinal fundus images.
Summary of the invention
The technical problem to be solved by the invention is to provide an automatic cup-to-disc ratio evaluation method for retinal fundus images based on image processing and deep convolutional neural networks. It solves the problems that existing optic disk and optic cup segmentation methods require hand-crafted features and cannot precisely segment the optic disk and the optic cup at the same time.
The technical solution adopted by the invention is as follows:
An automatic cup-to-disc ratio evaluation method for retinal fundus images, comprising the following steps:
Step A: extract an optic disk region image from the retinal fundus image;
Step B: build and train an optic disk and optic cup segmentation network based on a deep convolutional neural network:
Step B1: build an optic disk and optic cup segmentation network based on a deep convolutional neural network, the network comprising encoding layers and decoding layers; the first encoding layer includes an input layer for inputting the optic disk region image, and the last decoding layer includes an output layer for outputting the optic disk segmentation mask image and the optic cup segmentation mask image;
Step B2: obtain the optic disk region image of each sample retinal fundus image through step A as a sample optic disk region image, and perform ground-truth calibration on the sample optic disk region image to obtain the sample optic disk ground-truth calibration image and the sample optic cup ground-truth calibration image; with the sample optic disk region image as input and the sample optic disk and optic cup ground-truth calibration images as output, train the optic disk and optic cup segmentation network;
Step C: obtain the optic disk region image of the retinal fundus image under test through step A, then input this optic disk region image into the optic disk and optic cup segmentation network to output the optic disk segmentation mask image and the optic cup segmentation mask image under test;
Step D: calculate the cup-to-disc ratio of the retinal fundus image from the optic disk and optic cup segmentation mask images under test.
Because the optic disk region image is extracted from the retinal fundus image, little computation is needed to obtain the cup-to-disc ratio; the deep convolutional neural network produces the optic disk segmentation mask image and the optic cup segmentation mask image simultaneously, realizing end-to-end simultaneous segmentation of the optic disk and optic cup, from which the cup-to-disc ratio is then calculated accurately. The method runs fast, performs well, requires no manual intervention, costs little, and is highly versatile.
Further, the optic disk and optic cup segmentation network also includes several skip layers. Each skip layer is placed between a corresponding pair of encoding and decoding layers; the skip layers cascade the last feature map of each encoding layer to the decoding layer of corresponding size and position.
The skip layers ease gradient back-propagation during network training and prevent vanishing gradients; meanwhile, the encoder feature maps, which carry coarse semantics but detailed spatial location information, help the decoding layers construct the optic disk and optic cup segmentation masks for the retinal fundus.
Further, the optic disk and optic cup segmentation network also includes several feature sharing layers. Each feature sharing layer is placed between two adjacent encoding layers; it down-samples the input of the previous encoding layer by a factor of 2 in length and width, concatenates it with the pooled feature map of the previous encoding layer, and feeds the result to the current encoding layer.
The feature sharing layers give every encoding layer access to the original image input at different scales and to the feature maps of all layers preceding the current encoding layer, thereby achieving feature sharing.
Further, the loss function L(p, g) used to train the optic disk and optic cup segmentation network is the fusion of the cross-entropy loss function L_cross-entropy and the Dice function L_dice. The specific formula is:
L(p, g) = L_cross-entropy + L_dice
where p_i^c denotes the probability that the i-th pixel of the sample optic disk segmentation mask image or the sample optic cup segmentation mask image belongs to class c, w_c denotes the class-c pixel weight balancing the cross-entropy loss and the Dice loss under the two-class weighting, g_i^c denotes the probability that the i-th pixel of the sample optic disk ground-truth calibration image or the sample optic cup ground-truth calibration image belongs to class c, N denotes the number of pixels in the output image of the optic disk and optic cup segmentation network, and K denotes the number of pixel classes in that output image.
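Only the fused form of the loss survives in the text above; the component terms presumably follow the standard weighted cross-entropy and soft-Dice definitions over the symbols p_i^c, g_i^c, w_c, N, and K. The sketch below is a minimal NumPy version under that assumption, not the patent's literal formula:

```python
import numpy as np

def combined_loss(p, g, w=(0.5, 0.5), eps=1e-7):
    """L(p, g) = L_cross-entropy + L_dice for one mask.

    p, g: (N, K) arrays of predicted probabilities p_i^c and one-hot
    ground-truth calibration values g_i^c; w holds the class weights w_c.
    """
    p = np.clip(p, eps, 1.0 - eps)
    ce = -np.mean(np.sum(np.asarray(w) * g * np.log(p), axis=1))  # weighted cross-entropy
    dice = 1.0 - 2.0 * np.sum(p * g) / (np.sum(p * p) + np.sum(g * g) + eps)  # soft Dice
    return ce + dice

# Total network loss as described later in the text: disk loss plus cup loss.
# total = combined_loss(p_disk, g_disk) + combined_loss(p_cup, g_cup)
```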
Further, the specific processing procedure of step A is as follows:
Step A1: process the original color retinal fundus image with morphological operations to obtain the gray-level enhanced fundus image F';
Step A2: extract the brightest region from the gray-level enhanced fundus image F' to obtain the brightest-pixel-region fundus image f(i);
Step A3: extract the vessel segmentation image f(bv) from the original retinal fundus image;
Step A4: localize the optic disk region based on sliding-window confidence and extract the optic disk region image:
Step A41: fuse the gray-level enhanced fundus image F' obtained in step A1 with the vessel segmentation image f(bv) obtained in step A3 to obtain the fused image f(ibv);
Step A42: compute sliding-window confidences for the brightest-pixel-region fundus image f(i), the vessel segmentation image f(bv), and the fused image f(ibv), respectively:
Given the preset optic disk radius r of the retinal fundus image, scan f(i), f(bv), and f(ibv) with a sliding window of size 3r × 3r and stride r/2 to obtain the sliding-window confidence score maps S(i), S(bv), and S(ibv) of the three images, where the confidence score of the current window is the sum of the gray values of all pixels inside the window;
Normalize the sliding-window confidence score maps S(i), S(bv), and S(ibv) respectively, and fuse the normalized score maps S(i)', S(bv)', and S(ibv)' to obtain the fused sliding-window confidence score map S of the retinal fundus image;
Select the sliding-window position with the highest fused confidence score as the optic disk position, and crop the optic disk region out of the retinal fundus image to obtain the optic disk region image of the retinal fundus image.
The sliding-window technique realizes automatic localization of the optic disk region image and enhances the accuracy and robustness of the localization algorithm, as sketched below.
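As a concrete reading of step A42 onward, the sketch below scores every window, max-normalizes each of the three score maps, and averages them (embodiments 2 and 3 describe the fusion as summing and averaging the normalized confidences); it returns the top-left corner of the winning window. It is an illustrative assumption, not the patent's code:

```python
import numpy as np

def window_scores(img, r):
    """Sliding-window confidence: sum of gray values in each 3r x 3r window, stride r/2."""
    win, step = 3 * r, max(r // 2, 1)
    h, w = img.shape
    return {(y, x): float(img[y:y + win, x:x + win].sum())
            for y in range(0, h - win + 1, step)
            for x in range(0, w - win + 1, step)}

def locate_optic_disk(f_i, f_bv, f_ibv, r):
    """Fuse the normalized score maps of f(i), f(bv), f(ibv); return the best window position."""
    fused = {}
    for img in (f_i, f_bv, f_ibv):
        scores = window_scores(img.astype(np.float64), r)
        peak = max(scores.values())
        for pos, v in scores.items():
            fused[pos] = fused.get(pos, 0.0) + (v / peak) / 3.0  # max-normalize, then average
    return max(fused, key=fused.get)  # top-left corner of the highest-confidence window
```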
Further, the specific processing procedure of step A1 is as follows:
Apply the top-hat transform and the bottom-hat transform to the retinal fundus image respectively, obtaining the top-hat image G_T and the bottom-hat image G_B of the retinal fundus image:
G_T = F - F∘B
G_B = F•B - F
where F(x, y) denotes the gray-level image of the retinal fundus image, B(u, v) denotes the structuring element, ∘ and • represent the opening and closing operations (F∘B = (F⊖B)⊕B and F•B = (F⊕B)⊖B), and ⊖ and ⊕ represent erosion and dilation;
From the gray-level image F of the retinal fundus image, the top-hat image G_T, and the bottom-hat image G_B, compute the gray-level enhanced fundus image F':
F' = F + G_T - G_B
Because different fundus imaging devices operating under adverse conditions can produce fundus images with low contrast or overall dimness, this scheme uses morphological processing: the top-hat transform enhances bright targets in a darker background, while the bottom-hat transform, conversely, enhances dark targets in a brighter background, thus efficiently realizing fundus image enhancement. A short sketch of this step is given below.
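A minimal OpenCV sketch of step A1, assuming the 40 × 40 square structuring element given in embodiment 1; OpenCV's MORPH_TOPHAT and MORPH_BLACKHAT compute exactly G_T and G_B:

```python
import cv2
import numpy as np

def enhance_fundus(bgr, ksize=40):
    """Step A1: F' = F + G_T - G_B on the gray-level fundus image."""
    F = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    B = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
    G_T = cv2.morphologyEx(F, cv2.MORPH_TOPHAT, B)    # F - (F opened by B)
    G_B = cv2.morphologyEx(F, cv2.MORPH_BLACKHAT, B)  # (F closed by B) - F
    Fp = F.astype(np.int32) + G_T - G_B               # widen dtype to avoid uint8 wrap-around
    return np.clip(Fp, 0, 255).astype(np.uint8)
```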
Further, the specific processing procedure of step A2 is as follows:
Step A21: compute the histogram of the retinal fundus image;
Step A22: accumulate pixel counts starting from the brightest gray level and moving toward darker levels;
Step A23: stop accumulating when the accumulated number of pixels exceeds a preset proportion of the total number of pixels in the image;
Step A24: set the pixels of the retinal fundus image outside the brightest region to zero, obtaining the brightest-pixel-region fundus image f(i); see the sketch after this list.
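A small NumPy sketch of steps A21 to A24, assuming the 6.5% preset proportion used in the embodiments:

```python
import numpy as np

def brightest_region(F_enh, ratio=0.065):
    """Steps A21-A24: keep roughly the brightest 6.5% of pixels, zero the rest."""
    hist = np.bincount(F_enh.ravel(), minlength=256)  # A21: gray-level histogram
    total, acc, level = F_enh.size, 0, 255
    while level > 0:                                  # A22: accumulate from the brightest level
        acc += hist[level]
        if acc > total * ratio:                       # A23: stop past the preset proportion
            break
        level -= 1
    f_i = F_enh.copy()
    f_i[F_enh < level] = 0                            # A24: zero the non-brightest pixels
    return f_i
```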
Further, the specific processing procedure of step A3 is as follows:
Step A31: extract the green-channel values of the original retinal fundus image to obtain the green-channel image F_G of the retinal fundus image;
Step A32: apply the contrast-limited adaptive histogram equalization algorithm to the green-channel image F_G for contrast enhancement, obtaining the enhanced green-channel fundus image F'_G;
Step A33: apply the bottom-hat transform and the top-hat transform to the enhanced green-channel fundus image F'_G respectively, and compute their difference to obtain the noisy vessel image F_vessel; because vessel regions are darker than the surrounding background, combining the top-hat transform, which enhances bright targets in a darker background, with the bottom-hat transform, which enhances dark targets in a brighter background, realizes vessel segmentation efficiently:
F_vessel = G_B(F'_G) - G_T(F'_G)
where G_B(F'_G) and G_T(F'_G) denote the bottom-hat and top-hat transforms of the enhanced green-channel fundus image F'_G;
Step A34: apply median filtering to the noisy vessel image F_vessel to obtain the final vessel segmentation image f(bv). The image obtained in step A33 contains noise points resembling salt-and-pepper noise, which median filtering suppresses effectively; the sketch below illustrates the whole step.
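An OpenCV sketch of steps A31 to A34. The CLAHE parameters and the structuring element for this step are not given in the text, so the values below are assumptions; the 21-pixel median window comes from embodiment 1:

```python
import cv2

def segment_vessels(bgr, ksize=15, median=21):
    """Steps A31-A34: F_vessel = G_B(F'_G) - G_T(F'_G), then median filtering."""
    F_G = bgr[:, :, 1]                                   # A31: green channel (OpenCV is BGR)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    F_Gp = clahe.apply(F_G)                              # A32: contrast-enhanced F'_G
    B = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    F_vessel = cv2.subtract(                             # A33: bottom-hat minus top-hat
        cv2.morphologyEx(F_Gp, cv2.MORPH_BLACKHAT, B),
        cv2.morphologyEx(F_Gp, cv2.MORPH_TOPHAT, B))
    return cv2.medianBlur(F_vessel, median)              # A34: suppress salt-and-pepper noise
```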
Further, the specific processing procedure of step D is as follows:
Step D1: apply morphological dilation and erosion to the optic disk segmentation mask image and the optic cup segmentation mask image under test obtained in step C to remove isolated points, obtaining the preprocessed optic disk and optic cup segmentation mask images under test;
Step D2: extract the optic disk edge and the optic cup edge from the preprocessed optic disk and optic cup segmentation mask images respectively, then perform ellipse fitting to obtain the final optic disk and optic cup segmentation mask image;
Step D3: from the final optic disk and optic cup segmentation mask image obtained in step D2, compute the vertical extents of the optic disk and of the optic cup; the maximum vertical extent of the optic disk is the vertical disc diameter VDD, and the maximum vertical extent of the optic cup is the vertical cup diameter VCD; the cup-to-disc ratio CDR is computed from VDD and VCD as CDR = VCD / VDD.
Further, the ellipse fitting in step D2 includes: extract the optic disk edge and the optic cup edge from the preprocessed optic disk and optic cup segmentation mask images using the Canny algorithm, then fit ellipses to the optic disk edge and the optic cup edge by the least-squares method, as sketched below.
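A compact OpenCV reading of step D2, using the Canny thresholds 10 and 150 given in embodiment 1; cv2.fitEllipse performs a least-squares ellipse fit and needs at least five edge points:

```python
import cv2
import numpy as np

def fit_ellipse_mask(mask, t1=10, t2=150):
    """Step D2: Canny edges on a preprocessed mask, then a least-squares ellipse fit."""
    edges = cv2.Canny(mask, t1, t2)
    ys, xs = np.nonzero(edges)
    pts = np.column_stack([xs, ys]).astype(np.float32)
    ellipse = cv2.fitEllipse(pts)            # ((cx, cy), (major, minor), angle)
    fitted = np.zeros_like(mask)
    cv2.ellipse(fitted, ellipse, 255, -1)    # filled fitted ellipse as the final mask
    return fitted
```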
Beneficial effects
The invention discloses an automatic cup-to-disc ratio evaluation method for retinal fundus images based on image processing and deep convolutional neural networks, comprising the following steps. Step A: extract an optic disk region image from the retinal fundus image. Step B: build and train an optic disk and optic cup segmentation network based on a deep convolutional neural network. Step C: obtain the optic disk region image of the retinal fundus image under test through step A, then input it into the optic disk and optic cup segmentation network to output the optic disk and optic cup segmentation mask images under test. Step D: calculate the cup-to-disc ratio of the retinal fundus image from the optic disk and optic cup segmentation mask images under test. Because the optic disk region image is extracted from the retinal fundus image, little computation is needed for the cup-to-disc ratio; the deep convolutional neural network produces the optic disk and optic cup segmentation mask images simultaneously, realizing end-to-end simultaneous segmentation of the optic disk and optic cup, and the cup-to-disc ratio is then calculated accurately from the segmented optic disk and optic cup. The method runs fast, performs well, requires no manual intervention, costs little, and is highly versatile. Exploiting the physiological structure of the optic disk and optic cup, it extracts blood vessels and enhances the retinal fundus image through the bottom-hat and top-hat transforms of digital image processing, and combines them with the sliding-window technique to localize the optic disk accurately and quickly. The method can be widely applied to assisted glaucoma screening.
Description of the drawings
Fig. 1 is the flowchart of the automatic cup-to-disc ratio evaluation method for retinal fundus images based on image processing and deep convolutional neural networks in an example of the invention;
Fig. 2 is the detailed flowchart of the automatic cup-to-disc ratio evaluation method for retinal fundus images based on image processing and deep convolutional neural networks in an example of the invention;
Fig. 3 is the flowchart of localizing the optic disk position in the retinal fundus image in an example of the invention;
Fig. 4 shows the result of each optic disk localization step in embodiment 1, where Fig. 4(a) is the original color retinal fundus image, Fig. 4(b) the bottom-hat image, Fig. 4(c) the top-hat image, Fig. 4(d) the gray-level enhanced fundus image, Fig. 4(e) the brightest-pixel-region fundus image, Fig. 4(f) the vessel segmentation image, Fig. 4(g) the fused image, and Fig. 4(h) the optic disk localization result;
Fig. 5 shows the structure of the optic disk and optic cup segmentation network based on deep convolutional neural networks in an example of the invention;
Fig. 6 shows the results of segmentation by the convolutional neural network in embodiment 1, where Fig. 6(a) is the region of interest input to the segmentation network, Fig. 6(b) the optic disk and optic cup structure obtained after segmentation (the central circular part represents the optic cup structure and the annular part the optic disk structure), and Fig. 6(c) the optic disk and optic cup structure after ellipse fitting;
Fig. 7 shows the result of each optic disk localization step in embodiment 2, where Fig. 7(a) is the original color retinal fundus image, Fig. 7(b) the bottom-hat image, Fig. 7(c) the top-hat image, Fig. 7(d) the gray-level enhanced fundus image, Fig. 7(e) the brightest-pixel-region fundus image, Fig. 7(f) the vessel segmentation image, Fig. 7(g) the fused image, and Fig. 7(h) the optic disk localization result;
Fig. 8 shows the results of segmentation by the convolutional neural network in embodiment 2, where Fig. 8(a) is the region of interest input to the segmentation network, Fig. 8(b) the optic disk and optic cup structure obtained after segmentation (the central circular part represents the optic cup structure and the annular part the optic disk structure), and Fig. 8(c) the optic disk and optic cup structure after ellipse fitting;
Fig. 9 shows the result of each optic disk localization step in embodiment 3, where Fig. 9(a) is the original color retinal fundus image, Fig. 9(b) the bottom-hat image, Fig. 9(c) the top-hat image, Fig. 9(d) the gray-level enhanced fundus image, Fig. 9(e) the brightest-pixel-region fundus image, Fig. 9(f) the vessel segmentation image, Fig. 9(g) the fused image, and Fig. 9(h) the optic disk localization result;
Fig. 10 shows the results of segmentation by the convolutional neural network in embodiment 3, where Fig. 10(a) is the region of interest input to the segmentation network, Fig. 10(b) the optic disk and optic cup structure obtained after segmentation (the central circular part represents the optic cup structure and the annular part the optic disk structure), and Fig. 10(c) the optic disk and optic cup structure after ellipse fitting.
Detailed description of the embodiments
The invention is further described below with reference to the accompanying drawings:
Embodiment 1:
In this embodiment, the cup-to-disc ratio of a retinal fundus image is evaluated automatically by the following steps; the overall flow is shown in Fig. 1 and the detailed flow in Fig. 2:
Step A: localize the optic disk region of the retinal fundus image using image processing and the sliding-window method, crop the localized optic disk region out of the retinal fundus image as the region of interest (ROI), and obtain the optic disk region image of the retinal fundus image; the optic disk localization process is shown in Fig. 3.
Step A1: process the original color retinal fundus image with morphological operations to obtain the gray-level enhanced fundus image F':
Because different fundus imaging devices operating under adverse conditions can produce fundus images with low contrast or overall dimness, morphological processing can be used to enhance dim fundus images.
First convert the color retinal fundus image [Fig. 4(a)] into a gray-level image, then apply the bottom-hat transform and the top-hat transform to the gray-level fundus image respectively, obtaining the top-hat image G_T [Fig. 4(b)] and the bottom-hat image G_B [Fig. 4(c)] of the retinal fundus image:
G_T = F - F∘B
G_B = F•B - F
where F(x, y) denotes the gray-level image of the retinal fundus image and B(u, v) denotes the structuring element; in this embodiment the structuring element is a 40 × 40 square.
∘ and • represent the opening and closing operations, and ⊖ and ⊕ represent erosion and dilation, respectively;
From the gray-level image F of the retinal fundus image, the top-hat image G_T, and the bottom-hat image G_B, compute the gray-level enhanced fundus image F' [Fig. 4(d)]:
F' = F + G_T - G_B    (1)
Step A2: extract the brightest region from the gray-level enhanced fundus image F' to obtain the brightest-pixel-region fundus image f(i):
Step A21: compute the histogram of the retinal fundus image;
Step A22: accumulate pixel counts starting from the brightest gray level and moving toward darker levels;
Step A23: stop accumulating when the accumulated number of pixels exceeds a preset proportion of the total number of pixels in the image; in this embodiment the preset proportion is 6.5%;
Step A24: set the pixels of the retinal fundus image outside the brightest region, i.e. the non-brightest region, to 0, obtaining the brightest-pixel-region fundus image f(i) [Fig. 4(e)].
Step A3: extract the vessel segmentation image f(bv) from the original color retinal fundus image [Fig. 4(a)]:
Step A31: extract the green-channel values of the original retinal fundus image to obtain the green-channel image F_G of the retinal fundus image;
Step A32: apply the contrast-limited adaptive histogram equalization algorithm, i.e. the CLAHE algorithm, to the green-channel image F_G for contrast enhancement, obtaining the enhanced green-channel fundus image F'_G;
Step A33: apply the bottom-hat transform and the top-hat transform to the enhanced green-channel fundus image F'_G respectively, and compute their difference to obtain the noisy vessel image F_vessel:
F_vessel = G_B(F'_G) - G_T(F'_G)    (2)
where G_B(F'_G) and G_T(F'_G) denote the bottom-hat and top-hat transforms of the enhanced green-channel fundus image F'_G;
Step A34: apply median filtering to the noisy vessel image F_vessel to obtain the final vessel segmentation image f(bv) [Fig. 4(f)]; the window size of the median filter is 21.
Step A4: localize the optic disk region based on sliding-window confidence and extract the optic disk region image:
Step A41: fuse the gray-level enhanced fundus image F' obtained in step A1 with the vessel segmentation image f(bv) obtained in step A3 to obtain the fused image f(ibv) [Fig. 4(g)]:
Step A42: compute sliding-window confidences for the brightest-pixel-region fundus image f(i) [Fig. 4(e)], the vessel segmentation image f(bv) [Fig. 4(f)], and the fused image f(ibv) [Fig. 4(g)]:
Given the preset optic disk radius r of the retinal fundus image, scan f(i), f(bv), and f(ibv) with a sliding window of size 3r × 3r (where r is the optic disk radius) and stride r/2, obtaining the sliding-window confidence score maps S(i), S(bv), and S(ibv) of the three images; the confidence score of the current window is the sum of the gray values of all pixels inside the window;
Here the preset optic disk radius is the maximum value computed from the ground-truth calibrated optic disk positions of the sample optic disk region images; once the optic disk position has been localized, a region image containing the optic disk can be cropped using this preset radius as a parameter, the optic disk segmentation mask is then obtained by the trained segmentation network, and the precise diameter of the current optic disk is finally computed from the mask.
To make the scores comparable on the same scale, the sliding-window confidence score maps S(i), S(bv), and S(ibv) of the brightest-pixel-region fundus image f(i), the vessel segmentation image f(bv), and the fused image f(ibv) are normalized:
Each of S(i), S(bv), and S(ibv) is normalized by its maximum value, giving the normalized sliding-window confidence score maps S(i)', S(bv)', and S(ibv)':
S(i)' = S(i)/max(S(i)),  S(bv)' = S(bv)/max(S(bv)),  S(ibv)' = S(ibv)/max(S(ibv))
where max(·) denotes the maximum over all sliding-window confidence scores of a map.
The normalized score maps S(i)', S(bv)', and S(ibv)' are then fused to obtain the fused sliding-window confidence score map S of the retinal fundus image:
S = (S(i)' + S(bv)' + S(ibv)') / 3
The sliding-window position with the highest fused confidence score is selected as the optic disk position [localization result shown in Fig. 4(h)], and the optic disk region is cropped out of the retinal fundus image to obtain the optic disk region image of the retinal fundus image [Fig. 6(a)].
Step B: build and train the optic disk and optic cup segmentation network based on a deep convolutional neural network.
Step B1: build the optic disk and optic cup segmentation network; the sizes and strides of the convolution kernels and the numbers of encoding and decoding layers in the network are preset from experimental experience, and the initial weight parameters of the network are set by random numbers, as shown in Fig. 5:
The optic disk and optic cup segmentation network comprises five encoding layers and four decoding layers, with the first four encoding layers arranged in one-to-one correspondence with the decoding layers;
An encoding layer consists of a series of convolutional layers, batch normalization layers, activation layers, and max-pooling layers. For convenience, we call a group comprising one convolutional layer with 3 × 3 kernels and stride 1, one batch normalization layer, and one ReLU activation layer a conv_bn_relu module. The encoder starts with the input layer, which receives a 256 × 256 × 3 RGB image matrix. The input layer is followed by three conv_bn_relu modules, each with 32 convolution kernels; we call the input layer and these three conv_bn_relu modules the first encoding layer. The feature maps of the last conv_bn_relu module of the first encoding layer pass through a max-pooling layer of size 2 × 2, yielding feature maps of size 128 × 128 × 32. As in the first encoding layer, the pooled feature maps feed two conv_bn_relu modules whose kernel number is 64; we call the pooling operation and these two conv_bn_relu modules the second encoding layer. Similarly, pooling followed by two conv_bn_relu modules with 128 kernels forms the third encoding layer; likewise there are a fourth and a fifth encoding layer whose conv_bn_relu modules contain 256 and 512 convolution kernels respectively. The last encoding layer is the fifth, and the final encoded feature maps have size 16 × 16 × 512.
After encoding, the feature map scale has been reduced 16-fold, with all features encoded into 512 feature maps. To reconstruct segmentation masks of the same size as the input image, decoding layers are needed to decode the 512 encoded feature maps and generate the optic disk and optic cup segmentation mask images. In the encoding structure, four encoding layers perform pooling; for each of them a decoding layer is added at the same level to restore the size of the corresponding feature maps, the size being restored by a transposed convolution operation. That is, the last decoding layer is added at the level of the first encoding layer, the second-to-last decoding layer at the level of the second encoding layer, and the same design applies to the third and fourth encoding layers; the last encoding layer (the fifth) has no corresponding decoding layer. Hence, in the encoding-decoding structure, the number of decoding layers is one fewer than the number of encoding layers. A decoding layer consists mainly of a transposed convolutional layer and conv_bn_relu modules. The feature maps of the fifth (last) encoding layer undergo a transposed convolution containing 256 kernels of size 2 × 2 and stride 2, followed by two conv_bn_relu modules (with 256 kernels each); we call this transposed convolutional layer and the two conv_bn_relu modules the fourth-to-last decoding layer, which corresponds to the fourth encoding layer. With the same structure, the third-to-last decoding layer has 128 kernels in its transposed convolution and two conv_bn_relu modules and corresponds to the third encoding layer. Likewise, the second-to-last and last decoding layers contain 64 and 32 convolution kernels respectively. After the conv_bn_relu modules of the last decoding layer comes the output layer, whose activation function is sigmoid, with kernels of size 3 × 3 and number 2.
The output layer produces a 2-channel output feature map; that is, the optic disk and optic cup segmentation network outputs the optic disk segmentation mask image and the optic cup segmentation mask image separately, with sigmoid as the activation function.
The optic disk and optic cup segmentation network structure also includes several skip layers placed between corresponding encoding and decoding layers; the skip layers cascade the last feature map of each encoding layer to the decoding layer of corresponding size and position.
The skip layers pass the encoder feature maps directly to the decoding layers. This design eases gradient back-propagation during network training and prevents vanishing gradients, while the encoder feature maps, with coarse semantics but detailed spatial location information, help the decoding layers construct the optic disk and optic cup segmentation masks. Concretely, the last feature map of each encoding layer is cascaded to the decoding layer of corresponding size and position; the skip-connected encoder feature maps have the same number of channels as the decoder feature maps, so the concatenation doubles the number of feature maps in the decoding layer. Specifically, the first encoding layer skips to the last decoding layer, the second encoding layer skips to the second-to-last decoding layer, and the skip connections of the remaining encoder-decoder pairs follow the same design as the second.
The optic disk and optic cup segmentation network structure also includes several feature sharing layers placed between adjacent encoding layers; each feature sharing layer down-samples the input of the previous encoding layer by a factor of 2 in length and width, concatenates it with the pooled feature map of the previous encoding layer, and feeds the result to the current encoding layer.
Feature sharing is a design for the encoder: each encoding layer takes as input not only the previous layer's feature maps but also the inputs preceding the current encoding layer, which assist the current layer's feature extraction. Concretely, the input of the first encoding layer is down-sampled by 2 in length and width and concatenated with the pooled feature map of the first encoding layer as the input of the second encoding layer; the input of the second encoding layer is down-sampled by 2 in length and width and concatenated with the pooled feature map of the second encoding layer as the input of the third encoding layer; the inputs of the fourth and fifth encoding layers follow the same design as the third. This design gives every encoding layer the original image input at different scales and the feature maps of all layers preceding the current encoding layer, achieving feature sharing. A sketch of the whole architecture follows.
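The sketch below assembles the described encoder-decoder in Keras. It follows the stated kernel counts, skip layers, and feature sharing layers; the down-sampling operator for the shared inputs is not specified in the text, so average pooling is an assumption, and the whole block is an illustrative reading rather than the patent's exact network:

```python
from tensorflow.keras import layers, Model

def conv_bn_relu(x, n):
    """One conv_bn_relu module: 3x3 conv with stride 1, batch normalization, ReLU."""
    x = layers.Conv2D(n, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def build_disk_cup_net():
    inp = layers.Input((256, 256, 3))                    # 256 x 256 x 3 RGB input
    widths, skips = [32, 64, 128, 256, 512], []
    x = shared = inp
    for i, n in enumerate(widths):                       # five encoding layers
        if i > 0:
            pooled = layers.MaxPool2D(2)(x)              # 2x2 max pooling
            shared = layers.AvgPool2D(2)(shared)         # feature sharing: halve prior input
            x = shared = layers.Concatenate()([shared, pooled])
        for _ in range(3 if i == 0 else 2):              # 3 modules in layer one, else 2
            x = conv_bn_relu(x, n)
        skips.append(x)                                  # last feature map, for the skip layer
    for n, skip in zip((256, 128, 64, 32), skips[-2::-1]):  # four decoding layers
        x = layers.Conv2DTranspose(n, 2, strides=2)(x)   # transposed conv restores the size
        x = layers.Concatenate()([x, skip])              # skip layer: channel count doubles
        for _ in range(2):
            x = conv_bn_relu(x, n)
    out = layers.Conv2D(2, 3, padding="same", activation="sigmoid")(x)  # disk & cup masks
    return Model(inp, out)
```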
The optic disk and optic cup segmentation network of this embodiment uses a fusion strategy that combines functions from two different fields: the loss function used for iteratively training the network is the fusion of the two-class weighted cross-entropy loss function L_cross-entropy and the Dice function L_dice. In the invention, the total network loss is the sum of the optic disk segmentation mask loss and the optic cup segmentation mask loss. The loss between a segmentation mask and its ground-truth calibration mask is computed as:
L(p, g) = L_cross-entropy + L_dice    (3)
where p_i^c denotes the probability that the i-th pixel of the network's output image belongs to class c, g_i^c denotes the probability that the i-th pixel of the sample optic disk ground-truth calibration image or the sample optic cup ground-truth calibration image belongs to class c, and w_c denotes the class-c pixel weight under the two-class weighting; in this example w_c, the weight trading off the cross-entropy loss and the Dice loss under the two-class weighting, is set to 0.5.
In this embodiment, the above loss function is used to compute the optic disk loss L_D between the sample optic disk segmentation mask image and the sample optic disk ground-truth calibration image, and the optic cup loss L_C between the sample optic cup segmentation mask image and the sample optic cup ground-truth calibration image; the sum of L_D and L_C serves as the loss of the optic disk and optic cup segmentation network, by which the network is trained.
When computing the optic disk loss L_D with the loss function, the pixels of the sample optic disk segmentation mask image and of the sample optic disk ground-truth calibration image have 2 classes, i.e. K = 2, with optic disk pixels labeled c = 1 and optic disk background pixels labeled c = 2; L_D is computed from the sample optic disk segmentation mask image and the sample optic disk ground-truth calibration image.
When computing the optic cup loss L_C with the loss function, the pixels of the sample optic cup segmentation mask image and of the sample optic cup ground-truth calibration image have 2 classes, i.e. K = 2, with optic cup pixels labeled c = 1 and optic cup background pixels labeled c = 2; L_C is computed from the sample optic cup segmentation mask image and the sample optic cup ground-truth calibration image.
Step B2: obtain the optic disk region image of each sample retinal fundus image through step A as a sample optic disk region image, and perform ground-truth calibration on the sample optic disk region image to obtain the sample optic disk ground-truth calibration image and the sample optic cup ground-truth calibration image; with the sample optic disk region image as input and the sample optic disk and optic cup ground-truth calibration images as output, train the optic disk and optic cup segmentation network.
The sample optic disk region image is input into the currently built optic disk and optic cup segmentation network to predict the optic disk segmentation mask image and the optic cup segmentation mask image; the prediction error is computed against the sample optic disk and optic cup ground-truth calibration images obtained by ground-truth calibration, and the network weights are updated by back-propagation and gradient descent. The network with updated weights then predicts the optic disk and optic cup segmentation mask images again, the sum of the optic disk loss and the optic cup loss is computed, and the weights are updated once more; this process is iterated to train the segmentation network parameters until the prediction results no longer change significantly, yielding the optic disk and optic cup segmentation network.
Further, the invention accelerates the training of the segmentation network parameters with a GPU, greatly reducing the training time. The optimizer used for training is Adam, which makes the training process more efficient, saves memory, and handles sparse gradients and noise problems effectively. The initial learning rate is 0.01, and the learning rate is dropped tenfold whenever the loss function stops decreasing, so that the network converges better during training, which benefits the performance of the trained network. A training sketch follows.
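A short Keras training sketch under the stated settings (Adam, initial learning rate 0.01, tenfold learning-rate drop on plateau, batch size 32 from embodiment 2). Here build_disk_cup_net and disk_cup_loss refer loosely to the earlier sketches, and x_roi, y_masks, the epoch count, and the patience are hypothetical placeholders:

```python
import tensorflow as tf

model = build_disk_cup_net()                        # architecture sketch above (hypothetical)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),  # Adam, initial lr 0.01
    loss=disk_cup_loss,                             # disk loss + cup loss, as in formula (3)
)
drop_lr = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="loss", factor=0.1, patience=3)         # tenfold drop when loss stops decreasing
model.fit(x_roi, y_masks, batch_size=32, epochs=100, callbacks=[drop_lr])
```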
Step C: obtain the optic disk region image of the retinal fundus image under test through step A, then input it into the optic disk and optic cup segmentation network to output the optic disk segmentation mask image and the optic cup segmentation mask image under test.
Step D: calculate the cup-to-disc ratio of the retinal fundus image from the optic disk segmentation mask image and the optic cup segmentation mask image under test:
Step D1: apply morphological dilation and erosion to the optic disk and optic cup segmentation mask images under test obtained in step C [their fusion is shown in Fig. 6(b), where the central circular part represents the optic cup structure and the annular part the optic disk structure] to obtain the preprocessed optic disk and optic cup segmentation mask images under test;
Step D2: extract the optic disk edge and the optic cup edge from the preprocessed optic disk and optic cup segmentation mask images with the Canny algorithm, then fit ellipses to the extracted optic disk and optic cup edges by the least-squares method, taking the fitted ellipses as the final optic disk and optic cup segmentation mask image [Fig. 6(c)]; the two threshold parameters of the Canny algorithm are set to 10 and 150 respectively;
Step D3: from the final optic disk and optic cup segmentation mask image obtained in step D2, compute the vertical extents of the optic disk and the optic cup; the maximum vertical extent of the optic disk is the vertical disc diameter VDD, and the maximum vertical extent of the optic cup is the vertical cup diameter VCD; the cup-to-disc ratio CDR is computed from VDD and VCD:
CDR = VCD / VDD    (4)
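A small NumPy reading of step D3, measuring each mask's maximum per-column vertical extent (valid for the filled elliptical masks produced by step D2):

```python
import numpy as np

def cup_to_disc_ratio(disk_mask, cup_mask):
    """Step D3: CDR = VCD / VDD from the fitted binary masks."""
    def vertical_diameter(mask):
        return int((mask > 0).sum(axis=0).max())   # largest vertical extent over columns
    vdd = vertical_diameter(disk_mask)             # vertical disc diameter VDD
    vcd = vertical_diameter(cup_mask)              # vertical cup diameter VCD
    return vcd / vdd
```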
The cup-to-disc ratio finally obtained for embodiment 1 is 0.59.
Embodiment 2:
The cup-to-disc ratio is extracted from an original color retinal fundus image [Fig. 7(a)] of size 2048 × 3047. The first step is optic disk localization. The original color retinal fundus image is first converted to a gray-level image; the bottom-hat transform is applied to the gray-level image to obtain the bottom-hat image [Fig. 7(b)], and likewise the top-hat transform is applied to the gray-level fundus image to obtain the top-hat image [Fig. 7(c)]. The obtained top-hat image, bottom-hat image, and original gray-level fundus image are then combined to compute the gray-level enhanced fundus image [Fig. 7(d)] according to formula (1). Next the brightest region is extracted from the gray-level enhanced fundus image: the histogram of the enhanced fundus image is computed, the brightest pixels accounting for 6.5% of the total pixel count are selected by iterative accumulation, and all remaining pixels are set to 0, giving the brightest region of the fundus image, i.e. the brightest-pixel-region fundus image [Fig. 7(e)].
The vessel segmentation image is then extracted: using the top-hat image [Fig. 7(c)] and the bottom-hat image [Fig. 7(b)] obtained in the previous step, the vessel segmentation image [Fig. 7(f)] is computed according to formula (2). The gray-level enhanced fundus image and the vessel segmentation image are then fused to obtain the fused image [Fig. 7(g)]. Finally, sliding-window confidences are computed for the fused image, the vessel segmentation image, and the brightest-pixel-region fundus image; all sliding-window scores obtained from the three maps are max-normalized, the normalized confidences at corresponding sliding-window positions in the three maps are summed and averaged, and the window position with the highest total score corresponds to the optic disk position; the optic disk localization result is shown in Fig. 7(h).
The second step segments the optic disk and optic cup with the convolutional neural network. The region of interest, i.e. the optic disk region image, is cropped at the optic disk position localized in the first step, as shown in Fig. 8(a), and serves as the input of the optic disk and optic cup segmentation network. The network is built as in embodiment 1, with the final loss function of the network given by formula (3). The built network must first be trained; GPU acceleration is suggested for training, with a batch size of 32, the Adam optimizer, and an initial learning rate of 0.01, the learning rate being dropped tenfold when the loss function stops decreasing. The trained network predicts on the input region of interest, producing the segmentation result shown in Fig. 8(b); isolated points are first removed from the segmentation result by erosion and dilation, and the final optic disk and optic cup segmentation result is then obtained by ellipse fitting, as shown in Fig. 8(c).
The third step is the calculation of the cup-to-disc ratio. From the optic disk and optic cup segmentation result obtained in the second step, the vertical optic disk and optic cup diameters are computed; the cup-to-disc ratio of embodiment 2, calculated by formula (4), is 0.79.
Embodiment 3:
The cup-to-disc ratio is extracted from an original color retinal fundus image [Fig. 9(a)] of size 2048 × 3047. The first step is optic disk localization. The original color retinal fundus image is first converted to a gray-level image; the bottom-hat transform is applied to the gray-level image to obtain the bottom-hat image [Fig. 9(b)], and likewise the top-hat transform is applied to the gray-level fundus image to obtain the top-hat image [Fig. 9(c)]. The obtained top-hat image, bottom-hat image, and original gray-level fundus image are then combined to compute the gray-level enhanced fundus image [Fig. 9(d)] according to formula (1). Next the brightest region is extracted from the gray-level enhanced fundus image: the histogram of the enhanced fundus image is computed, the brightest pixels accounting for 6.5% of the total pixel count are selected by iterative accumulation, and all remaining pixels are set to 0, giving the brightest region of the fundus image, i.e. the brightest-pixel-region fundus image [Fig. 9(e)].
The vessel segmentation image is then extracted: using the top-hat image [Fig. 9(c)] and the bottom-hat image [Fig. 9(b)] obtained in the previous step, the vessel segmentation image [Fig. 9(f)] is computed according to formula (2). The gray-level enhanced fundus image and the vessel segmentation image are then fused to obtain the fused image [Fig. 9(g)]. Finally, sliding-window confidences are computed for the fused image, the vessel segmentation image, and the brightest-pixel-region fundus image; all sliding-window scores obtained from the three maps are max-normalized, the normalized confidences at corresponding sliding-window positions in the three maps are summed and averaged, and the window position with the highest total score corresponds to the optic disk position; the optic disk localization result is shown in Fig. 9(h).
The second step segments the optic disk and optic cup with the convolutional neural network. The region of interest, i.e. the optic disk region image, is cropped at the optic disk position localized in the first step, as shown in Fig. 10(a), and serves as the input of the optic disk and optic cup segmentation network. The trained network predicts on the input region of interest, producing the segmentation result shown in Fig. 10(b); isolated points are first removed from the segmentation result by erosion and dilation, and the final optic disk and optic cup segmentation result is then obtained by ellipse fitting, as shown in Fig. 10(c).
The third step is the calculation of the cup-to-disc ratio. From the optic disk and optic cup segmentation result obtained in the second step, the vertical optic disk and optic cup diameters are computed; the cup-to-disc ratio of embodiment 3, calculated by formula (4), is 0.6.
It should be noted that what is disclosed above is only a specific example of the invention; based on the ideas provided by the invention, those skilled in the art can conceive variations, all of which shall fall within the scope of protection of the invention.

Claims (10)

1. An automatic cup-to-disc ratio evaluation method for retinal fundus images, characterized by comprising the following steps:
Step A: extract an optic disk region image from the retinal fundus image;
Step B: build and train an optic disk and optic cup segmentation network based on a deep convolutional neural network:
Step B1: build an optic disk and optic cup segmentation network based on a deep convolutional neural network, the network comprising encoding layers and decoding layers; the first encoding layer includes an input layer for inputting the optic disk region image, and the last decoding layer includes an output layer for outputting the optic disk segmentation mask image and the optic cup segmentation mask image;
Step B2: obtain the optic disk region image of each sample retinal fundus image through step A as a sample optic disk region image, and perform ground-truth calibration on the sample optic disk region image to obtain the sample optic disk ground-truth calibration image and the sample optic cup ground-truth calibration image; with the sample optic disk region image as input and the sample optic disk and optic cup ground-truth calibration images as output, train the optic disk and optic cup segmentation network;
Step C: obtain the optic disk region image of the retinal fundus image under test through step A, then input this optic disk region image into the optic disk and optic cup segmentation network to output the optic disk segmentation mask image and the optic cup segmentation mask image under test;
Step D: calculate the cup-to-disc ratio of the retinal fundus image from the optic disk and optic cup segmentation mask images under test.
2. The automatic cup-to-disc ratio evaluation method for retinal fundus images according to claim 1, characterized in that the optic disc/optic cup segmentation network further comprises several skip layers, each arranged between a corresponding coding layer and decoding layer; the skip layers cascade the last feature map of each coding layer to the corresponding position of the decoding layer of corresponding size (see the sketch after claim 3).
3. The automatic cup-to-disc ratio evaluation method for retinal fundus images according to claim 1, characterized in that the optic disc/optic cup segmentation network further comprises several feature inclusion layers, each arranged between two adjacent coding layers; each feature inclusion layer down-samples the input of the previous coding layer by a factor of 2 in both height and width, cascades it with the pooled feature map of the previous coding layer, and feeds the result into the current coding layer, as sketched below.
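As a reading aid for claims 2 and 3, below is a minimal PyTorch sketch of one coding layer with a feature inclusion path and one decoding layer receiving a skip connection; the channel widths, activation, and bilinear resampling are illustrative assumptions, not specified by the claims:

import torch
import torch.nn as nn
import torch.nn.functional as F

class EncoderStage(nn.Module):
    # Claim 3: the stage input, down-sampled 2x in height and width,
    # is cascaded with the pooled feature map before the next stage.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                                  nn.ReLU(inplace=True))
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        feat = self.conv(x)    # last feature map, kept for the skip layer
        x_down = F.interpolate(x, scale_factor=0.5, mode='bilinear',
                               align_corners=False)
        # Next stage receives out_ch + in_ch channels.
        return feat, torch.cat([self.pool(feat), x_down], dim=1)

class DecoderStage(nn.Module):
    # Claim 2: the encoder feature map of matching size is cascaded
    # (concatenated) into the decoding layer at the corresponding position.
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(in_ch + skip_ch, out_ch, 3, padding=1),
                                  nn.ReLU(inplace=True))

    def forward(self, x, skip_feat):
        x = F.interpolate(x, scale_factor=2, mode='bilinear', align_corners=False)
        return self.conv(torch.cat([x, skip_feat], dim=1))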
4. The automatic cup-to-disc ratio evaluation method for retinal fundus images according to claim 1, characterized in that the loss function L(p, g) used to train the optic disc/optic cup segmentation network is a fusion of the cross-entropy loss function L_cross-entropy and the dice loss function L_dice (one such fusion is sketched in code after this claim), where:
p_i^c denotes the probability that the i-th pixel of the sample optic disc segmentation mask image or the sample optic cup segmentation mask image belongs to class c; w_c denotes the class-c weight balancing the cross-entropy loss and the dice loss between the two classes; g_i^c denotes the probability that the i-th pixel of the sample optic disc ground-truth image or the sample optic cup ground-truth image belongs to class c; N denotes the number of pixels in the output map of the optic disc/optic cup segmentation network; and K denotes the number of pixel classes in that output map.
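The sketch below shows one standard fusion of class-weighted cross-entropy and dice loss consistent with the symbol definitions above; it is a plausible reconstruction under that assumption, not necessarily the patent's exact weighting:

import torch

def fused_ce_dice_loss(probs, target, class_weights, eps=1e-6):
    # probs:  (N, K) softmax probabilities p_i^c over the N output pixels.
    # target: (N, K) one-hot ground truth g_i^c.
    # class_weights: (K,) tensor of per-class weights w_c.
    ce = -(class_weights * target * torch.log(probs + eps)).sum(dim=1).mean()
    inter = (probs * target).sum(dim=0)           # per-class soft intersection
    dice = 1.0 - (2.0 * inter + eps) / (probs.sum(0) + target.sum(0) + eps)
    return ce + (class_weights * dice).sum()      # cross-entropy + dice fusion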
5. The automatic cup-to-disc ratio evaluation method for retinal fundus images according to claim 1, characterized in that the specific processing procedure of Step A is as follows:
Step A1: processing the original color retinal fundus image with morphological operations to obtain the grayscale retinal fundus enhanced image F';
Step A2: extracting the brightest area from the grayscale retinal fundus enhanced image F' to obtain the brightest-pixel-region fundus image f(i);
Step A3: extracting the vessel segmentation image f(bv) from the original retinal fundus image;
Step A4: locating the optic disc region based on sliding-window confidence and extracting the optic disc region image:
Step A41: fusing the grayscale retinal fundus enhanced image F' obtained in Step A1 with the vessel segmentation image f(bv) obtained in Step A3 to obtain the fused image f(ibv);
Step A42: computing sliding-window confidences for the brightest-pixel-region fundus image f(i), the vessel segmentation image f(bv), and the fused image f(ibv) respectively:
according to a preset optic disc radius r for retinal fundus images, scanning f(i), f(bv), and f(ibv) with a sliding window of size 3r × 3r and step r/2, obtaining the sliding-window confidence score maps S(i), S(bv), and S(ibv) respectively, wherein the sum of the gray values of all pixels inside a window is taken as the confidence score of the current window;
normalizing the sliding-window confidence score maps S(i), S(bv), and S(ibv) respectively, and fusing the normalized score maps S(i)', S(bv)', and S(ibv)' to obtain the sliding-window confidence fusion score map S of the retinal fundus image;
selecting the sliding-window position with the highest fusion score as the optic disc position, and cropping the optic disc region from the retinal fundus image to obtain the optic disc region image (the window scoring is sketched in code after this claim).
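A minimal sketch of the per-image window scoring of Step A42, assuming a grayscale image as a NumPy array; the integral-image shortcut is an implementation convenience, not part of the claim:

import numpy as np

def window_score_map(img, r):
    # Scan a 3r x 3r window with step r/2; a window's confidence score is
    # the sum of the gray values of all pixels inside it.
    win, step = 3 * r, max(r // 2, 1)
    # Integral image: each window sum becomes an O(1) lookup.
    ii = np.pad(img.astype(np.float64), ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    h, w = img.shape
    scores = [[ii[y + win, x + win] - ii[y, x + win] - ii[y + win, x] + ii[y, x]
               for x in range(0, w - win + 1, step)]
              for y in range(0, h - win + 1, step)]
    return np.asarray(scores)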
6. The automatic cup-to-disc ratio evaluation method for retinal fundus images according to claim 5, characterized in that the specific processing procedure of Step A1 is as follows (sketched in code after this claim):
performing the top-hat transform and the bottom-hat transform on the retinal fundus image respectively, obtaining the top-hat transformed image G_T and the bottom-hat transformed image G_B of the retinal fundus image:
G_T = F - F∘B, G_B = F•B - F;
wherein F(x, y) denotes the gray-level image of the retinal fundus image, B(u, v) denotes the structuring element, ∘ and • denote the opening and closing operations respectively, and ⊖ and ⊕ denote the erosion and dilation operations respectively;
calculating the grayscale retinal fundus enhanced image F' from the gray-level image F, the top-hat transformed image G_T, and the bottom-hat transformed image G_B:
F' = F + G_T - G_B.
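A minimal sketch of Step A1 with OpenCV, assuming a grayscale uint8 input; the elliptical structuring element and its size are illustrative assumptions:

import cv2

def morphological_enhance(gray, ksize=15):
    # F' = F + G_T - G_B: add bright detail (top-hat) and subtract
    # dark detail (bottom-hat), with saturating uint8 arithmetic.
    B = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    G_T = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, B)    # F - (F opened by B)
    G_B = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, B)  # (F closed by B) - F
    return cv2.add(cv2.subtract(gray, G_B), G_T)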
7. The automatic cup-to-disc ratio evaluation method for retinal fundus images according to claim 5, characterized in that the specific processing procedure of Step A2 is as follows (sketched in code after this claim):
Step A21: computing the histogram of the retinal fundus image;
Step A22: accumulating pixel counts starting from the brightest gray level and moving toward darker levels;
Step A23: stopping the accumulation when the accumulated pixel count exceeds a preset proportion of the total number of pixels in the whole image;
Step A24: keeping only the pixels of the brightest area in the retinal fundus image, obtaining the brightest-pixel-region fundus image f(i).
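A minimal sketch of Steps A21 to A24, assuming a grayscale uint8 image; the 2% value is an illustrative choice for the preset proportion, which the claim leaves as a parameter:

import numpy as np

def brightest_region(gray, ratio=0.02):
    hist = np.bincount(gray.ravel(), minlength=256)   # Step A21: histogram
    total, acc, level = gray.size, 0, 255
    while level > 0 and acc < ratio * total:          # Steps A22-A23: accumulate
        acc += hist[level]                            # from the brightest level down
        level -= 1
    return np.where(gray > level, gray, 0).astype(gray.dtype)  # Step A24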
8. The automatic cup-to-disc ratio evaluation method for retinal fundus images according to claim 5, characterized in that the specific processing procedure of Step A3 is as follows (sketched in code after this claim):
Step A31: extracting the green-channel values from the original retinal fundus image to obtain the green-channel image F_G of the retinal fundus image;
Step A32: enhancing the contrast of the green-channel image F_G with the contrast-limited adaptive histogram equalization algorithm, obtaining the enhanced green-channel fundus image F'_G;
Step A33: performing the bottom-hat transform and the top-hat transform on the enhanced green-channel fundus image F'_G respectively and computing their difference, obtaining the noisy vessel image F_vessel:
F_vessel = G_B(F'_G) - G_T(F'_G);
wherein G_B(F'_G) and G_T(F'_G) denote the images obtained by applying the bottom-hat transform and the top-hat transform to the enhanced green-channel fundus image F'_G respectively;
Step A34: processing the noisy vessel image F_vessel with median filtering to obtain the final vessel segmentation image f(bv).
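A minimal sketch of Steps A31 to A34 with OpenCV, assuming a BGR color input; the CLAHE parameters, structuring-element size, and median kernel are illustrative choices:

import cv2

def vessel_segmentation(bgr, ksize=11, median_ksize=5):
    F_G = bgr[:, :, 1]                                   # Step A31: green channel
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    F_G_enh = clahe.apply(F_G)                           # Step A32: CLAHE contrast
    B = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
    bottom_hat = cv2.morphologyEx(F_G_enh, cv2.MORPH_BLACKHAT, B)
    top_hat = cv2.morphologyEx(F_G_enh, cv2.MORPH_TOPHAT, B)
    F_vessel = cv2.subtract(bottom_hat, top_hat)         # Step A33: G_B - G_T
    return cv2.medianBlur(F_vessel, median_ksize)        # Step A34: denoise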
9. The automatic cup-to-disc ratio evaluation method for retinal fundus images according to claim 1, characterized in that the specific processing procedure of Step D is as follows:
Step D1: performing morphological dilation and erosion on the optic disc segmentation mask image under test and the optic cup segmentation mask image under test obtained in Step C, obtaining the preprocessed optic disc segmentation mask image under test and the preprocessed optic cup segmentation mask image under test;
Step D2: extracting the optic disc edge and the optic cup edge from the preprocessed optic disc and optic cup segmentation mask images under test respectively, then performing ellipse fitting to obtain the final optic disc/optic cup segmentation mask image;
Step D3: based on the final optic disc/optic cup segmentation mask image obtained in Step D2, computing the vertical extents of the optic disc and of the optic cup, wherein the maximum vertical extent of the optic disc is the vertical disc diameter VDD and the maximum vertical extent of the optic cup is the vertical cup diameter VCD, and calculating the cup-to-disc ratio CDR from the vertical disc diameter VDD and the vertical cup diameter VCD as CDR = VCD / VDD.
10. The automatic cup-to-disc ratio evaluation method for retinal fundus images according to claim 9, characterized in that the ellipse fitting in Step D2 comprises: extracting the optic disc edge and the optic cup edge from the preprocessed optic disc and optic cup segmentation mask images under test using the Canny algorithm, and then performing least-squares ellipse fitting on the optic disc edge and the optic cup edge (a code sketch follows this claim).
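A minimal sketch of Steps D2 and D3 together with claim 10, assuming binary uint8 masks; cv2.fitEllipse performs a least-squares ellipse fit, and the Canny thresholds are illustrative values:

import cv2
import numpy as np

def vertical_diameter(mask):
    # Canny edge extraction followed by least-squares ellipse fitting;
    # returns the vertical extent of the fitted, possibly rotated, ellipse.
    edges = cv2.Canny(mask, 50, 150)
    pts = np.column_stack(np.nonzero(edges)[::-1]).astype(np.float32)  # (x, y)
    (_, _), (d1, d2), angle = cv2.fitEllipse(pts)
    rad = np.deg2rad(angle)
    return np.hypot(d1 * np.sin(rad), d2 * np.cos(rad))

def cup_disc_ratio(disc_mask, cup_mask):
    # CDR = vertical cup diameter VCD / vertical disc diameter VDD.
    return vertical_diameter(cup_mask) / vertical_diameter(disc_mask)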
CN201811099755.6A 2018-09-20 2018-09-20 A kind of retinal fundus images cup disc ratio automatic evaluation method Pending CN109829877A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811099755.6A CN109829877A (en) 2018-09-20 2018-09-20 A kind of retinal fundus images cup disc ratio automatic evaluation method

Publications (1)

Publication Number Publication Date
CN109829877A true CN109829877A (en) 2019-05-31

Family

ID=66858729

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811099755.6A Pending CN109829877A (en) 2018-09-20 2018-09-20 A kind of retinal fundus images cup disc ratio automatic evaluation method

Country Status (1)

Country Link
CN (1) CN109829877A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107784647A * 2017-09-29 2018-03-09 Huaqiao University Liver and liver lesion segmentation method and system based on multitask deep convolutional network
CN108520522A * 2017-12-31 2018-09-11 Nanjing University of Aeronautics and Astronautics Retinal fundus image segmentation method based on deep fully convolutional neural network
CN109658423A * 2018-12-07 2019-04-19 Central South University Automatic optic disc and optic cup segmentation method for color fundus images

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
FAUSTO MILLETARI, NASSIR NAVAB ET AL.: "V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation", arXiv *
OLAF RONNEBERGER ET AL.: "U-Net: Convolutional Networks for Biomedical Image Segmentation", arXiv *
SIMON JEGOU, MICHAL DROZDZAL ET AL.: "The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation", 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops *
TIAN LAN ET AL.: "RUN: Residual U-Net for Computer-Aided Detection of Pulmonary Nodules without Candidate Selection", arXiv *
WU HUI, CHEN ZAILIANG ET AL.: "Fast optic disc localization in fundus images based on confidence computation", Journal of Computer-Aided Design & Computer Graphics *
SUN XIAOHAN: Proceedings of the 17th National Conference on Optical Fiber Communication and the 18th Integrated Optics Academic Conference, 31 December 2017, Southeast University Press *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110223291A * 2019-06-20 2019-09-10 Nankai University Method for training a fundus lesion segmentation network based on a loss function
CN110223291B * 2019-06-20 2021-03-19 Nankai University Method for training a fundus lesion segmentation network based on a loss function
CN110298850A * 2019-07-02 2019-10-01 Beijing Baidu Netcom Science and Technology Co., Ltd. Fundus image segmentation method and device
WO2021053656A1 * 2019-09-19 2021-03-25 Artificial Learning Systems India Pvt Ltd System and method for deep network-based glaucoma prediction
CN112215797A * 2020-09-11 2021-01-12 Xiuyuan (Beijing) Technology Co., Ltd. MRI olfactory bulb volume detection method, computer device and computer readable storage medium
CN111862187B * 2020-09-21 2021-01-01 Ping An Technology (Shenzhen) Co., Ltd. Cup-to-disc ratio determination method, device, equipment and storage medium based on neural network
CN111862187A * 2020-09-21 2020-10-30 Ping An Technology (Shenzhen) Co., Ltd. Cup-to-disc ratio determination method, device, equipment and storage medium based on neural network
CN111986202A * 2020-10-26 2020-11-24 Ping An Technology (Shenzhen) Co., Ltd. Glaucoma auxiliary diagnosis device, method and storage medium
CN112001920B * 2020-10-28 2021-02-05 Beijing Zhizhen Internet Technology Co., Ltd. Fundus image recognition method, device and equipment
CN112288720A * 2020-10-29 2021-01-29 Suzhou Voxel Information Technology Co., Ltd. Deep-learning-based color fundus image glaucoma screening method and system

Similar Documents

Publication Publication Date Title
CN109829877A (en) A kind of retinal fundus images cup disc ratio automatic evaluation method
CN108021916B (en) Deep learning diabetic retinopathy classification method based on attention mechanism
CN106920227B (en) Retinal blood vessel segmentation method combining deep learning with conventional methods
CN103413120B (en) Tracking based on object globality and locality identification
CN107977932A (en) Face image super-resolution reconstruction method based on discriminable-attribute-constrained generative adversarial network
CN109448006A (en) U-shaped densely connected retinal blood vessel segmentation method with attention mechanism
CN109166126A (en) Method for segmenting lacquer cracks in ICGA images based on conditional generative adversarial network
CN109345538A (en) Retinal blood vessel segmentation method based on convolutional neural networks
CN101667289B (en) Retinal image segmentation method based on NSCT feature extraction and supervised classification
CN107437092A (en) Retinal OCT image classification algorithm based on three-dimensional convolutional neural network
Seoud et al. Automatic grading of diabetic retinopathy on a public database
CN110197493A (en) Eye fundus image blood vessel segmentation method
CN108389220A (en) Remote sensing video image motion target real-time intelligent cognitive method and its device
CN110458133A (en) Lightweight face detection method based on generative adversarial network
CN110428432A (en) Deep neural network algorithm for automatic segmentation of colon gland images
US9480925B2 (en) Image construction game
CN112330684A (en) Object segmentation method and device, computer equipment and storage medium
CN103985113B (en) Tongue image segmentation method
CN110276356A (en) Fundus image aneurysm recognition method based on R-CNN
CN108764342B (en) Semantic segmentation method for optic discs and optic cups in fundus image
CN106446805B (en) Optic cup segmentation method and system in fundus photographs
CN109903339A (en) A kind of video group personage's position finding and detection method based on multidimensional fusion feature
CN105513071B (en) A kind of topographic map symbols quality evaluating method
Pratama et al. Cholesterol Detection Based on Eyelid Recognition Using Convolutional Neural Network Method
CN110110782A (en) Retinal fundus images optic disk localization method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination