CN106920227B - Retinal blood vessel segmentation method combining deep learning with conventional methods - Google Patents

Retinal blood vessel segmentation method combining deep learning with conventional methods

Info

Publication number
CN106920227B
CN106920227B (application number CN201611228597.0A)
Authority
CN
China
Prior art keywords
layer
retinal
network
segmentation
image
Prior art date
Legal status
Active
Application number
CN201611228597.0A
Other languages
Chinese (zh)
Other versions
CN106920227A (en)
Inventor
蔡轶珩
高旭蓉
邱长炎
崔益泽
王雪艳
孔欣然
Current Assignee
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date
Filing date
Publication date
Application filed by Beijing University of Technology
Priority to CN201611228597.0A
Publication of CN106920227A
Application granted
Publication of CN106920227B
Legal status: Active (current)
Anticipated expiration


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic

Abstract

A retinal blood vessel segmentation method combining deep learning with conventional methods, relating to the fields of computer vision and pattern recognition. Both kinds of gray-level image are used as training samples for the network, and, to address the scarcity of retinal image data, corresponding data augmentation is performed, including elastic deformation and smoothing filtering, which broadens the applicability of the invention. By constructing an FCN-HNED retinal vessel segmentation depth network, the method realizes a largely autonomous learning process: the convolution features of the whole image are shared, feature redundancy is reduced, and the classes of multiple pixels can be recovered from abstract features. The CLAHE image and the Gaussian matched-filter image of a retinal vascular image are each fed to the network, and the resulting vessel segmentation maps are weighted-averaged to obtain a better and more complete retinal vessel segmentation probability map; this processing greatly improves the robustness and accuracy of vessel segmentation.

Description

Retinal blood vessel segmentation method combining deep learning with conventional methods
Technical field
The present invention relates to the fields of computer vision and pattern recognition, and in particular to a retinal blood vessel segmentation method that combines deep learning with conventional methods.
Background technique
Fundus photography images the retina so that abnormalities can be detected, and the observation of retinal vessels is particularly important. Diseases such as glaucoma, cataract and diabetes all cause lesions of the retinal fundus vessels. The number of patients with retinopathy increases year by year; without timely treatment, long-term sufferers endure great pain and may even go blind. At present, however, retinopathy is diagnosed manually by specialists: the specialist first labels the vessels in the patient's fundus image by hand and then measures the required parameters such as vessel diameter and bifurcation angle. Manually marking the vessels alone takes roughly two hours, so the diagnostic process is very time-consuming. To save manpower and material resources, automated vessel extraction methods are therefore particularly important: they not only relieve the burden on specialists but also effectively alleviate the shortage of specialists in remote areas. In view of the importance of retinal vessel segmentation, scholars at home and abroad have carried out much research, which can broadly be divided into unsupervised and supervised methods.
Unsupervised methods extract the vessel target by certain rules and include matched filtering, morphological processing, vessel tracking and multi-scale analysis algorithms. Supervised learning, also called pixel-feature classification or machine learning, classifies each pixel as vessel or non-vessel through training and is broadly divided into two stages: feature extraction and classification. The feature extraction stage typically includes Gabor filtering, Gaussian matched filtering and morphological enhancement; classifiers used in the classification stage include naive Bayes, SVM and others. Such methods, however, judge each pixel without adequately considering the relationship between a pixel and its surrounding neighborhood. CNNs were therefore introduced: they judge whether the central pixel of an image patch is vessel or non-vessel from the patch's features, and their multi-layer structure learns features automatically, so the abstract features benefit the classification of the central pixel. But classifying each pixel in isolation rarely involves global information, so classification fails where there are local lesions; moreover, each image contains at least hundreds of thousands of pixels, and judging them one by one incurs a large storage overhead and very low computational efficiency.
Summary of the invention
To address the deficiencies of existing algorithms, the invention proposes a retinal vessel segmentation method that combines deep learning with conventional methods. First, targeted preprocessing is performed according to the characteristics of retinal vessels: CLAHE (contrast-limited adaptive histogram equalization) is applied so that the retinal vessels have a high contrast with the background, and the conventional Gaussian matched filtering enhances the fine vessels of the retina well; the invention proposes to use both kinds of gray-level image as training samples for the network. On this basis, to address the scarcity of retinal image data, corresponding data augmentation is performed, including elastic deformation and smoothing filtering. This not only increases the amount of data, which benefits the learning and training of the deep network, but more importantly simulates retinal images under various conditions, so that the invention can still produce good retinal vessel segmentation maps for them, broadening its applicability.
Second, the invention constructs an FCN-HNED retinal vessel segmentation depth network, in which the vessel probability map obtained at the end of the FCN (Fully Convolutional Network) is well fused with the vessel probability maps obtained from the shallow-layer information by HNED (Holistically-Nested Edge Detection), yielding the required retinal vessel segmentation map. The network realizes a largely autonomous learning process: it shares the convolution features of the whole image, reduces feature redundancy, and recovers the classes of multiple pixels from abstract features, realizing an end-to-end, pixel-to-pixel retinal vessel segmentation method with global input and global output that is both simple and effective. In retinal vessel detection, the CLAHE image and the Gaussian matched-filter image of a retinal vascular image are each fed to the network, and the resulting vessel segmentation maps are weighted-averaged to obtain a better and more complete retinal vessel segmentation probability map; this processing greatly improves the robustness and accuracy of vessel segmentation.
The invention adopts the following technical scheme:
1. Preprocessing
1) The green channel, which has relatively high contrast among the three RGB channels of the color retinal image, is extracted. Because of shooting angle and similar issues, the brightness of the collected retinal fundus images is often non-uniform, or lesion regions are too bright or too dark and so lack contrast and are hard to distinguish from the background; a normalization step is therefore applied. CLAHE is then applied to the normalized retinal image to improve the quality of the fundus image and balance its brightness, making it more suitable for subsequent vessel extraction.
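The preprocessing chain described above can be sketched as follows. This is a minimal illustration using OpenCV: a min-max intensity normalization stands in for the (unspecified) normalization step, and the CLAHE clip limit and tile grid size are assumed values, since the patent does not state them.

```python
import cv2
import numpy as np

def preprocess_fundus(bgr_image):
    """Green-channel extraction, normalization and CLAHE, as described above.

    The clip limit and tile grid size are illustrative choices; the patent
    does not fix these parameters.
    """
    green = bgr_image[:, :, 1]                      # OpenCV loads images as BGR
    # Min-max normalization to [0, 255] to compensate for uneven illumination
    norm = cv2.normalize(green, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Contrast-limited adaptive histogram equalization
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(norm)

# Usage: enhanced = preprocess_fundus(cv2.imread("fundus.png"))
```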
The CLAHE-processed retinal image greatly enhances the contrast between vessels and background while preserving the intrinsic character of the retinal vessels. However, the fine vessels remain very similar to the background and cannot be separated well in the subsequent deep learning stage. In view of this, exploiting the fact that the gray-level cross-section of a vessel follows a Gaussian profile, the invention applies Gaussian matched filtering to the CLAHE-processed retinal image so that the fine vessels are brought out to a great extent. Since vessels run in arbitrary directions, Gaussian kernel templates in 12 different directions are used to match-filter the retinal image, and the maximum response is taken as the response of each pixel. The two-dimensional Gaussian matched-filter kernel k(x, y) may be expressed in the standard matched-filter form

k(x, y) = -exp(-x² / (2σ²)),  |y| ≤ L/2        (1)

where σ controls the width of the Gaussian profile and L is the length of the retinal vessel segment intercepted along the y-axis. The width of the filter window, i.e. the value range of the kernel variable x, is chosen as [-3σ, 3σ], and a small value σ = 0.5 is chosen so that the fine vessels are greatly enhanced.
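A minimal sketch of the 12-direction Gaussian matched filtering is given below, using the standard matched-filter kernel form. σ = 0.5 and the [-3σ, 3σ] window follow the text; the vessel segment length L = 9 and the zero-mean normalization of the kernel are illustrative assumptions.

```python
import cv2
import numpy as np

def matched_filter_kernel(sigma, length, angle_deg):
    """2-D Gaussian matched-filter kernel rotated by angle_deg degrees.

    Profile: k(x, y) = -exp(-x^2 / (2*sigma^2)) for |x| <= 3*sigma, |y| <= L/2,
    made zero-mean over its support so a flat background gives no response.
    """
    half = max(int(np.ceil(3 * sigma)), length // 2)
    u = np.arange(-half, half + 1)
    X, Y = np.meshgrid(u, u)
    theta = np.deg2rad(angle_deg)
    # Rotate the coordinate frame instead of resampling a rotated kernel
    xr = X * np.cos(theta) + Y * np.sin(theta)
    yr = -X * np.sin(theta) + Y * np.cos(theta)
    support = (np.abs(xr) <= 3 * sigma) & (np.abs(yr) <= length / 2)
    kernel = np.where(support, -np.exp(-xr ** 2 / (2 * sigma ** 2)), 0.0)
    kernel[support] -= kernel[support].mean()
    return kernel.astype(np.float32)

def matched_filter_response(image, sigma=0.5, length=9, n_angles=12):
    """Maximum response over 12 kernel orientations (15 degrees apart), per pixel.

    sigma = 0.5 and the window [-3*sigma, 3*sigma] follow the text; the segment
    length L = 9 is an assumption.
    """
    img = image.astype(np.float32)
    responses = [cv2.filter2D(img, -1, matched_filter_kernel(sigma, length, 180.0 * i / n_angles))
                 for i in range(n_angles)]
    return np.max(np.stack(responses, axis=0), axis=0)
```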
In order to fully take into account the overall characteristics of the retinal image and the characteristics of its fine vessels, both the CLAHE-processed retinal vessel image and the Gaussian matched-filter image are used as training samples, which greatly improves the segmentation performance of the network.
2. Data augmentation and construction of training samples
Since training a deep network requires a large amount of data, the existing retinal images alone are far from sufficient for training. The training data therefore need to be augmented in different ways to increase the data volume and improve training and detection. The augmentation modes are as follows (a code sketch of the pipeline is given after this list):
1) The preprocessed images are translated by 20 pixels up, down, left and right, so that the network learns translation invariance.
2) The images obtained in 1) are rotated by 45°, 90°, 125° and 180° respectively, and the maximal inner rectangle is cropped from each. This transformation not only improves the rotational robustness of the training data but also enlarges the data to 5 times the original amount.
3) Ordinary data augmentation never considers the blurring that may appear in retinal images. The invention, however, takes into account that in various situations, for example camera shake or an accidental movement of the patient, parts of the retinal image may be blurred to some degree. Therefore, 25% of the image set obtained in 2) is selected and blurred with 3 × 3 and 5 × 5 median filters respectively, so that the network is applicable to retinal images with various degrees of blur.
4) Previous retinal image augmentation commonly uses only translation, scaling, rotation and the like, far from covering the various situations retinal images can present. In view of this, and considering the varying directions and shapes of retinal vessels, 25% of the image set obtained in 3) is subjected to random elastic deformation. This augmentation mode is of great significance for retinal vessel segmentation: it helps the network learn the complicated retinal vessels running in all directions and benefits the segmentation accuracy in practical applications.
5) Since an FCN accepts images of any size, the images obtained in 4) are scaled to 50% and 75%, further augmenting the data.
Naturally, the expert standard maps (ground truth) of the retinal vessel segmentation are processed in the same way so that they correspond one-to-one with the samples. Of the prepared training sample data, 3/4 is used as the training set and 1/4 as the validation set.
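The augmentation pipeline of steps 1)-5) can be sketched roughly as below. The elastic-deformation strength is an assumed value, the maximal-rectangle crop after rotation is omitted, and the chaining of the steps is simplified relative to the strictly sequential description above.

```python
import random
import cv2
import numpy as np
from scipy.ndimage import map_coordinates, gaussian_filter

def translate(img, dx, dy):
    """Shift by (dx, dy) pixels (the text uses +/-20 pixel shifts)."""
    m = np.float32([[1, 0, dx], [0, 1, dy]])
    return cv2.warpAffine(img, m, (img.shape[1], img.shape[0]))

def rotate(img, angle):
    """Rotate about the center; cropping the maximal inner rectangle is omitted here."""
    h, w = img.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(img, m, (w, h))

def elastic_deform(img, alpha=34, sigma=4):
    """Random elastic deformation (alpha and sigma are illustrative values)."""
    h, w = img.shape[:2]
    dx = gaussian_filter(np.random.uniform(-1, 1, (h, w)), sigma) * alpha
    dy = gaussian_filter(np.random.uniform(-1, 1, (h, w)), sigma) * alpha
    y, x = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.array([y + dy, x + dx])
    return map_coordinates(img, coords, order=1, mode="reflect")

def augment(images):
    """Translations, rotations, median blur on 25%, elastic deformation on 25%,
    and 50%/75% scaling, applied to grayscale images."""
    out = []
    for img in images:
        out.append(img)
        out += [translate(img, dx, dy) for dx, dy in [(20, 0), (-20, 0), (0, 20), (0, -20)]]
        out += [rotate(img, a) for a in (45, 90, 125, 180)]
    blurred = [cv2.medianBlur(img, k) for img in random.sample(out, len(out) // 4)
               for k in (3, 5)]
    deformed = [elastic_deform(img) for img in random.sample(out, len(out) // 4)]
    scaled = [cv2.resize(img, None, fx=s, fy=s) for img in out for s in (0.5, 0.75)]
    return out + blurred + deformed + scaled
```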
3. FCN-HNED network construction
FCN network: a typical FCN is mainly composed of five parts: an input layer, convolutional layers, down-sampling layers, up-sampling layers (deconvolution layers) and an output layer. The network constructed in the invention is as follows:
input layer; two convolutional layers (C1, C2); the first down-sampling layer (pool1); two convolutional layers (C3, C4); the second down-sampling layer (pool2); two convolutional layers (C5, C6); the third down-sampling layer (pool3); two convolutional layers (C7, C8); the fourth down-sampling layer (pool4); two convolutional layers (C9, C10); the first up-sampling layer (U1); two convolutional layers (C11, C12); the second up-sampling layer (U2); two convolutional layers (C13, C14); the third up-sampling layer (U3); two convolutional layers (C15, C16); the fourth up-sampling layer (U4); two convolutional layers (C17, C18); and the target layer (output layer). Together they form a front-back symmetric U-shaped deep network architecture.
Because the low-level features of the FCN have high resolution while the high-level information carries stronger semantics, the network is robust for classifying vessels in regions such as lesions of the retinal image; however, the FCN output of the same size as the input loses the detail of many small targets and local structures. The invention therefore treats the shallow-layer retinal vessel information in the manner of edge detection HNED (Holistically-Nested Edge Detection), learning rich multi-layer information representations under deep supervision and largely resolving the problem of blurred object edges. Specifically, a softmax classifier is added after each of layers C2, C4, C6 and C8, so that the hidden-layer information is trained, with the ground truth as label, to produce retinal vessel probability maps, referred to as side output 1, side output 2, side output 3 and side output 4. On this basis, the four side outputs are fused with the final output layer to form the FCN-HNED network structure; the shallow-layer information complements the output-layer information, yielding multi-scale, multi-level fused feature maps that are closer to the target sample. This contributes greatly to refining the segmented vessels, so that no special subsequent thinning step is needed to refine the retinal vessels.
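A compact PyTorch sketch of this FCN-HNED structure is given below, under several assumptions: single-channel sigmoid heads stand in for the softmax classifiers, the channel widths follow the 64-to-1024 progression described later in the embodiment, and bilinear interpolation is used for all up-sampling, as stated in the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def double_conv(cin, cout):
    """Two 3x3 convolutions (zero padding, stride 1), each followed by ReLU."""
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True),
    )

class FCNHNED(nn.Module):
    """U-shaped FCN with four side outputs (after C2, C4, C6, C8) and a fused output.

    Channel widths follow the 64 -> 1024 progression described in the text;
    single-channel sigmoid heads replace the softmax classifiers for brevity.
    """
    def __init__(self, in_ch=1):
        super().__init__()
        chs = [64, 128, 256, 512, 1024]
        self.enc = nn.ModuleList()
        c_prev = in_ch
        for c in chs:                              # C1..C10, with pool1..pool4 in between
            self.enc.append(double_conv(c_prev, c))
            c_prev = c
        self.dec = nn.ModuleList()
        for c in chs[-2::-1]:                      # C11..C18, with U1..U4 in between
            self.dec.append(double_conv(c_prev + c, c))
            c_prev = c
        # 1x1 heads: four side outputs from the encoder plus the final output layer
        self.side_heads = nn.ModuleList(nn.Conv2d(c, 1, 1) for c in chs[:4])
        self.out_head = nn.Conv2d(chs[0], 1, 1)
        self.fuse_w = nn.Parameter(torch.full((5,), 0.2))   # initial fusion weights 1/5

    def forward(self, x):
        size = x.shape[2:]
        skips, sides = [], []
        for i, block in enumerate(self.enc):
            x = block(x)
            if i < 4:
                sides.append(self.side_heads[i](x))
                skips.append(x)
                x = F.max_pool2d(x, 2)             # 2x2 max pooling, stride 2
        for block, skip in zip(self.dec, reversed(skips)):
            x = F.interpolate(x, size=skip.shape[2:], mode="bilinear", align_corners=False)
            x = block(torch.cat([x, skip], dim=1))
        outputs = [F.interpolate(s, size=size, mode="bilinear", align_corners=False)
                   for s in sides] + [self.out_head(x)]
        fused = torch.sigmoid(sum(w * o for w, o in zip(self.fuse_w, outputs)))
        return fused, [torch.sigmoid(o) for o in outputs]
```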
All convolutional layers of the invention produce feature maps of the same size by zero padding. The pooling layers shrink the feature maps and reduce the number of parameters, but that is not their only purpose: max-pooling reduces the offset of the estimated mean caused by convolutional-layer parameter errors and retains more texture information. The max-pooling layers of the invention use a sampling rate of 2, and up-sampling is performed by bilinear interpolation.
Throughout the model, all activation functions are ReLU except for the softmax classification layers, and the loss function is the cross-entropy.
Training: once the FCN-HNED network is constructed, the network can be trained, performing automatic feature extraction and learning on the images, with 128 images input each time, until the network converges.
Test: the CLAHE image and the Gaussian matched-filter image of the green-channel image of each retinal image are separately input to the trained network for testing, giving two fused retinal vessel segmentation maps (one for each input); these two maps are weighted-averaged to obtain the final retinal vessel segmentation probability map.
4. Post-processing
The retinal vessel probability map obtained in testing is binarized to obtain the segmentation map.
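Putting the test and post-processing steps together, a minimal sketch (reusing the FCNHNED class from the network sketch above) might look as follows. Equal weights for the two probability maps and a 0.5 binarization threshold are assumptions; the patent only states that a weighted average and a binarization are applied.

```python
import numpy as np
import torch

def segment_vessels(model, clahe_img, matched_img, w=0.5, threshold=0.5):
    """Average the probability maps from the two inputs and binarize.

    `model` is an FCNHNED instance (see the network sketch above); the inputs
    are single-channel uint8 images of the same size. Equal weights and the
    0.5 threshold are assumed values.
    """
    model.eval()
    with torch.no_grad():
        probs = []
        for img in (clahe_img, matched_img):
            x = torch.from_numpy(img).float().div(255.0)[None, None]   # shape 1x1xHxW
            fused, _ = model(x)                 # fused probability map from FCN-HNED
            probs.append(fused[0, 0].numpy())
    prob = w * probs[0] + (1.0 - w) * probs[1]  # weighted average of the two maps
    return (prob >= threshold).astype(np.uint8), prob
```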
Beneficial effect
1. The invention uses targeted data processing methods according to the particular characteristics of retinal vessels. The quality of the training data directly determines whether the trained model is reliable and whether the accuracy reaches the required level. Using blurring operations, elastic deformation and the like, the invention simulates well the various retinal data that may occur, and at the same time expands the data to a sufficient quantity, which helps avoid over-fitting during training and aids subsequent detection, thereby improving the retinal vessel segmentation accuracy.
2. The invention feeds the CLAHE-processed retinal image and the Gaussian matched-filtered image separately into the network for training, so that the properties of the retinal vessels are fully learned at every level of representation; moreover, the Gaussian matched-filter image compensates well for the CLAHE image's lack of clarity on fine vessels, greatly improving the vessel segmentation performance.
3. By constructing the deep learning network FCN-HNED, the invention can rapidly perform automatic feature extraction on retinal images: it extracts features from the retinal fundus image at different levels and learns the relationship between each pixel and its surrounding neighborhoods, expressing the retinal vessel map well through high-level features. It thus distinguishes the internal features of vessels and non-vessels well and realizes end-to-end, pixel-to-pixel vessel segmentation, which is many times more efficient than the traditional classification of single pixels.
4. The invention deeply fuses the four side outputs of the shallow-layer features with the final output of the FCN network, thereby achieving refined and robust vessel segmentation, so that the vessel segmentation map agrees well with the expert's manual segmentation. At the same time, retinal vessel segmentation is automated to a great extent, greatly reducing the consumption of manpower and material resources.
Detailed description of the invention
Fig. 1 is the overall flowchart of the invention;
Fig. 2 shows the gray-level distribution of a vessel cross-section: (a) a vessel segment, (b) its gray levels;
Fig. 3 shows the preprocessing effect: (a) original image, (b) image after CLAHE processing, (c) image after Gaussian matched filtering;
Fig. 4 is the FCN-HNED network structure;
Fig. 5 shows retinal vessel segmentation results: (a) original image, (b) retinal vessel segmentation map, (c) the first expert's manual segmentation map.
Specific embodiment
The invention is described in detail below with reference to the accompanying drawings:
The technical framework of the invention is shown in Fig. 1. The specific implementation steps are as follows:
1. Preprocessing
Each retinal fundus image, whether in the training set or the test set, is preprocessed in the same way.
1) The green channel, which has relatively high contrast among the three RGB channels of the color retinal image, is extracted. Because of shooting angle and similar issues, the brightness of the collected retinal fundus images is often non-uniform, or lesion regions are too bright or too dark and so lack contrast and are hard to distinguish from the background; a normalization step is therefore applied, and CLAHE is then applied to the normalized retinal image to improve the quality of the fundus image and balance its brightness, making it more suitable for subsequent vessel extraction.
The CLAHE-processed retinal image greatly enhances the contrast between vessels and background while preserving the intrinsic character of the retinal vessels; however, the fine vessels remain very similar to the background and cannot be separated well in the subsequent deep learning stage. In view of this, exploiting the Gaussian shape of the vessel cross-section gray profile, matched filtering is applied to the retinal image. As shown in Fig. 2, (a) is a vessel gray-level image and (b) shows the gray values of its cross-section; the cross-sections of even the fine vessels follow a Gaussian trend, so the invention applies Gaussian matched filtering to the CLAHE-processed retinal vessels. Since vessels run in arbitrary directions, Gaussian kernel templates in 12 different directions are used to match-filter the retinal image, and the maximum response is taken as the response of each pixel.
In order to fully take into account the overall characteristics of the retinal image and the characteristics of its fine vessels, both the CLAHE-processed retinal vessel image and the Gaussian matched-filter image are used as training samples, which greatly improves the segmentation performance of the network.
2. Data augmentation and construction of training samples
Since training a deep network requires a large amount of data, the existing retinal images alone are far from sufficient for training. The training data therefore need to be augmented in different ways to increase the data volume and improve training and detection. The augmentation modes are:
1) The preprocessed images are translated by 20 pixels up, down, left and right, so that the network learns translation invariance.
2) The images obtained in 1) are rotated by 45°, 90°, 125° and 180° respectively, and the maximal inner rectangle is cropped from each. This transformation not only improves the rotational robustness of the training data but also enlarges the data to 5 times the original amount.
3) Ordinary data augmentation seldom uses median filtering. The invention, however, considers that in various situations, for example camera shake or an accidental movement of the patient, parts of the retinal image may be blurred to some degree; therefore 25% of the images obtained in 2) are blurred with 3 × 3 and 5 × 5 median filters respectively, so that the network is applicable to retinal images with various degrees of blur.
4) Previous retinal image augmentation commonly uses only translation, scaling, rotation and the like, far from covering the various situations retinal images can present. In view of this, and considering the varying directions and shapes of retinal vessels, 25% of the images obtained in 3) are subjected to random elastic deformation. This augmentation mode is of great significance for retinal vessel segmentation: it helps the network learn the complicated retinal vessels running in all directions and benefits the segmentation accuracy in practical applications.
5) Since an FCN accepts images of any size, the images obtained in 4) are scaled to 50% and 75%, further augmenting the data.
Naturally, the expert standard maps (ground truth) of the retinal vessel segmentation are processed in the same way so that they correspond one-to-one with the samples. Of the prepared training sample data, 3/4 is used as the training set and 1/4 as the validation set.
3. FCN-HNED network construction, training and test process
FCN network: a typical FCN is mainly composed of five parts: an input layer, convolutional layers, down-sampling layers, up-sampling layers (deconvolution layers) and an output layer. The network constructed in the invention is: input layer; two convolutional layers (C1, C2); the first down-sampling layer (pool1); two convolutional layers (C3, C4); the second down-sampling layer (pool2); two convolutional layers (C5, C6); the third down-sampling layer (pool3); two convolutional layers (C7, C8); the fourth down-sampling layer (pool4); two convolutional layers (C9, C10); the first up-sampling layer (U1); two convolutional layers (C11, C12); the second up-sampling layer (U2); two convolutional layers (C13, C14); the third up-sampling layer (U3); two convolutional layers (C15, C16); the fourth up-sampling layer (U4); two convolutional layers (C17, C18); and the target layer (output layer). Together they form a front-back symmetric U-shaped deep network architecture.
The convolution operation is

f(X; W, b) = W ∗ X + b        (2)

where f(X; W, b) is the output feature map, X is the input feature map of the preceding layer, W and b are the convolution kernel and the bias, and ∗ denotes the convolution operation. Unlike a traditional CNN, the FCN replaces all final fully connected layers with convolutional layers; the successive convolution and down-sampling operations make the feature maps smaller and smaller, and to restore an output of the same size as the input image, the FCN uses up-sampling, in other words deconvolution.
All intermediate convolutional layers of the invention produce feature maps of the same size by zero padding. In the symmetric U-shaped network, two closely connected 3 × 3 convolution kernels are repeatedly applied with stride 1, and each convolutional layer is followed by a ReLU activation. The pooling layers shrink the feature maps and reduce the number of parameters, but that is not their only purpose: they also provide a degree of invariance to rotation, translation and so on. The structure uses 2 × 2 max-pooling layers with stride 2, which reduce the offset of the estimated mean caused by convolutional-layer parameter errors and retain more texture information. During each down-sampling the number of feature maps doubles, and during up-sampling the opposite holds. In addition, in the last layer a 1 × 1 convolution kernel maps the 64 feature maps to the standard output target for training.
Throughout the model, all activation functions are ReLU except for the softmax classification layers, and the loss function is the cross-entropy.
HNED structure: vessel segmentation is regarded as an edge detection problem, and a deeply supervised network is used to obtain four vessel probability maps from the shallow layers of the FCN. That is, a softmax classifier is added after each of C2, C4, C6 and C8, and by deeply supervising the network with the standard segmentation result as target, the hidden-layer information is expressed in the form of retinal vessel probability maps, referred to as side output 1, side output 2, side output 3 and side output 4, realizing the learning of multi-scale feature maps.
Because the low-level features of the FCN have high resolution while the high-level information carries stronger semantics, the network is robust for classifying vessels in regions such as lesions of the retinal image; however, the final output of the same size as the input loses the detail of many small targets and local structures. The invention therefore treats the shallow-layer retinal vessel information in the manner of edge detection HNED, learning rich multi-layer information representations under deep supervision and largely resolving the problem of blurred object edges. On this basis, the four side outputs are fused with the final output layer to form the FCN-HNED network structure, as shown in Fig. 4. If the input image size is 512 × 512, C1 and C2 use 64 3 × 3 filters to obtain 64 feature maps, and by zero-padding the original image the C1 and C2 feature maps keep the size 512 × 512; each down-sampling doubles the number of feature maps, and at the lowest point, C9 and C10, there are 1024 feature maps of size 32 × 32. The later convolutions are implemented like the earlier ones, and up-sampling is implemented by bilinear interpolation. The network structure fuses the vessel probability maps of the four side outputs from the shallow-layer information with the vessel probability map of the FCN output layer, and through training obtains feature maps that better resemble the target sample. This contributes greatly to refining the segmented vessels, so that no special subsequent thinning step is needed to refine the retinal vessels.
Fusion process: in order to directly use the side-output probability maps together with the output probability map produced after the FCN up-sampling, they are fused in the holistically-nested form

P_fuse = σ( Σ_{m=1..4} h_m · A_side^(m) + h · A_fcn )

where σ(·) denotes the sigmoid function, A_side^(m) denotes the m-th side output, A_fcn denotes the final FCN output, and h_m and h are the fusion weights of the four side outputs and the final FCN output respectively, all initialized to 1/5. The loss function of the weighted fusion is

L_fuse = Dist(Y, P_fuse)

where Y denotes the standard vessel segmentation map, i.e. the ground truth, and Dist(·) denotes the distance (degree of difference) between the fused probability map and the standard vessel segmentation map. The weights are adjusted through learning and gradually approach convergence; the loss function is minimized by SGD (stochastic gradient descent).
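Using the FCNHNED sketch given earlier, the weighted fusion and its cross-entropy distance to the ground truth can be sketched as below. Adding deeply supervised terms on each side output follows the HNED description above, and the SGD hyper-parameters shown in the comment are assumptions.

```python
import torch
import torch.nn.functional as F

def fcn_hned_loss(fused, outputs, target):
    """Cross-entropy distance between the fused probability map and the ground
    truth, plus deeply supervised terms on each side/final output.

    `fused` and `outputs` are the two return values of the FCNHNED sketch;
    `target` is the expert segmentation (ground truth) as a float tensor in
    {0, 1} with the same shape as `fused`.
    """
    loss = F.binary_cross_entropy(fused, target)
    for out in outputs:                      # side outputs 1-4 and the final output
        loss = loss + F.binary_cross_entropy(out, target)
    return loss

# The fusion weights (initialized to 1/5) and all layer parameters are updated
# together by stochastic gradient descent, e.g. (learning rate assumed):
# optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
```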
Training: once the FCN-HNED network is constructed, the network can be trained, performing automatic feature extraction and learning on the images. This is done in two steps. In the first step, 1280 visually simple, intuitive images are selected manually and used to train the model constructed here, with 128 images input per generation; after the model converges, its parameters are saved. Because the content of these 1280 images is relatively intuitive and simple and the semantic information of vessel versus non-vessel is clear, the model converges quickly. In the second step, the model is trained again on the full training set, but its parameters are initialized with the values obtained in the first step; this greatly reduces the training time and speeds up the convergence of the overall model.
Training: the training data of each image are passed layer by layer through the convolutional neural network to produce a fused vessel probability map as output, and the error between this probability map and the class of each pixel in the corresponding standard map is computed. Following the minimum-error principle, the error is back-propagated layer by layer to adjust every parameter of the constructed deep convolutional neural network. When the error decreases gradually and stabilizes, the network is considered to have converged, training ends, and the required detection model is generated.
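A rough sketch of the two-stage training procedure (warm-starting the full-set training from the weights learned on the 1280 simple images) is given below. The optimizer settings, epoch counts and the structure of the datasets are assumptions, and convergence is handled by a fixed epoch budget instead of the error-stabilization criterion described above.

```python
import torch
from torch.utils.data import DataLoader

def train_stage(model, dataset, epochs, lr, init_state=None):
    """One training stage: optionally warm-start from saved weights, then run
    SGD with batches of 128 images for a fixed number of epochs.

    `dataset` is assumed to yield (image, target) pairs shaped for FCNHNED;
    `fcn_hned_loss` is the loss sketch given above."""
    if init_state is not None:
        model.load_state_dict(init_state)
    loader = DataLoader(dataset, batch_size=128, shuffle=True)
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    model.train()
    for _ in range(epochs):
        for images, targets in loader:
            fused, outputs = model(images)
            loss = fcn_hned_loss(fused, outputs, targets)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model.state_dict()

# Stage 1: the 1280 visually simple images; Stage 2: the full training set,
# warm-started from the stage-1 weights (lr and epochs are illustrative).
# state = train_stage(model, simple_subset, epochs=50, lr=1e-2)
# train_stage(model, full_training_set, epochs=50, lr=1e-3, init_state=state)
```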
Test: the CLAHE image and the Gaussian matched-filter image of the green-channel image of each retinal fundus image are separately input to the trained network for testing, giving two fused retinal vessel segmentation maps; these two maps are weighted-averaged to obtain more vessel information and yield the final retinal vessel segmentation probability map.
4. Post-processing
The combined retinal vessel probability map is binarized to obtain the segmentation map, a binary map consistent with the expert segmentation. Parameter evaluation of the segmentation results yields an accuracy above 96%, as shown in Fig. 5.
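The accuracy figure reported above corresponds to a pixel-wise comparison against the expert annotation; the helper below is an illustrative sketch of that comparison, not the patent's evaluation code.

```python
import numpy as np

def pixel_accuracy(pred_binary, ground_truth):
    """Fraction of pixels on which the binary segmentation agrees with the
    expert annotation (both arrays in {0, 1})."""
    return float(np.mean(pred_binary == ground_truth))

# Example: acc = pixel_accuracy(binary_map, expert_map)   # reported as > 0.96
```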

Claims (1)

1. A retinal blood vessel segmentation method combining deep learning with conventional methods, characterized by comprising the following steps:
(1) Preprocessing
The green channel among the three RGB channels of the color retinal image is extracted and normalized; CLAHE processing is applied to the normalized retinal image; Gaussian matched filtering is applied to the CLAHE-processed retinal image; and both the CLAHE-processed retinal vessel image and the Gaussian matched-filter image are used as training samples;
(2) Data augmentation and construction of training samples
The data augmentation modes are:
1) the preprocessed images are translated by 20 pixels up, down, left and right, so that the network learns translation invariance;
2) the images obtained in 1) are rotated by 45°, 90°, 125° and 180° respectively, and the maximal inner rectangle is cropped from each;
3) 25% of the image set obtained in 2) is selected and blurred with 3 × 3 and 5 × 5 median filters respectively;
4) 25% of the image set obtained in 3) is subjected to random elastic deformation;
5) the images obtained in 4) are scaled to 50% and 75%, further augmenting the data;
the expert standard maps (ground truth) of the retinal vessel segmentation are processed in the same way so that they correspond one-to-one with the samples;
(3) Construction of the FCN-Holistically-nested edge detection network
The constructed network is:
an input layer; two convolutional layers C1, C2; a first down-sampling layer pool1; two convolutional layers C3, C4; a second down-sampling layer pool2; two convolutional layers C5, C6; a third down-sampling layer pool3; two convolutional layers C7, C8; a fourth down-sampling layer pool4; two convolutional layers C9, C10; a first up-sampling layer U1; two convolutional layers C11, C12; a second up-sampling layer U2; two convolutional layers C13, C14; a third up-sampling layer U3; two convolutional layers C15, C16; a fourth up-sampling layer U4; two convolutional layers C17, C18; and a target layer, i.e. the output layer; together forming a front-back symmetric U-shaped deep network architecture;
a softmax classifier is added after each of layers C2, C4, C6 and C8, so that the hidden-layer information is trained, with the ground truth as label, to produce retinal vessel probability maps, referred to as side output 1, side output 2, side output 3 and side output 4; the four side outputs are fused by weighting with the final output layer, thereby forming the FCN-Holistically-nested edge detection network structure;
training: once the FCN-Holistically-nested edge detection network is constructed, the network is trained, performing automatic feature extraction and learning on the images, with 128 images input each time, until the network converges;
test: the CLAHE image and the Gaussian matched-filter image of the green-channel image of each retinal image are separately input to the trained network for testing, giving two fused retinal vessel segmentation maps, which are weighted-averaged to obtain the final retinal vessel segmentation probability map;
(4) Post-processing
the retinal vessel probability map obtained in testing is binarized to obtain the segmentation map.
CN201611228597.0A 2016-12-27 2016-12-27 Retinal blood vessel segmentation method combining deep learning with conventional methods Active CN106920227B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611228597.0A CN106920227B (en) 2016-12-27 2016-12-27 Retinal blood vessel segmentation method combining deep learning with conventional methods

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611228597.0A CN106920227B (en) 2016-12-27 2016-12-27 Retinal blood vessel segmentation method combining deep learning with conventional methods

Publications (2)

Publication Number Publication Date
CN106920227A CN106920227A (en) 2017-07-04
CN106920227B true CN106920227B (en) 2019-06-07

Family

ID=59453388

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611228597.0A Active CN106920227B (en) 2016-12-27 2016-12-27 Retinal blood vessel segmentation method combining deep learning with conventional methods

Country Status (1)

Country Link
CN (1) CN106920227B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109426773A (en) * 2017-08-24 2019-03-05 浙江宇视科技有限公司 Road recognition method and device
CN108122236B (en) * 2017-12-18 2020-07-31 上海交通大学 Iterative fundus image blood vessel segmentation method based on distance modulation loss
CN110276763A (en) * 2018-03-15 2019-09-24 中南大学 Retinal vessel segmentation map generation method based on confidence and deep learning
CN108492302B (en) * 2018-03-26 2021-04-02 北京市商汤科技开发有限公司 Neural layer segmentation method and device, electronic device and storage medium
CN108665461B (en) * 2018-05-09 2019-03-12 电子科技大学 Breast ultrasound image segmentation method based on FCN and iterative acoustic-shadow correction
CN108830155A (en) * 2018-05-10 2018-11-16 北京红云智胜科技有限公司 Coronary artery segmentation and recognition method based on deep learning
CN110545373A (en) * 2018-05-28 2019-12-06 中兴通讯股份有限公司 Spatial environment sensing method and device
CN109285157A (en) * 2018-07-24 2019-01-29 深圳先进技术研究院 Left-ventricular myocardium segmentation method, device and computer-readable storage medium
CN109118495B (en) * 2018-08-01 2020-06-23 东软医疗系统股份有限公司 Retinal vessel segmentation method and device
CN109087302A (en) * 2018-08-06 2018-12-25 北京大恒普信医疗技术有限公司 Fundus image blood vessel segmentation method and apparatus
CN109523569B (en) * 2018-10-18 2020-01-31 中国科学院空间应用工程与技术中心 Optical remote sensing image segmentation method and device based on multi-granularity network fusion
CN109528155A (en) * 2018-11-19 2019-03-29 复旦大学附属眼耳鼻喉科医院 Intelligent screening system for open-angle glaucoma concurrent with high myopia and method for establishing the same
CN110120047A (en) * 2019-04-04 2019-08-13 平安科技(深圳)有限公司 Image segmentation model training method, image segmentation method, device, equipment and medium
CN109886982B (en) * 2019-04-24 2020-12-11 数坤(北京)网络科技有限公司 Blood vessel image segmentation method and device and computer storage equipment
CN110309849A (en) * 2019-05-10 2019-10-08 腾讯医疗健康(深圳)有限公司 Blood-vessel image processing method, device, equipment and storage medium
CN110222726A (en) * 2019-05-15 2019-09-10 北京字节跳动网络技术有限公司 Image processing method, device and electronic equipment
CN111091132B (en) * 2020-03-19 2021-01-15 腾讯科技(深圳)有限公司 Image recognition method and device based on artificial intelligence, computer equipment and medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105825509A (en) * 2016-03-17 2016-08-03 电子科技大学 Cerebral vessel segmentation method based on 3D convolutional neural network
CN106096654A (en) * 2016-06-13 2016-11-09 南京信息工程大学 Automatic cell atypia grading method based on deep learning and a combination strategy
CN106203327A (en) * 2016-07-08 2016-12-07 清华大学 Lung tumor identification system and method based on convolutional neural networks

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120178099A1 (en) * 2011-01-10 2012-07-12 Indian Association For The Cultivation Of Science Highly fluorescent carbon nanoparticles and methods of preparing the same

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105825509A (en) * 2016-03-17 2016-08-03 电子科技大学 Cerebral vessel segmentation method based on 3D convolutional neural network
CN106096654A (en) * 2016-06-13 2016-11-09 南京信息工程大学 Automatic cell atypia grading method based on deep learning and a combination strategy
CN106203327A (en) * 2016-07-08 2016-12-07 清华大学 Lung tumor identification system and method based on convolutional neural networks

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Holistically-Nested Edge Detection; Saining Xie et al.; 2015 IEEE International Conference on Computer Vision; 2016-02-18; full text

Also Published As

Publication number Publication date
CN106920227A (en) 2017-07-04

Similar Documents

Publication Publication Date Title
Jin et al. DUNet: A deformable network for retinal vessel segmentation
CN109493347B (en) Method and system for segmenting sparsely distributed objects in an image
US9779492B1 (en) Retinal image quality assessment, error identification and automatic quality correction
Van Grinsven et al. Fast convolutional neural network training using selective data sampling: Application to hemorrhage detection in color fundus images
CN108171698B (en) Method for automatically detecting human heart coronary calcified plaque
Rasti et al. Macular OCT classification using a multi-scale convolutional neural network ensemble
Lim et al. Integrated optic disc and cup segmentation with deep learning
WO2018000752A1 (en) Monocular image depth estimation method based on multi-scale cnn and continuous crf
Fraz et al. Application of morphological bit planes in retinal blood vessel extraction
Li et al. Vessels as 4-D curves: Global minimal 4-D paths to extract 3-D tubular surfaces and centerlines
Abràmoff et al. Retinal imaging and image analysis
Dutta et al. Classification of diabetic retinopathy images by using deep learning models
Quellec et al. Three-dimensional analysis of retinal layer texture: identification of fluid-filled regions in SD-OCT of the macula
CN108603922A (en) Automatic cardiac volume is divided
CN110475505A (en) Utilize the automatic segmentation of full convolutional network
CN104834898B (en) A kind of quality classification method of personage's photographs
Singh et al. Automated early detection of diabetic retinopathy using image analysis techniques
CN108268870A (en) Multi-scale feature fusion ultrasonoscopy semantic segmentation method based on confrontation study
CN108257135A (en) The assistant diagnosis system of medical image features is understood based on deep learning method
CN106780466A (en) A kind of cervical cell image-recognizing method based on convolutional neural networks
CN106774863B (en) Method for realizing sight tracking based on pupil characteristics
Xiao et al. Weighted res-unet for high-quality retina vessel segmentation
EP1565880B1 (en) Image processing system for automatic adaptation of a 3-d mesh model onto a 3-d surface of an object
CN106530295A (en) Fundus image classification method and device of retinopathy
Dash et al. A thresholding based technique to extract retinal blood vessels from fundus images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant