CN110120055A - Deep-learning-based method for automatic segmentation of non-perfusion areas in fundus fluorescein angiography images - Google Patents
Deep-learning-based method for automatic segmentation of non-perfusion areas in fundus fluorescein angiography images
- Publication number
- CN110120055A (application number CN201910294122.9A)
- Authority
- CN
- China
- Prior art keywords
- layer
- image
- module
- perfusion area
- output
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06N3/045 — Combinations of networks
- G06T5/70 — Denoising; Smoothing
- G06T5/90 — Dynamic range modification of images or parts thereof
- G06T7/12 — Edge-based segmentation
- G06T7/13 — Edge detection
- G06T2207/30041 — Biomedical image processing: Eye; Retina; Ophthalmic
- Y02A10/40 — Controlling or monitoring, e.g. of flood or hurricane; Forecasting, e.g. risk assessment or mapping
Abstract
The invention discloses a deep-learning-based method for automatically segmenting non-perfusion areas in fundus fluorescein angiography (FFA) images. A convolutional neural network is trained on fundus angiography images in which non-perfusion areas have been manually segmented and annotated by physicians, until the final output of the network matches the physicians' annotations; the trained network can then automatically segment and identify fundus non-perfusion areas in diabetic retinopathy. Using deep learning, the method automatically learns the required features from a library of annotated training images and performs semantic segmentation on angiography images whose non-perfusion areas have not been marked. During training, the network parameters are continually optimized and data features are extracted, so that in clinical use the method can identify the non-perfusion areas requiring treatment and accurately assist fundus laser photocoagulation of diabetic retinopathy.
Description
Technical field
The invention belongs to the field of image processing technology, and specifically relates to a deep-learning-based method
for automatic segmentation of non-perfusion areas in fundus fluorescein angiography images.
Background art
Diabetic retinopathy (DR) arises from abnormal insulin metabolism, which alters the microcirculation of ocular tissue, nerves, and blood vessels, impairing the nutrition of the eye and its visual function; it is the most common complication of diabetes. There are currently more than 60 million DR patients worldwide. The early stage of the disease shows no obvious symptoms, but it eventually leads to vision loss, and DR has become one of the four major blinding diseases. Early detection and early treatment of DR are therefore extremely important and closely related to the patient's visual prognosis.
Fundus laser photocoagulation is currently the most important treatment for DR. At present, however, laser treatment requires an experienced retinal specialist to locate lesions and set the laser energy by referring to fundus fluorescein angiography (FFA), and identifying in real time and with precision the many DR fundus lesions that require treatment is difficult; the non-perfusion areas on FFA images cannot yet be used to accurately guide fundus laser treatment. Computer-aided diagnosis (CADx) systems for DR significantly reduce physicians' image-reading workload and improve efficiency; most importantly, they can reduce the influence of external factors and individual subjectivity during diagnosis and improve the accuracy of clinical diagnosis. Current CADx systems for DR, however, cannot intelligently recognize the lesions that require laser therapy, nor are they closely integrated with diagnosis and treatment, so there remains much room for improvement. To intelligently recognize the DR lesions that require fundus laser treatment, ophthalmologists must first precisely annotate the non-perfusion areas to be treated in FFA images, and a convolutional neural network (CNN) must then be constructed and trained to learn from the annotated FFA images. Building a deep-learning-based non-perfusion-area recognition system for FFA images that accurately identifies the DR lesions requiring laser therapy is the key technology for realizing an intelligent DR fundus laser navigation system, and there is an urgent clinical need for it in DR diagnosis and treatment.
Summary of the invention
To solve the problems described in the background art, the present invention provides a deep-learning-based method for automatically segmenting non-perfusion areas in fundus fluorescein angiography images. The method makes full use of the information in FFA images to realize semantic segmentation and identification of DR non-perfusion areas.
The technical solution adopted by the invention comprises the following steps:
Step 1: collect fundus fluorescein angiography images and annotate them by segmentation, labeling the images into three classes: images containing non-perfusion areas, images without non-perfusion areas, and images containing laser spots; the images containing non-perfusion areas are placed into a data set.
Step 2: preprocess the images containing non-perfusion areas in the data set of step 1; the preprocessed images constitute the data of a training database. The preprocessing applies to each image, in sequence, image denoising and smoothing, image contrast enhancement, image down-sampling, and pixel normalization.
Step 3: augment the training database data of step 2 to obtain an augmented training data set; the augmentation methods are image flipping and random addition of Gaussian noise.
Step 4: annotate the contour lines of the non-perfusion areas in the images of the augmented training data set, and use flood filling to convert each contour-annotated image into a binary segmentation image.
Step 5: construct a convolutional neural network.
Step 6: train the network of step 5 on the training data set produced by step 4, adjusting the network parameters according to a set learning rate during training, so that after repeated training a convolutional neural network for non-perfusion-area semantic segmentation is obtained.
Step 7: feed an image to be segmented into the trained network of step 6; the output values of the last layer of the network are passed through a softmax function to compute the class probability of each pixel of the image, thereby realizing semantic segmentation of the non-perfusion areas of the image.
In step 1, the fundus fluorescein angiography images consist mainly of two classes of images, with laser spots and without laser spots; the images with laser spots are further divided into images containing non-perfusion areas and images without non-perfusion areas.
The construction of the convolutional neural network in step 5 is as follows:
The convolutional neural network consists mainly of an input layer, four up-sampling modules, four down-sampling modules, an output convolution module, and an output convolutional layer, connected sequentially in the order of data transfer: the input layer feeds the first up-sampling module; the four up-sampling modules are connected in sequence; the fourth up-sampling module feeds the first down-sampling module; the four down-sampling modules are connected in sequence; and the fourth down-sampling module is connected through the output convolution module to the output convolutional layer.
Each up-sampling module comprises two convolution modules and a max-pooling layer, the two convolution modules being connected in sequence and outputting to the max-pooling layer. Each down-sampling module comprises two convolution modules and a down-sampling layer, the two convolution modules being connected in sequence and outputting to the down-sampling layer.
An attention mechanism is fused between the second convolution module of the first up-sampling module and the second convolution module of the fourth down-sampling module; between the second convolution module of the second up-sampling module and the second convolution module of the third down-sampling module; between the second convolution module of the third up-sampling module and the second convolution module of the second down-sampling module; and between the second convolution module of the fourth up-sampling module and the second convolution module of the first down-sampling module.
Each attention mechanism consists of a sequentially connected input layer, rectified linear unit layer, 1×1 convolutional layer, Sigmoid classification layer, and resampling layer. Its input layer consists of two 1×1 convolutional layers, whose inputs are, respectively, the output of the second convolution module of the corresponding up-sampling module and the output of the second convolution module of the corresponding down-sampling module. The outputs of the two 1×1 convolutional layers are added and passed through a ReLU activation layer to a one-dimensional convolutional layer, whose output passes through a Sigmoid activation layer to the resampling layer. The output of the resampling layer is multiplied by the output of the second convolution module of the corresponding down-sampling module to form the output of the attention mechanism, and the output of each attention mechanism is concatenated with the output of the down-sampling layer of the corresponding down-sampling module to form the output of that down-sampling module.
Each convolution module, and the output convolution module, consists of a convolutional layer, a batch-normalization layer, and a rectified linear unit layer connected in sequence.
The learning rate set for the training of step 6 is 0.1; training runs for 500 epochs, and the learning rate is decayed at epoch 250 and epoch 400 with a decay factor of 0.1. The learning rate of each training pass is less than or equal to that of the previous pass.
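The schedule above (initial rate 0.1, 500 epochs, decay by a factor of 0.1 at epochs 250 and 400) can be sketched as a simple step schedule. The function name and milestone handling below are illustrative, not taken from the patent:

```python
def stepped_learning_rate(epoch, base_lr=0.1, milestones=(250, 400), gamma=0.1):
    """Return the learning rate for a given epoch under a step-decay schedule.

    The rate starts at base_lr and is multiplied by gamma at each milestone
    epoch, so it never increases from one epoch to the next, matching the
    requirement that each rate be less than or equal to the previous one.
    """
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# Rates at a few points of the 500-epoch run described above.
schedule = [stepped_learning_rate(e) for e in (0, 249, 250, 399, 400, 499)]
```

Libraries such as PyTorch provide an equivalent built-in (`MultiStepLR`); the pure-Python form is shown only to make the decay rule explicit.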
Beneficial effects of the present invention
After data augmentation, the present invention can be applied to relatively small databases. Through deep learning it automatically learns the required features from the training database and performs discriminative classification, continually correcting the data features used for judgment and adjusting the network parameters during training, thereby improving sensitivity and specificity in clinical application. As the number of training FFA images increases, the accuracy and reliability of the semantic segmentation will further improve.
Detailed description of the invention
Fig. 1 shows the fully convolutional neural network model structure of the invention.
Fig. 2 shows the attention mechanism of the invention.
Specific embodiment:
The present invention will be further described with reference to the accompanying drawings and examples.
The core of the semantic segmentation method of the invention is as follows: for FFA images containing non-perfusion areas segmented and annotated by physicians, a multilayer convolutional neural network is established and trained on the FFA images until its final semantic segmentation output matches the physicians' annotations, so that the trained network can automatically segment and identify non-perfusion areas.
The deep-learning-based intelligent DR non-perfusion-area recognition method comprises the following steps:
Step 1: acquire and segment-annotate fundus fluorescein angiography images.
The FFA images were acquired at the Eye Center of the Second Affiliated Hospital, Zhejiang University School of Medicine, over the 25 months from August 2016 to September 2018, from 74 eyes of 67 patients aged 28 to 84 years. Fundus fluorescein angiography (FFA) was performed with a Heidelberg confocal fundus angiography instrument (Heidelberg Retina Angiograph, HRA). Images were taken by two ophthalmologists, with a fundus image resolution of 768 × 768 pixels. A mydriatic and fluorescein sodium contrast agent were applied before fundus angiography, and patients with severe refractive-media problems that prevented fundus imaging were excluded from the study. Non-perfusion areas were annotated by five experienced ophthalmologists according to the diabetic retinopathy clinical guideline (Diabetic Retinopathy PPP, updated 2016). The expert annotation group was blinded: they had no access to the deep-learning predictions.
Step 2: preprocess the images containing non-perfusion areas. The images containing non-perfusion areas in the data set of step 1 are subjected, in sequence, to image denoising and smoothing, image contrast enhancement, image down-sampling, and pixel normalization; the preprocessed images constitute the data of the training database.
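The last two preprocessing operations can be sketched in pure Python on a grey-level image stored as a nested list. The denoising and contrast-enhancement steps are method-specific and omitted here, and all function names are illustrative, not from the patent:

```python
def downsample_by_2(img):
    """Halve the height and width by averaging each 2x2 block of pixels."""
    h, w = len(img), len(img[0])
    return [[(img[r][c] + img[r][c + 1] + img[r + 1][c] + img[r + 1][c + 1]) / 4.0
             for c in range(0, w, 2)]
            for r in range(0, h, 2)]

def normalize_pixels(img):
    """Linearly rescale pixel values to the range [0, 1]."""
    flat = [v for row in img for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0
    return [[(v - lo) / span for v in row] for row in img]

# Toy 4x4 image with grey levels 0..15, reduced to 2x2 and normalized.
toy = [[r * 4 + c for c in range(4)] for r in range(4)]
small = normalize_pixels(downsample_by_2(toy))
```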
Step 3: augment the training database data of step 2 by image flipping and random addition of Gaussian noise.
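The two augmentation operations named above can be sketched as follows; the noise level and function names are illustrative assumptions, since the patent does not specify them:

```python
import random

def horizontal_flip(img):
    """Mirror the image left-right."""
    return [list(reversed(row)) for row in img]

def add_gaussian_noise(img, sigma=0.05, rng=None):
    """Add zero-mean Gaussian noise with standard deviation sigma to each pixel."""
    rng = rng or random.Random()
    return [[v + rng.gauss(0.0, sigma) for v in row] for row in img]

# Each source image yields extra training samples.
img = [[0.1, 0.2], [0.3, 0.4]]
augmented = [horizontal_flip(img), add_gaussian_noise(img, rng=random.Random(0))]
```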
Step 4: annotate the contour lines of the non-perfusion areas in the images of the augmented training data set, and use flood filling to convert each contour-annotated image into a binary segmentation image.
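The flood fill of step 4 can be sketched as a breadth-first fill from a seed point inside the annotated contour, with the contour plus the filled interior marked as foreground. The seed-point convention and names are illustrative assumptions:

```python
from collections import deque

def flood_fill_mask(height, width, contour, seed):
    """Binary mask in which the contour pixels plus every pixel reachable
    from seed without crossing the contour are 1, all other pixels 0."""
    mask = [[1 if (r, c) in contour else 0 for c in range(width)]
            for r in range(height)]
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if 0 <= r < height and 0 <= c < width and mask[r][c] == 0:
            mask[r][c] = 1
            queue.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
    return mask

# Square contour on a 5x5 grid; the seed (2, 2) sits inside it.
ring = {(1, 1), (1, 2), (1, 3), (2, 1), (2, 3), (3, 1), (3, 2), (3, 3)}
binary = flood_fill_mask(5, 5, ring, (2, 2))
```

Production pipelines would typically use a library routine such as OpenCV's flood fill; the sketch only shows the conversion from contour annotation to binary segmentation image.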
Step 5: construct the convolutional neural network.
The network architecture is a fully convolutional neural network model with an attention mechanism for non-perfusion-area semantic segmentation; the model structure is shown in Fig. 1. The 512 × 512 input image passes twice through a convolution module, i.e. a 3 × 3 convolution, batch normalization (BN), and a rectified linear unit (ReLU), and is then down-sampled to half its height and width by a max-pooling layer. After four such operations the final features are obtained. The up-sampling stage follows: after each up-sampling, the feature map and the same-size feature map from the down-sampling stage are fed into the attention mechanism, which produces a weighted feature map; the two feature maps are then concatenated and fed into a convolution module. The up-sampling stage comprises four such operations. After a final convolution module restores a result of the original image size, a convolutional layer with a 1 × 1 kernel and a softmax function yield the two-class semantic segmentation result.
The attention mechanism of the invention is shown in Fig. 2. Here x denotes the feature map before down-sampling and m denotes the gating signal, the next-stage feature of x, whose size is that of x after one down-sampling. The two are added and passed through a ReLU activation function; a 1 × 1 convolution then reduces the channel dimension to 1, a Sigmoid activation function produces the feature-map weights, and the weights are resampled to the size of feature map x. The weights and x are combined through a skip connection, multiplying channel by channel to obtain the weighted feature x'. The attention mechanism emphasizes the model's recognition of key areas.
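For a single pixel position with C-channel features, the gating computation in Fig. 2 reduces to: add linear projections of x and the gating signal, apply ReLU, project to one channel, apply Sigmoid, and scale x by the resulting weight. The sketch below uses tiny hand-picked weight matrices purely for illustration; in the actual network these are learned 1 × 1 convolutions, and the weight map is additionally resampled to the size of x:

```python
import math

def matvec(w, v):
    """Multiply matrix w (a list of rows) by vector v."""
    return [sum(wi * vi for wi, vi in zip(row, v)) for row in w]

def attention_gate(x, g, w_x, w_g, psi):
    """Additive attention for one pixel's feature vector x, gated by g:
    a = sigmoid(psi . relu(Wx x + Wg g)); returns the weighted feature a * x."""
    q = [max(0.0, a + b) for a, b in zip(matvec(w_x, x), matvec(w_g, g))]
    a = 1.0 / (1.0 + math.exp(-sum(p * qi for p, qi in zip(psi, q))))
    return [a * xi for xi in x]

# Toy example: 2-channel features with identity projections.
x = [1.0, -1.0]   # skip-connection feature (from the contracting path)
g = [0.0, 2.0]    # gating signal (next-stage feature)
weighted = attention_gate(x, g, [[1, 0], [0, 1]], [[1, 0], [0, 1]], [1.0, 1.0])
```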
The convolutional neural network takes FFA angiography images, i.e. the images of the training data set, as input. The fully convolutional neural network replaces fully connected layers with up-sampling for semantic segmentation: up-sampling restores the extracted features to the output size, yielding a classification result for each pixel of the original image and hence the segmentation result. The attention mechanism is inspired by human visual attention: when observing things, humans tend to attend to local regions according to need. The attention mechanism simulates this process through weight assignment, obtaining weights from low-level (earlier forward-pass) features via an attention function. The convolutional layers extract image features and play a decisive role in the model's recognition performance. ReLU, as the activation function, passes salient features through the model while filtering out useless ones.
Step 6: train the convolutional neural network.
The network architecture is trained repeatedly on the corresponding semantically annotated training FFA images. The learning rate is set to 0.1, training runs for 500 epochs, and the learning rate is decayed at epoch 250 and epoch 400 with a decay factor of 0.1. The network configuration parameters are optimized with the SGD algorithm, yielding, after repeated training, a convolutional neural network for non-perfusion-area recognition.
Preferably, the cross-entropy function is used as the model loss function during training, and the SGD algorithm is used as the optimizer, with momentum 0.9, initial learning rate 0.1, and weight decay 0.0001. The cross-entropy cost function is non-negative and approaches 0 when the actual output approaches the desired value. Its expression is:
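The formula itself appears to have been lost in extraction. Given the variables defined just below (desired output y_i, actual output a_i, n participating neurons) and the stated properties (non-negative, approaching 0 as the output approaches the target), it is presumably the standard binary cross-entropy cost:

```latex
C = -\frac{1}{n}\sum_{i=1}^{n}\left[\, y_i \ln a_i + (1 - y_i)\ln(1 - a_i) \,\right]
```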
where y_i is the desired output of the i-th neuron, a_i is its actual output value, and n is the total number of neurons participating in the computation.
Preferably, the momentum-based stochastic gradient descent algorithm simulates the inertia of a moving object: when updating, it retains the previous update direction to a certain extent while fine-tuning the final update direction with the current gradient. This increases the stability of learning and provides some ability to escape local optima. Its expression is:
Δx_t = m · Δx_{t-1} − α · g_t
where Δx_t and Δx_{t-1} are the update displacements at times t and t−1 respectively, m is the momentum, α is the learning rate, and g_t is the gradient at time t.
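The update rule above can be sketched directly; variable names are illustrative:

```python
def momentum_step(prev_update, grad, momentum=0.9, lr=0.1):
    """One SGD-with-momentum update: dx_t = m * dx_{t-1} - lr * g_t."""
    return momentum * prev_update - lr * grad

# Two steps along a constant gradient of 1.0, starting from rest:
# the second step is larger because the first update is partly retained.
dx = 0.0
for g in (1.0, 1.0):
    dx = momentum_step(dx, g)
```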
Preferably, the learning rate is reduced as training progresses: this example uses an initial learning rate of 0.1, trains for 500 epochs, and decays the learning rate at epoch 250 and epoch 400 with a decay factor of 0.1.
Step 7: the output values of the last layer of the convolutional neural network are passed through a softmax function to compute the class probability of each pixel of the image to be segmented; taking the index of the maximum value along the channel dimension yields the non-perfusion-area semantic segmentation result.
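Per pixel, step 7 reduces to a softmax over the two channel logits followed by an argmax over channels. Function names are illustrative:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of channel logits."""
    peak = max(logits)
    exps = [math.exp(z - peak) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def segment_pixel(logits):
    """Class probabilities and argmax label for one pixel."""
    probs = softmax(logits)
    return probs, probs.index(max(probs))

# Two-class logits for one pixel: channel 1 (non-perfusion area) dominates.
probs, label = segment_pixel([0.0, 2.0])
```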
Through deep learning, this method automatically learns the required features from the training data set and performs semantic segmentation, continually optimizing the discriminative features and parameters during training. In a preliminary test, the method was trained on 332 non-perfusion-area FFA images annotated by ophthalmologists, and the test set comprised 60 fundus images containing non-perfusion areas; after training, the convolutional neural network of the invention achieved a non-perfusion-area semantic segmentation overlap of 65.24%. The above deep-learning-based automatic DR non-perfusion-area segmentation and recognition system can be applied in fields such as hospital clinical practice, telemedicine, and assisted diagnosis and treatment.
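The text does not define its 65.24% "overlap" metric; the Dice coefficient below is one common choice for segmentation overlap and is shown only as an illustrative sketch:

```python
def dice_overlap(pred, truth):
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two binary masks,
    each given as an equal-sized nested list of 0/1 values."""
    inter = sum(p & t for rp, rt in zip(pred, truth) for p, t in zip(rp, rt))
    size = sum(v for m in (pred, truth) for row in m for v in row)
    return 2.0 * inter / size if size else 1.0

# One of two predicted foreground pixels matches the ground truth.
pred = [[1, 1], [0, 0]]
truth = [[1, 0], [0, 0]]
score = dice_overlap(pred, truth)
```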
Claims (5)
1. A deep-learning-based method for automatic segmentation of non-perfusion areas in fundus fluorescein angiography images, characterized by comprising the following steps:
Step 1: collecting fundus fluorescein angiography images and annotating them by segmentation, labeling the images into three classes, namely images containing non-perfusion areas, images without non-perfusion areas, and images containing laser spots, and placing the images containing non-perfusion areas into a data set;
Step 2: preprocessing the images containing non-perfusion areas in the data set of step 1, the preprocessed images constituting the data of a training database; the preprocessing applies to each image, in sequence, image denoising and smoothing, image contrast enhancement, image down-sampling, and pixel normalization;
Step 3: augmenting the training database data of step 2 to obtain an augmented training data set, the augmentation methods being image flipping and random addition of Gaussian noise;
Step 4: annotating the contour lines of the non-perfusion areas in the images of the augmented training data set, and using flood filling to convert each contour-annotated image into a binary segmentation image;
Step 5: constructing a convolutional neural network;
Step 6: training the network of step 5 on the training data set produced by step 4, adjusting the network parameters according to a set learning rate during training, so as to obtain, after repeated training, a convolutional neural network for non-perfusion-area semantic segmentation;
Step 7: feeding an image to be segmented into the trained network of step 6, the output values of the last layer of the network being passed through a softmax function to compute the class probability of each pixel of the image to be segmented, thereby realizing semantic segmentation of the non-perfusion areas of the image.
2. The deep-learning-based method for automatic segmentation of non-perfusion areas in fundus fluorescein angiography images according to claim 1, characterized in that, in step 1, the fundus fluorescein angiography images consist mainly of two classes of images, with laser spots and without laser spots, wherein the images with laser spots are divided into images containing non-perfusion areas and images without non-perfusion areas.
3. The deep-learning-based automatic non-perfusion-area segmentation method for fundus fluorescein angiography images according to claim 1, characterized in that the convolutional neural network of step 5 is constructed as follows:
The convolutional neural network consists mainly of an input layer, four down-sampling modules, four up-sampling modules, an output convolution module and an output convolutional layer, connected sequentially in the order of data flow: the input layer feeds the first down-sampling module; the four down-sampling modules are connected in sequence; the fourth down-sampling module feeds the first up-sampling module; the four up-sampling modules are connected in sequence; and the fourth up-sampling module is connected through the output convolution module to the output convolutional layer;
Each down-sampling module comprises two convolution modules and a max-pooling layer, the two convolution modules being connected in sequence and outputting to the max-pooling layer; each up-sampling module comprises two convolution modules and an up-sampling layer, the two convolution modules being connected in sequence and outputting to the up-sampling layer;
An attention mechanism is fused between the second convolution module of the first down-sampling module and the second convolution module of the fourth up-sampling module; between the second convolution module of the second down-sampling module and the second convolution module of the third up-sampling module; between the second convolution module of the third down-sampling module and the second convolution module of the second up-sampling module; and between the second convolution module of the fourth down-sampling module and the second convolution module of the first up-sampling module;
Each attention mechanism comprises, connected in sequence, an input layer, a rectified-linear-unit layer, a 1×1 convolutional layer, a sigmoid classification layer and a resampling layer. The input layer consists of two 1×1 convolutional layers whose inputs are, respectively, the output of the second convolution module of the corresponding down-sampling module and the output of the second convolution module of the corresponding up-sampling module; the outputs of the two 1×1 convolutional layers are summed and passed through a ReLU activation layer to the 1×1 convolutional layer, which is passed through a sigmoid activation layer to the resampling layer. The output of the resampling layer is multiplied by the output of the second convolution module of the corresponding down-sampling module to give the output of the attention mechanism, and the output of each attention mechanism is concatenated with the output of the up-sampling layer of the corresponding up-sampling module to form the output of the corresponding up-sampling module.
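The attention mechanism of claim 3 can be sketched in NumPy by treating each 1×1 convolution as a per-pixel channel-mixing matrix multiply. For simplicity this sketch assumes the encoder and decoder feature maps are already at the same resolution, so the resampling layer is the identity; the weight shapes and random initialization are illustrative assumptions, not the patent's trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    # A 1x1 convolution is a per-pixel channel mix:
    # x has shape (C_in, H, W), w has shape (C_out, C_in).
    return np.einsum('oc,chw->ohw', w, x)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

c, h, w = 8, 16, 16
enc = rng.standard_normal((c, h, w))  # skip feature from a down-sampling module
dec = rng.standard_normal((c, h, w))  # gating feature from the paired up-sampling module

# Illustrative (untrained) weights.
w_enc = rng.standard_normal((c, c)) * 0.1
w_dec = rng.standard_normal((c, c)) * 0.1
psi = rng.standard_normal((1, c)) * 0.1

a = np.maximum(conv1x1(enc, w_enc) + conv1x1(dec, w_dec), 0.0)  # sum + ReLU
alpha = sigmoid(conv1x1(a, psi))      # 1x1 conv to one channel + sigmoid
gated = enc * alpha                   # attention-weighted skip feature
out = np.concatenate([gated, dec], axis=0)  # concatenated into the decoder path
```

Each attention coefficient in `alpha` lies strictly in (0, 1), so the gate suppresses or passes the skip features pixel by pixel before the concatenation.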
4. The deep-learning-based automatic non-perfusion-area segmentation method for fundus fluorescein angiography images according to claim 3, characterized in that each convolution module and the output convolution module consist of a convolutional layer, a batch normalization layer and a rectified-linear-unit layer connected in sequence.
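Claim 4's building block (convolution, then batch normalization, then ReLU) can be sketched directly in NumPy. The 3×3 "same" kernel size and the inference-style normalization without learned scale and shift are illustrative assumptions, not specified by the claim:

```python
import numpy as np

def conv_module(x, kernels, eps=1e-5):
    # Convolution -> batch normalization -> rectified linear unit,
    # in the order given by claim 4. x: (C_in, H, W); kernels:
    # (C_out, C_in, 3, 3) for a 3x3 'same' convolution.
    c_out, c_in, kh, kw = kernels.shape
    h, w = x.shape[1:]
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))  # zero-pad for 'same' output size
    y = np.zeros((c_out, h, w))
    for o in range(c_out):
        for c in range(c_in):
            for i in range(kh):
                for j in range(kw):
                    y[o] += kernels[o, c, i, j] * xp[c, i:i + h, j:j + w]
    mean = y.mean(axis=(1, 2), keepdims=True)  # per-channel statistics
    var = y.var(axis=(1, 2), keepdims=True)
    return np.maximum((y - mean) / np.sqrt(var + eps), 0.0)  # normalize + ReLU

rng = np.random.default_rng(1)
out = conv_module(rng.standard_normal((3, 8, 8)),
                  rng.standard_normal((6, 3, 3, 3)))
```

The output keeps the spatial size of the input (here 8×8) while changing the channel count, and is non-negative because of the final ReLU.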
5. The deep-learning-based automatic non-perfusion-area segmentation method for fundus fluorescein angiography images according to claim 1, characterized in that, during the training of step 5, the learning rate is set to 0.1 and the number of training epochs to 500; the learning rate is decayed at epoch 250 and at epoch 400 with a decay rate of 0.1, so that the learning rate of each training pass is less than or equal to that of the preceding one.
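The schedule of claim 5 is a standard step decay. A pure-Python sketch, with the function name and milestone tuple chosen here for illustration:

```python
def learning_rate(epoch, base_lr=0.1, milestones=(250, 400), gamma=0.1):
    # Step decay matching claim 5: start at 0.1, multiply by 0.1 at
    # epochs 250 and 400, so the rate never increases between epochs.
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

schedule = [learning_rate(e) for e in range(500)]
```

Epochs 0-249 train at 0.1, epochs 250-399 at 0.01, and epochs 400-499 at 0.001, satisfying the claim's non-increasing-rate condition.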
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910294122.9A CN110120055B (en) | 2019-04-12 | 2019-04-12 | Fundus fluorography image non-perfusion area automatic segmentation method based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110120055A true CN110120055A (en) | 2019-08-13 |
CN110120055B CN110120055B (en) | 2023-04-18 |
Family
ID=67520991
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910294122.9A Active CN110120055B (en) | 2019-04-12 | 2019-04-12 | Fundus fluorography image non-perfusion area automatic segmentation method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110120055B (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104299242A (en) * | 2014-10-31 | 2015-01-21 | 中南大学 | Fluorescence angiography fundus image extraction method based on NGC-ACM |
CN106408562A (en) * | 2016-09-22 | 2017-02-15 | 华南理工大学 | Fundus image retinal vessel segmentation method and system based on deep learning |
CN108665447A (en) * | 2018-04-20 | 2018-10-16 | 浙江大学 | A kind of glaucoma image detecting method based on eye-ground photography deep learning |
CN108921817A (en) * | 2018-05-24 | 2018-11-30 | 浙江工业大学 | A kind of data enhancement methods for skin disease image |
CN108986124A (en) * | 2018-06-20 | 2018-12-11 | 天津大学 | In conjunction with Analysis On Multi-scale Features convolutional neural networks retinal vascular images dividing method |
CN109146921A (en) * | 2018-07-02 | 2019-01-04 | 华中科技大学 | A kind of pedestrian target tracking based on deep learning |
CN109448006A (en) * | 2018-11-01 | 2019-03-08 | 江西理工大学 | A kind of U-shaped intensive connection Segmentation Method of Retinal Blood Vessels of attention mechanism |
Non-Patent Citations (1)
Title |
---|
OZAN OKTAY等: ""Attention U-Net: Learning Where to Look for the Pancreas"", 《ARXIV.ORG》 * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111353980A (en) * | 2020-02-27 | 2020-06-30 | 浙江大学 | Fundus fluorescence radiography image leakage point detection method based on deep learning |
CN111353980B (en) * | 2020-02-27 | 2022-05-17 | 浙江大学 | Fundus fluorescence radiography image leakage point detection method based on deep learning |
CN112957005A (en) * | 2021-02-01 | 2021-06-15 | 山西省眼科医院(山西省红十字防盲流动眼科医院、山西省眼科研究所) | Automatic identification and laser photocoagulation region recommendation algorithm for fundus contrast image non-perfusion region |
CN113112507A (en) * | 2021-03-30 | 2021-07-13 | 上海联影智能医疗科技有限公司 | Perfusion image analysis method, system, electronic device and storage medium |
CN113112507B (en) * | 2021-03-30 | 2023-08-22 | 上海联影智能医疗科技有限公司 | Perfusion image analysis method, system, electronic equipment and storage medium |
CN113763327A (en) * | 2021-08-10 | 2021-12-07 | 上海电力大学 | CBAM-Res _ Unet-based power plant pipeline high-pressure steam leakage detection method |
CN113763327B (en) * | 2021-08-10 | 2023-11-24 | 上海电力大学 | Power plant pipeline high-pressure steam leakage detection method based on CBAM-Res_Unet |
CN114782452A (en) * | 2022-06-23 | 2022-07-22 | 中山大学中山眼科中心 | Processing system and device of fluorescein fundus angiographic image |
CN114782452B (en) * | 2022-06-23 | 2022-11-01 | 中山大学中山眼科中心 | Processing system and device of fluorescein fundus angiographic image |
CN115762787A (en) * | 2022-11-24 | 2023-03-07 | 浙江大学 | Eyelid disease surgery curative effect evaluation method and system based on eyelid topological morphology analysis |
Also Published As
Publication number | Publication date |
---|---|
CN110120055B (en) | 2023-04-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110120055A (en) | Fundus fluorescein angiography image based on deep learning is without perfusion area automatic division method | |
CN109859172A (en) | Based on the sugared net lesion of eyeground contrastographic picture deep learning without perfusion area recognition methods | |
US20220148191A1 (en) | Image segmentation method and apparatus and storage medium | |
CN107358605B (en) | The deep neural network apparatus and system of diabetic retinopathy for identification | |
CN109509178A (en) | A kind of OCT image choroid dividing method based on improved U-net network | |
CN109345538A (en) | A kind of Segmentation Method of Retinal Blood Vessels based on convolutional neural networks | |
CN106296699A (en) | Cerebral tumor dividing method based on deep neural network and multi-modal MRI image | |
CN108198620A (en) | A kind of skin disease intelligent auxiliary diagnosis system based on deep learning | |
CN106682616A (en) | Newborn-painful-expression recognition method based on dual-channel-characteristic deep learning | |
CN109035255A (en) | A kind of sandwich aorta segmentation method in the CT image based on convolutional neural networks | |
Kamble et al. | Applications of artificial intelligence in human life | |
WO2019024380A1 (en) | Intelligent traditional chinese medicine diagnosis method, system and traditional chinese medicine system | |
CN108665447A (en) | A kind of glaucoma image detecting method based on eye-ground photography deep learning | |
CN108717869A (en) | Diabetic retinopathy diagnosis aid system based on convolutional neural networks | |
CN107016681A (en) | Brain MRI lesion segmentation approach based on full convolutional network | |
Sertkaya et al. | Diagnosis of eye retinal diseases based on convolutional neural networks using optical coherence images | |
CN109671094A (en) | A kind of eye fundus image blood vessel segmentation method based on frequency domain classification | |
CN108765422A (en) | A kind of retinal images blood vessel automatic division method | |
CN116563707B (en) | Lycium chinense insect pest identification method based on image-text multi-mode feature fusion | |
CN109859139A (en) | The blood vessel Enhancement Method of colored eye fundus image | |
CN108877923A (en) | A method of the tongue fur based on deep learning generates prescriptions of traditional Chinese medicine | |
Zou et al. | Artificial neural network to assist psychiatric diagnosis | |
CN107242876A (en) | A kind of computer vision methods for state of mind auxiliary diagnosis | |
Firke et al. | Convolutional neural network for diabetic retinopathy detection | |
CN115908241A (en) | Retinal vessel segmentation method based on fusion of UNet and Transformer |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||