CN110516685A - Method for detecting the degree of lens opacity based on a convolutional neural network - Google Patents
Method for detecting the degree of lens opacity based on a convolutional neural network
- Publication number
- CN110516685A (application CN201910468518.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- training
- model
- lens opacity
- data set
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/60—Rotation of whole images or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30041—Eye; Retina; Ophthalmic
Abstract
A method for detecting the degree of lens opacity based on a convolutional neural network, the steps of which are as follows: (1) preprocess the image under test with an illumination enhancement method to form preprocessed image data; (2) feed the preprocessed image data from step (1) into a lens opacity degree detection learning model to perform detection. The present invention uses the Inception-V3 model and parameters pre-trained on ImageNet and trains a classification model following the idea of transfer learning; once the system is complete, real-time lens opacity screening and comparison can be carried out through a mobile phone app.
Description
Technical field
The present invention provides a method for detecting the degree of lens opacity based on a convolutional neural network, and belongs to the field of image processing.
Background art
The degree of lens opacity serves as an intermediate reference quantity in many fields, for example in the analysis of cataract through such data. Automatic classification of the degree of lens opacity has made great progress, but some problems remain. First, existing methods take the classification problem as the research focus, and for feature extraction they still rely mostly on manually designed features, which involves considerable subjectivity. Moreover, the data sets that are currently public and annotated cover only part of the range of lens opacity, and the amount of data is small, so features cannot be extracted automatically with deep learning, and the trained models cannot reach the performance of general studies.
Summary of the invention
Object of the invention:
The present invention provides a method for detecting the degree of lens opacity based on a convolutional neural network, with the aim of solving the problems of the prior art.
Technical solution
A method for detecting the degree of lens opacity based on a convolutional neural network, characterized in that the method comprises the following steps:
(1) preprocessing the image under test with an illumination enhancement method to form preprocessed image data;
(2) feeding the preprocessed image data from step (1) into a lens opacity degree detection learning model to perform detection.
The lens opacity degree detection model of step (2) is constructed as follows:
(2.1) construct the MSLPP data set; then preprocess the images in the data set; and divide the preprocessed MSLPP data set into a training set, a validation set and a test set;
(2.2) perform model training on the data set formed from the preprocessed images of step (2.1) to obtain the lens opacity degree detection learning model.
The MSLPP data set of step (2.1) is built as follows: eye lens images acquired clinically with a slit lamp are collected and classified into three classes: normal, early lens opacity and lens opacity.
The MSLPP data set is collected in complex and varied environments and contains a wide variety of sample types.
The varied environments include bright environments, dark environments and reflective conditions; the varied sample types cover nuclear lens opacity, cortical lens opacity and posterior capsular lens opacity.
The preprocessing of step (2.1) is as follows: the data set is first processed with the illumination enhancement method, and the data set processed by the illumination enhancement method is then processed with data amplification.
The illumination enhancement method is as follows:
the input image is compressed to 299 × 299 pixels. Let the three-channel pixel at any point A(x, y) of the image be $[R(x,y), G(x,y), B(x,y)]^{T}$, where R(x, y), G(x, y) and B(x, y) respectively represent the brightness values of the red, green and blue channels at point A(x, y), each channel ranging over 0-255. The average pixel value $\bar{P}$ of the image is then expressed as

$$\bar{P} = \frac{1}{3 \times 299 \times 299} \sum_{x=1}^{299} \sum_{y=1}^{299} \big( R(x,y) + G(x,y) + B(x,y) \big)$$

If $\bar{P}$ is below a lower brightness threshold, every pixel value of the image is increased accordingly; if $\bar{P}$ exceeds an upper threshold, every pixel value is decreased; if $\bar{P}$ lies between the two thresholds, the pixel values remain unchanged.
The data amplification in the preprocessing of step (2.1) takes the following three forms, of which one, two or all three may be selected:
(1) Translation: the illumination-enhanced image is translated up, down, left and right by 5-20 pixels each (any value in the 5-20 pixel range may be used; the value used in the present invention is 12, corresponding to the images generated in Fig. 4), so that translation expands the image set by a multiple equal to the number of translations;
(2) Rotation: the illumination-enhanced image is rotated by 5°-20° (optionally only clockwise, only counterclockwise, or once in each direction; the value used in the present invention is 15°, corresponding to the images generated in Fig. 4), so that rotation expands the image set by a multiple equal to the number of rotations;
(3) Mirroring: the illumination-enhanced image is mirrored once vertically and once horizontally, i.e. flipped upside down once and flipped left-right once, so that mirroring expands the image set by a multiple equal to the number of mirror operations.
If two or all three of these methods are selected, the amplification is always applied to the same illumination-enhanced image, and the amplified images are then used together.
In the model training of step (2.2), transfer learning is first performed with a convolutional neural network, and the preprocessed MSLPP data set is then used to continue training the model after transfer learning.
A convolutional neural network consists mainly of three parts: convolutional layers, pooling layers and fully connected layers; the network structure finally chosen is the Inception-V3 model proposed by Google on the basis of GoogLeNet.
The steps of transfer learning and training are as follows. First, pre-training is carried out on the Inception-V3 model based on images annotated for ImageNet (ImageNet is a computer vision recognition project and currently the largest image recognition database in the world; it is open source and can be used directly), and a 2048-dimensional feature vector is extracted. This stage exploits knowledge transfer: features are extracted with the pre-trained weights, and the weight parameters of Inception-V3 are not trained. Then, the feature vector is fed into a single-layer fully connected neural network: a single-layer fully connected network containing a Softmax classifier is used, and the final classification result is obtained after training on the preprocessed MSLPP data set (that is, the images classified and annotated by ophthalmologists).
The training process is as follows ("training" here refers to all training steps after preprocessing, while the "training" in "the final classification result is obtained after training on the classified lens opacity images" refers to the fine-tuning stage after transfer learning; steps (1) and (2) correspond to performing transfer learning with the convolutional neural network, and steps (3) and (4) to continuing to train the transferred model with the preprocessed training set):
(1) Load the Inception-V3 model with the fully connected layer removed, together with the weight parameters obtained by pre-training on the ImageNet data set;
(2) Add a fully connected layer structure on top of the initialized Inception-V3 model obtained in (1), and apply the Dropout strategy (a method of preventing model over-fitting) in the fully connected layer, with the ratio set to 0.75; extract a 2048-dimensional feature vector;
(3) Freeze all feature-extraction layers other than the fully connected layer, set the learning rate to 0.001, and train 1 epoch (550 iterations) on the preprocessed MSLPP training set;
(4) Unfreeze all layers and continue training on the MSLPP training set by fine-tuning, using stochastic gradient descent with an initial learning rate of 0.01; train 100 epochs of 550 iterations each. After every epoch, test the model accuracy on the validation set: if the accuracy has improved over the previous epoch, save the training parameters; if the accuracy has dropped, continue training from the previously saved parameters. The batch size batch_size is set to 32 and the momentum momentum to 0.9.
(The batch size batch_size is the number of samples contained in one batch and is a parameter that must be set during training.)
(The values mentioned in (2), (3) and (4) are the specific parameters set when training the model. Although they can be adjusted within a certain range, tuning them requires some skill (they are adjusted according to the recall and loss values on the training and validation sets), and arbitrary adjustment does not guarantee the training result.)
Freezing the feature-extraction layers in step (3) so that only the fully connected layer is trained updates the weights within a small range, so as not to destroy the well pre-trained features.
The "small range" above means that, with the other layers frozen and only the fully connected layer changing, the weight coefficients inside the model vary little during training. This is an explanation of the procedure in step (3), not a parameter to be adjusted manually.
A system for measuring the degree of lens opacity based on a convolutional neural network, characterized in that the system comprises an image preprocessing module and a detection module;
the image preprocessing module preprocesses the image under test to form preprocessed image data; the image data from the image preprocessing module is fed into the lens opacity degree detection learning model in the detection module to perform detection;
the detection module comprises an MSLPP data set construction module, an image preprocessing module and a model training module;
the MSLPP data set construction module builds the MSLPP data set, whose images are then preprocessed by the image preprocessing module;
the model training module performs model training on the data set formed from the images preprocessed by the image preprocessing module, to obtain the learning model.
Advantageous effect:
The present invention provides a method for detecting the degree of lens opacity based on a convolutional neural network, and designs an automatic learning model of lens opacity features based on a convolutional neural network; the detailed process is shown in Fig. 1. First, to address the current lack of lens opacity data sets, the invention collects eye lens images acquired clinically with a slit lamp, which are classified by eye-screening technicians into three classes (normal, early lens opacity and lens opacity) to construct the MSLPP data set. The images are then preprocessed, the main operations being illumination enhancement and data amplification. Finally, to solve the problem of automatically extracting deep features, the invention uses the Inception-V3 model and parameters pre-trained on ImageNet and trains a classification model following the idea of transfer learning. Once the system is complete, real-time lens opacity screening and comparison can be carried out through a mobile phone app.
Detailed description of the invention
Fig. 1 is the flow chart of the automatic learning model of lens opacity features based on a convolutional neural network;
Fig. 2 shows some sample images from the MSLPP data set;
Fig. 3 is a comparison before and after illumination enhancement;
Fig. 4 is a comparison before and after data amplification;
Fig. 5 is a schematic diagram of the transfer-learning training strategy;
Fig. 6 shows the visualized feature maps of the model;
Fig. 7 is a schematic example of translation: the left image is the original, the right image is the picture translated x pixels to the right, and below it is the image after a further translation of y pixels downward.
Specific embodiment
A method for detecting the degree of lens opacity based on a convolutional neural network comprises the following steps:
(1) preprocessing the image under test with an illumination enhancement method to form preprocessed image data;
(2) feeding the preprocessed image data from step (1) into a lens opacity degree detection learning model to perform detection.
The lens opacity degree detection model of step (2) is constructed as follows:
(2.1) construct the MSLPP data set; then preprocess the images in the data set; and divide the preprocessed MSLPP data set into a training set, a validation set and a test set;
(2.2) perform model training on the data set formed from the preprocessed images of step (2.1) to obtain the lens opacity degree detection learning model.
The MSLPP data set of step (2.1) is built as follows: eye lens images acquired clinically with a slit lamp are collected and classified into three classes: normal, early lens opacity and lens opacity.
The preprocessing of step (2.1) is as follows: the data set is first processed with the illumination enhancement method, and the data set processed by the illumination enhancement method is then processed with data amplification.
The illumination enhancement method is as follows:
the input image is compressed to 299 × 299 pixels. Let the three-channel pixel at any point A(x, y) of the image be $[R(x,y), G(x,y), B(x,y)]^{T}$, where R(x, y), G(x, y) and B(x, y) respectively represent the brightness values of the red, green and blue channels at point A(x, y), each channel ranging over 0-255. The average pixel value $\bar{P}$ of the image is then expressed as

$$\bar{P} = \frac{1}{3 \times 299 \times 299} \sum_{x=1}^{299} \sum_{y=1}^{299} \big( R(x,y) + G(x,y) + B(x,y) \big)$$

If $\bar{P}$ is below a lower brightness threshold, every pixel value of the image is increased accordingly; if $\bar{P}$ exceeds an upper threshold, every pixel value is decreased; if $\bar{P}$ lies between the two thresholds, the pixel values remain unchanged.
The data amplification in the preprocessing of step (2.1) takes the following three forms, of which one, two or all three may be selected:
(1) Translation: the illumination-enhanced image is translated up, down, left and right by 5-20 pixels each, so that translation expands the image set by a multiple equal to the number of translations;
(2) Rotation: the illumination-enhanced image is rotated by 5°-20°, so that rotation expands the image set by a multiple equal to the number of rotations;
(3) Mirroring: the illumination-enhanced image is mirrored once vertically and once horizontally, i.e. flipped upside down once and flipped left-right once, so that mirroring expands the image set by a multiple equal to the number of mirror operations.
If two or all three of these methods are selected, the amplification is always applied to the same illumination-enhanced image, and the amplified images are then used together.
In the model training of step (2.2), transfer learning is first performed with a convolutional neural network, and the preprocessed MSLPP data set is then used to continue training the model after transfer learning.
The steps of transfer learning and training are as follows:
In the first step, pre-training is carried out on the Inception-V3 model based on the data set annotated for ImageNet (the ImageNet data set, which contains natural images), and a 2048-dimensional feature vector is extracted;
In the second step, the feature vector is fed into a single-layer fully connected neural network: a single-layer fully connected network containing a Softmax classifier is used, and the final classification result is obtained after training on the preprocessed MSLPP data set.
The pre-training process on the Inception-V3 model in the first step is as follows:
(1) Load the Inception-V3 model with the fully connected layer removed, together with the weight parameters obtained by pre-training on the ImageNet data set;
(2) Add a fully connected layer structure on top of the initialized Inception-V3 model obtained in (1), apply the Dropout strategy in the fully connected layer with the ratio set to 0.75, and extract a 2048-dimensional feature vector. The ImageNet data set is a public annotated data set and is considered known.
The steps by which the single-layer fully connected neural network containing the Softmax classifier in the second step obtains the final classification result after training on the preprocessed MSLPP data set are as follows:
(1) Freeze all feature-extraction layers other than the fully connected layer (only the fully connected layer, i.e. the single layer mentioned above, remains trainable), set the learning rate to 0.001, and train 1 epoch (550 iterations) on the preprocessed MSLPP training set;
(2) Unfreeze all layers and continue training on the MSLPP training set by fine-tuning, using stochastic gradient descent with an initial learning rate of 0.01; train 100 epochs of 550 iterations each. After every epoch, test the model accuracy on the validation set: if the accuracy has improved over the previous epoch, save the training parameters; if the accuracy has dropped, continue training from the previously saved parameters. The batch size batch_size is set to 32 and the momentum momentum to 0.9.
Freezing the feature-extraction layers so that only the fully connected layer is trained updates the weights within a small range, so as not to destroy the well pre-trained features.
The system comprises an image preprocessing module and a detection module;
the image preprocessing module preprocesses the image under test to form preprocessed image data; the image data from the image preprocessing module is fed into the lens opacity degree detection learning model in the detection module to perform detection;
the detection module comprises an MSLPP data set construction module, an image preprocessing module and a model training module;
the MSLPP data set construction module builds the MSLPP data set, whose images are then preprocessed by the image preprocessing module;
the model training module performs model training on the data set formed from the images preprocessed by the image preprocessing module, to obtain the learning model.
The present invention is described in further detail below.
The present invention designs a method for detecting the degree of lens opacity based on a convolutional neural network; the detailed process is shown in Fig. 1. First, to address the lack of lens opacity data sets, the invention collects eye lens images acquired clinically with a slit lamp, which are classified by ophthalmologists into three classes (normal, early lens opacity and lens opacity) to construct the MSLPP data set. The images are then preprocessed, the main operations being illumination enhancement and data amplification. Finally, to solve the problem of automatically extracting deep features, the invention uses the Inception-V3 model and parameters pre-trained on ImageNet and trains a classification model following the idea of transfer learning. Once the system is complete, real-time lens opacity screening can be carried out through a mobile phone app, and the data can serve as an intermediate reference generally applicable to cortical, nuclear and posterior capsular lens opacity.
Data set and image preprocessing
MSLPP data set
A database is an essential component of a deep learning system, and a high-quality database improves the accuracy of system screening. However, since no large public annotated data set of slit-lamp eye lens images currently exists, a data set for lens opacity classification has to be constructed. The data set used in the present invention was developed jointly by Shenyang Ai Luobo Intelligent Technology Co., Ltd. and the Shenyang He Shi ophthalmology group, and is named the MSLPP (Marked Slit Lamp Picture Project) data set.
The MSLPP data set contains 16239 pictures in total: 5302 eye sample images with lens opacity, 5400 eye sample images with early lens opacity, and 5537 eye sample images of normal subjects. The images were acquired from 2015 to 2018 from 2864 healthy people and 5532 people with lens opacity.
The images in the data set are eye sample images taken with slit lamps, mainly desktop slit lamps and mobile-phone slit lamps; some sample images from the database are shown in Fig. 2. As seen in the figure, the slit of light is focused on the pupil area: a transparent or pale-yellow interior indicates a normal lens, as in Fig. 2(a); a transparent interior with a yellowish base and slightly darkened spots indicates early lens opacity, as in Fig. 2(b); obvious internal turbidity with a visible lesion site indicates lens opacity, as in Fig. 2(c).
According to the position of the opacity, lens opacity is further divided into cortical, nuclear and posterior capsular lens opacity. In nuclear lens opacity the nucleus is initially yellow and, as the disease progresses, its color gradually darkens to yellow-brown, brown, brown-black or even black, as in Fig. 2(c)-1. In cortical lens opacity, vacuoles and water clefts first appear in the lens cortex; the water clefts form spoke-like opacities that enlarge from the periphery toward the center, wedge-shaped, feather-like opacities appear in the anterior and posterior peripheral cortex, and as the opacity deepens the lens eventually becomes completely milky white, as in Fig. 2(c)-2. In posterior capsular lens opacity, slit-lamp microscopy reveals disc-shaped opacities under the posterior capsule composed of many yellow dots, small vacuoles and crystalline particles, as in Fig. 2(c)-3. In practice, nuclear and cortical lens opacities are relatively common, while posterior capsular lens opacity is less so.
The MSLPP data set has the following features:
(1) All samples come from the He Shi eye hospital; the eye lens images of the screened subjects were taken with slit-lamp instruments by ophthalmologists and classified for screening by three attending physicians of He Shi ophthalmology;
(2) The collection environments are complex and varied, involving bright environments, dark environments and some reflective conditions;
(3) The samples are of many types, covering nuclear lens opacity, cortical lens opacity and posterior capsular lens opacity.
Illumination enhancement
Brightness is a key concern in image processing. Because the practical screening environments are complex and varied, the brightness of the captured sample images differs greatly, which affects deep learning accuracy. The sample images therefore need illumination adjustment to highlight the sample features and reduce the influence of brightness differences. The specific procedure is as follows:
the input image is compressed to 299 × 299 pixels. Let the three-channel pixel at any point A(x, y) be $[R(x,y), G(x,y), B(x,y)]^{T}$. The average pixel value $\bar{P}$ of the image may then be expressed as

$$\bar{P} = \frac{1}{3 \times 299 \times 299} \sum_{x=1}^{299} \sum_{y=1}^{299} \big( R(x,y) + G(x,y) + B(x,y) \big)$$

If $\bar{P}$ is below a lower brightness threshold, every pixel value of the image is increased accordingly; if $\bar{P}$ exceeds an upper threshold, every pixel value is decreased; if $\bar{P}$ lies between the two thresholds, the pixel values remain unchanged. Fig. 3 shows the comparison before and after illumination enhancement.
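For illustration, a minimal Python sketch of this illumination step is given below; the brightness window LOW/HIGH and the linear rescaling toward it are assumptions of this sketch, since the exact thresholds and scaling expressions of the original formulas are not reproduced here.

```python
import numpy as np
from PIL import Image

# Assumed brightness window; the patent's exact thresholds are not preserved.
LOW, HIGH = 85.0, 170.0

def enhance_illumination(path):
    # Compress the input image to 299 x 299 pixels, as the method specifies.
    img = Image.open(path).convert("RGB").resize((299, 299))
    arr = np.asarray(img, dtype=np.float32)
    mean = arr.mean()              # average over all pixels of all three channels
    if mean < LOW:                 # image too dark: raise every pixel value
        arr = arr * (LOW / mean)
    elif mean > HIGH:              # image too bright: lower every pixel value
        arr = arr * (HIGH / mean)
    return np.clip(arr, 0, 255).astype(np.uint8)
```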
Data amplification
To avoid over-fitting during model training, the sample quantity is amplified during image preprocessing, which helps improve the performance of the model and the image classification accuracy. The present invention uses the following three amplification modes:
(1) Translation: the image is translated up, down, left and right by 12 pixels each;
(2) Rotation: the image is rotated by 15° in each direction;
(3) Mirroring: the image is mirrored once vertically and once horizontally.
Since the light of a desktop slit lamp and that of a portable slit lamp enter at different angles (the light of a desktop slit lamp enters tilted 30° from the right, so the lens section appears on the left of the slit light, while the light of a handheld slit lamp enters at 30° from the left, so the section appears on the right of the light), the pictures are mirrored during preprocessing to eliminate the influence of this feature on the system. The comparison before and after amplification is shown in Fig. 4; from top to bottom, the three rows compare the image quantities before and after enhancement for lens opacity, early lens opacity and normal pictures respectively.
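As an illustration, this amplification can be sketched in Python with Pillow as follows; the exact ninefold expansion (the original plus four translations, two rotations and two mirror images) is an assumption inferred from the quantities reported in Table 2.

```python
from PIL import Image

def amplify(img: Image.Image, shift: int = 12, angle: float = 15.0):
    """Return the original plus 8 amplified copies (translations, rotations, mirrors)."""
    w, h = img.size
    out = [img]
    # (1) Translation: 12 pixels up, down, left and right.
    for dx, dy in [(0, -shift), (0, shift), (-shift, 0), (shift, 0)]:
        out.append(img.transform((w, h), Image.AFFINE, (1, 0, dx, 0, 1, dy)))
    # (2) Rotation: 15 degrees counterclockwise and clockwise.
    out.append(img.rotate(angle))
    out.append(img.rotate(-angle))
    # (3) Mirroring: flip top-to-bottom once and left-to-right once.
    out.append(img.transpose(Image.FLIP_TOP_BOTTOM))
    out.append(img.transpose(Image.FLIP_LEFT_RIGHT))
    return out
```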
Model and method
Convolutional neural networks
The method used in the present invention is a convolutional neural network, which consists mainly of three parts: convolutional layers, pooling layers and fully connected layers. The network structure finally chosen is the Inception-V3 model proposed by Google on the basis of GoogLeNet (the champion model of ILSVRC2014, the ImageNet Large Scale Visual Recognition Challenge of 2014).
Across the whole network, the convolutional and pooling layers extract the effective features of the image; non-linear activation functions introduced into the network reduce the dimensionality occupied by these features, and the output represents the high-level features of the input image. Finally, the fully connected layer uses these features to classify the input image to be screened. In addition to the basic network structure mentioned above, the present invention also adds the Dropout strategy to the last fully connected layer, which effectively avoids over-fitting, improves the generalization ability of the network, and accelerates training.
Transfer learning
In the field of medical imaging, the lack of large annotated public data sets is one of the difficulties in applying deep learning to medical image processing. With insufficient samples, model training easily fails to converge or produces models with poor generalization ability. The present invention therefore uses transfer learning to solve these problems.
The flow chart of the lens opacity classification method based on transfer learning of a convolutional neural network model is shown in Fig. 5. First, pre-training is carried out on the Inception-V3 model based on the data set annotated for ImageNet, and a 2048-dimensional feature vector is extracted. This stage exploits knowledge transfer: features are extracted with the pre-trained weights, the weight parameters of Inception-V3 are not trained, and feature extraction is more efficient than with traditional methods. The feature vector is then fed into a single-layer fully connected neural network; because the trained Inception-V3 model abstracts the original image into a feature vector that is easier to classify, a single-layer fully connected network containing a Softmax classifier is used, and the final classification result is obtained after training on the classified lens opacity images. This stage mainly performs the task of training the classifier on the input feature vectors, so that the classifier can complete the classification well on the basis of the extracted features.
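For illustration, a minimal sketch of this two-stage construction, assuming the Keras API bundled with TensorFlow (not necessarily the inventors' exact code), might read:

```python
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.models import Model

# Stage one: Inception-V3 with ImageNet weights and its fully connected top
# removed; pooling="avg" yields the 2048-dimensional feature vector.
base = InceptionV3(weights="imagenet", include_top=False, pooling="avg",
                   input_shape=(299, 299, 3))

# Stage two: a single-layer fully connected network with Dropout (ratio 0.75)
# and a Softmax classifier over the three classes
# (normal, early lens opacity, lens opacity).
x = Dropout(0.75)(base.output)
outputs = Dense(3, activation="softmax")(x)
model = Model(base.input, outputs)
```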
Experiment
Experimental data
The MSLPP data set contains 5302 lens opacity images, 5400 early lens opacity images and 5537 normal lens images in total. The present invention divides the data set into a training set, a validation set and a test set: 500 images are drawn at random from each of the three classes as the test set, and the remaining samples are randomly divided into a training set and a validation set at a ratio of 6:1. The training set thus contains 12630 images (4083 lens opacity, 4241 early lens opacity, 4306 normal), and the validation set contains 2109 images (719 lens opacity, 659 early lens opacity, 731 normal). The picture counts per class are shown in Table 1.

Table 1. Picture counts per class

Class | Training set | Validation set | Test set | Total
---|---|---|---|---
Lens opacity | 4083 | 719 | 500 | 5302
Early lens opacity | 4241 | 659 | 500 | 5400
Normal | 4306 | 731 | 500 | 5537
Total | 12630 | 2109 | 1500 | 16239
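This split can be illustrated with a short Python sketch; the function name and its arguments are illustrative, not part of the patent:

```python
import random

def split_class(images, test_n=500, train_parts=6, val_parts=1, seed=0):
    """Per class: draw a fixed random test set, then split the rest 6:1."""
    rng = random.Random(seed)
    shuffled = images[:]
    rng.shuffle(shuffled)
    test, rest = shuffled[:test_n], shuffled[test_n:]
    n_train = round(len(rest) * train_parts / (train_parts + val_parts))
    return rest[:n_train], rest[n_train:], test

# Example: the lens opacity class has 5302 images; 500 go to the test set and
# the remaining 4802 are split roughly 6:1 (the patent reports 4083 / 719).
train, val, test = split_class([f"img_{i}.jpg" for i in range(5302)])
```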
The training and validation sets are amplified, while the test set remains unchanged. After amplification, the combined size of the training and validation sets increases from the original 14739 to 132651: the lens-opacity slit-lamp images become 43218, the early lens opacity slit-lamp images become 44100, and the slit-lamp images of normal subjects become 45333. The per-class quantities are shown in Table 2.

Table 2. Quantities before and after amplification

Class | Before amplification | After amplification
---|---|---
Lens opacity | 4802 | 43218
Early lens opacity | 4900 | 44100
Normal | 5037 | 45333
Total | 14739 | 132651
Training process
All code of the invention is written with Keras (a high-level neural network API written in pure Python) as the front end and TensorFlow (Google's second-generation artificial intelligence learning system) as the back end, on a system running Ubuntu 16.04 (64-bit) with CUDA 9.1 and cuDNN 9.0. The programming language used is Python.
Training process is as follows:
(1) Load the Inception-V3 model with the fully connected layer removed, together with the weight parameters obtained by pre-training on the ImageNet data set;
(2) Add a fully connected layer structure on top of the initialized Inception-V3 network, and apply the Dropout strategy in the fully connected layer with the ratio set to 0.75;
(3) Freeze all feature-extraction layers other than the fully connected layer, set the learning rate to 0.001, and train 1 epoch (550 iterations) on the preprocessed training set. (Freezing the feature-extraction layers so that only the fully connected layer is trained prevents over-fitting on the one hand; on the other hand, since training continues on an already trained model, the weights should be updated only within a small range, so as not to destroy the well pre-trained features.)
(4) Unfreeze all layers and continue training on the MSLPP data set by fine-tuning (fine-tune), using stochastic gradient descent with an initial learning rate of 0.01; train 100 epochs of 550 iterations each. After every epoch, test the model accuracy on the validation set: if the accuracy has improved over the previous epoch, save the training parameters; if the accuracy has dropped, continue training from the previously saved parameters. The batch size batch_size is set to 32 and the momentum momentum to 0.9.
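Continuing the Keras sketch above, the two training stages could be written roughly as follows; `train_gen` and `val_gen` stand for data generators over the amplified MSLPP training and validation sets (batch size 32) and are assumptions of this illustration, and the rule of reloading previously saved parameters when accuracy drops is simplified here to keeping only the best checkpoint:

```python
from tensorflow.keras.optimizers import SGD
from tensorflow.keras.callbacks import ModelCheckpoint

# Stage one: freeze every feature-extraction layer so that only the newly
# added fully connected top is trained (learning rate 0.001, 1 epoch, 550 steps).
for layer in base.layers:
    layer.trainable = False
model.compile(optimizer=SGD(learning_rate=0.001),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train_gen, epochs=1, steps_per_epoch=550)

# Stage two: unfreeze all layers and fine-tune with SGD (initial learning rate
# 0.01, momentum 0.9); save weights only when validation accuracy improves.
for layer in base.layers:
    layer.trainable = True
model.compile(optimizer=SGD(learning_rate=0.01, momentum=0.9),
              loss="categorical_crossentropy", metrics=["accuracy"])
checkpoint = ModelCheckpoint("best_weights.h5", monitor="val_accuracy",
                             save_best_only=True, save_weights_only=True)
model.fit(train_gen, validation_data=val_gen, epochs=100,
          steps_per_epoch=550, callbacks=[checkpoint])
```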
The trained model is visualized with quiver (a visualization tool for the Keras platform); the resulting feature maps are shown in Fig. 6.
System evaluation
After model training is complete, the model is verified on the validation set: the recall for lens opacity is 88.24%, the recall for early lens opacity is 86.63%, and the recall for normal is 97.51%. The model is then tested on a test set never touched during training, comprising 1500 images in total, of which the operators judged 500 to be lens opacity, 500 early lens opacity and 500 normal; the results after system classification are shown in Table 3 below.
The present invention uses four common indices to assess the performance of the system: accuracy (Accuracy), recall (Recall), precision (Precision) and the F1 measure (F1_measure). Accuracy is the overall measure of classification performance, the ratio of correctly classified samples to the total number of samples; recall is the proportion of all positive samples that are classified correctly; precision is the proportion of the samples classified as positive that are actually positive; the F1 measure is the harmonic mean of precision and recall. They are calculated as:

$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad \mathrm{Recall} = \frac{TP}{TP + FN},$$
$$\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad F1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$$
Here TP, TN, FP and FN are the numbers of true positives, true negatives, false positives and false negatives respectively. Taking the lens opacity samples as an example, "true positive" means that a lens opacity sample is correctly classified as lens opacity; if a lens opacity sample is mistakenly classified into another class, we call it a "false negative". "True negative" and "false positive" have analogous meanings: "true negative" means that a sample of another class is not mistakenly classified as lens opacity, and "false positive" means that a sample of another class is mistakenly classified as lens opacity.
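For illustration, these per-class quantities and indices can be computed in a few lines of Python (a sketch; the label encoding and example values are assumed):

```python
def per_class_metrics(y_true, y_pred, cls):
    """One-vs-rest accuracy, recall, precision and F1 for a single class."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == cls and p == cls for t, p in pairs)   # true positives
    tn = sum(t != cls and p != cls for t, p in pairs)   # true negatives
    fp = sum(t != cls and p == cls for t, p in pairs)   # false positives
    fn = sum(t == cls and p != cls for t, p in pairs)   # false negatives
    accuracy = (tp + tn) / len(pairs)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, recall, precision, f1

# Example: metrics for the "opacity" class on hypothetical predictions.
print(per_class_metrics(["opacity", "early", "normal"],
                        ["opacity", "normal", "normal"], "opacity"))
```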
According to the above indices, the performance of the system is shown in Table 4 below.
Table 4. Reliability assessment of the adjusted model
A system for measuring the degree of lens opacity based on a convolutional neural network comprises an image preprocessing module and a detection module;
the image preprocessing module preprocesses the image under test to form preprocessed image data; the image data from the image preprocessing module is fed into the lens opacity degree detection learning model in the detection module to perform detection;
the detection module comprises an MSLPP data set construction module, an image preprocessing module and a model training module;
the MSLPP data set construction module builds the MSLPP data set, whose images are then preprocessed by the image preprocessing module;
the model training module performs model training on the data set formed from the images preprocessed by the image preprocessing module, to obtain the learning model.
Claims (10)
1. A method for detecting the degree of lens opacity based on a convolutional neural network, characterized in that the method comprises the following steps:
(1) preprocessing the image under test with an illumination enhancement method to form preprocessed image data;
(2) feeding the preprocessed image data from step (1) into a lens opacity degree detection learning model to perform detection.
2. The method for detecting the degree of lens opacity based on a convolutional neural network according to claim 1, characterized in that: the lens opacity degree detection model of step (2) is constructed as follows:
(2.1) construct the MSLPP data set; then preprocess the images in the data set; and divide the preprocessed MSLPP data set into a training set, a validation set and a test set;
(2.2) perform model training on the data set formed from the preprocessed images of step (2.1) to obtain the lens opacity degree detection learning model.
3. The method for detecting the degree of lens opacity based on a convolutional neural network according to claim 2, characterized in that: the MSLPP data set of step (2.1) is built as follows: eye lens images acquired clinically with a slit lamp are collected and classified into three classes: normal, early lens opacity and lens opacity.
4. The method for detecting the degree of lens opacity based on a convolutional neural network according to claim 2, characterized in that: the preprocessing of step (2.1) is as follows: the data set is first processed with the illumination enhancement method, and the data set processed by the illumination enhancement method is then processed with data amplification.
5. The method for detecting the degree of lens opacity based on a convolutional neural network according to claim 1 or 4, characterized in that:
the illumination enhancement method is as follows:
the input image is compressed to 299 × 299 pixels; let the three-channel pixel at any point A(x, y) of the image be $[R(x,y), G(x,y), B(x,y)]^{T}$, where R(x, y), G(x, y) and B(x, y) respectively represent the brightness values of the red, green and blue channels at point A(x, y), each channel ranging over 0-255; the average pixel value $\bar{P}$ of the image is then expressed as

$$\bar{P} = \frac{1}{3 \times 299 \times 299} \sum_{x=1}^{299} \sum_{y=1}^{299} \big( R(x,y) + G(x,y) + B(x,y) \big)$$

if $\bar{P}$ is below a lower brightness threshold, every pixel value of the image is increased accordingly; if $\bar{P}$ exceeds an upper threshold, every pixel value is decreased; if $\bar{P}$ lies between the two thresholds, the pixel values remain unchanged.
6. The method for detecting the degree of lens opacity based on a convolutional neural network according to claim 5, characterized in that: the data amplification in the preprocessing of step (2.1) takes the following three forms, of which one, two or all three may be selected:
(1) translation: the illumination-enhanced image is translated up, down, left and right by 5-20 pixels each, so that translation expands the image set by a multiple equal to the number of translations;
(2) rotation: the illumination-enhanced image is rotated by 5°-20°, so that rotation expands the image set by a multiple equal to the number of rotations;
(3) mirroring: the illumination-enhanced image is mirrored once vertically and once horizontally, i.e. flipped upside down once and flipped left-right once, so that mirroring expands the image set by a multiple equal to the number of mirror operations;
if two or all three of these methods are selected, the amplification is always applied to the same illumination-enhanced image, and the amplified images are then used together.
7. The method for detecting the degree of lens opacity based on a convolutional neural network according to claim 2, characterized in that: in the model training of step (2.2), transfer learning is first performed with a convolutional neural network, and the preprocessed MSLPP data set is then used to continue training the model after transfer learning;
the steps of transfer learning and training are as follows:
in the first step, pre-training is carried out on the Inception-V3 model based on the data set annotated for ImageNet, and a 2048-dimensional feature vector is extracted;
in the second step, the feature vector is fed into a single-layer fully connected neural network: a single-layer fully connected network containing a Softmax classifier is used, and the final classification result is obtained after training on the preprocessed MSLPP data set.
8. The method for detecting the degree of lens opacity based on a convolutional neural network according to claim 7, characterized in that: the pre-training process on the Inception-V3 model in the first step is as follows:
(1) load the Inception-V3 model with the fully connected layer removed, together with the weight parameters obtained by pre-training on the ImageNet data set;
(2) add a fully connected layer structure on top of the initialized Inception-V3 network, apply the Dropout strategy in the fully connected layer with the ratio set to 0.75, and extract a 2048-dimensional feature vector;
and the steps by which the single-layer fully connected neural network containing the Softmax classifier in the second step obtains the final classification result after training on the preprocessed MSLPP data set are as follows:
(1) freeze all feature-extraction layers other than the fully connected layer, set the learning rate to 0.001, and train 1 epoch (550 iterations) on the preprocessed MSLPP training set;
(2) unfreeze all layers and continue training on the MSLPP training set by fine-tuning, using stochastic gradient descent with an initial learning rate of 0.01; train 100 epochs of 550 iterations each; after every epoch, test the model accuracy on the validation set; if the accuracy has improved over the previous epoch, save the training parameters; if the accuracy has dropped, continue training from the previously saved parameters; the batch size batch_size is set to 32 and the momentum momentum to 0.9.
9. The method for detecting the degree of lens opacity based on a convolutional neural network according to claim 8, characterized in that: freezing the feature-extraction layers so that only the fully connected layer is trained updates the weights within a small range, so as not to destroy the well pre-trained features.
10. A system for measuring the degree of lens opacity based on a convolutional neural network according to claim 1, characterized in that: the system comprises an image preprocessing module and a detection module;
the image preprocessing module preprocesses the image under test to form preprocessed image data; the detection module feeds the image data from the image preprocessing module into the lens opacity degree detection learning model to perform detection;
the detection module comprises an MSLPP data set construction module, an image preprocessing module and a model training module;
the MSLPP data set construction module builds the MSLPP data set, whose images are then preprocessed by the image preprocessing module;
the model training module performs model training on the data set formed from the images preprocessed by the image preprocessing module, to obtain the learning model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910468518.0A CN110516685A (en) | 2019-05-31 | 2019-05-31 | Method for detecting the degree of lens opacity based on a convolutional neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110516685A true CN110516685A (en) | 2019-11-29 |
Family
ID=68622812
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910468518.0A Pending CN110516685A (en) | 2019-05-31 | 2019-05-31 | Lenticular opacities degree detecting method based on convolutional neural networks |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110516685A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101833754A (en) * | 2010-04-15 | 2010-09-15 | 青岛海信网络科技股份有限公司 | Image enhancement method and image enhancement system |
CN106780465A (en) * | 2016-08-15 | 2017-05-31 | 哈尔滨工业大学 | Retinal images aneurysms automatic detection and recognition methods based on gradient vector analysis |
CN106446872A (en) * | 2016-11-07 | 2017-02-22 | 湖南源信光电科技有限公司 | Detection and recognition method of human face in video under low-light conditions |
US20180336672A1 (en) * | 2017-05-22 | 2018-11-22 | L-3 Security & Detection Systems, Inc. | Systems and methods for image processing |
Non-Patent Citations (4)
Title |
---|
LIU Qing: "Research on low-illumination image enhancement algorithms", China Master's Theses Full-text Database, Information Science and Technology, 15 August 2016, pages 1-2 * |
AN Yingying: "Research on slit-image diagnosis of pediatric cataract based on deep learning and prediction of treatment outcome", China Master's Theses Full-text Database, Medicine and Health Sciences, 15 April 2018, pages 3-23 * |
LI Guandong et al.: "Scene classification learning of high-resolution images with convolutional neural network transfer", Science of Surveying and Mapping, 30 April 2019, pages 116-123 * |
YANG Kai: "Research on cell nucleus image segmentation based on deep learning", China Master's Theses Full-text Database, Information Science and Technology, 15 April 2019, pages 12-13 * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111369506A (en) * | 2020-02-26 | 2020-07-03 | 四川大学 | Lens turbidity grading method based on eye B-ultrasonic image |
CN111369506B (en) * | 2020-02-26 | 2022-08-02 | 四川大学 | Lens turbidity grading method based on eye B-ultrasonic image |
CN113536847A (en) * | 2020-04-17 | 2021-10-22 | 天津职业技术师范大学(中国职业培训指导教师进修中心) | Industrial scene video analysis system and method based on deep learning |
CN111658308A (en) * | 2020-05-26 | 2020-09-15 | 首都医科大学附属北京同仁医院 | In-vitro focusing ultrasonic cataract treatment operation system |
CN111658308B (en) * | 2020-05-26 | 2022-06-17 | 首都医科大学附属北京同仁医院 | In-vitro focusing ultrasonic cataract treatment operation system |
CN112000809A (en) * | 2020-09-29 | 2020-11-27 | 迪爱斯信息技术股份有限公司 | Incremental learning method and device for text categories and readable storage medium |
CN112000809B (en) * | 2020-09-29 | 2024-05-17 | 迪爱斯信息技术股份有限公司 | Incremental learning method and device for text category and readable storage medium |
CN112348792A (en) * | 2020-11-04 | 2021-02-09 | 广东工业大学 | X-ray chest radiography image classification method based on small sample learning and self-supervision learning |
CN112767378A (en) * | 2021-01-28 | 2021-05-07 | 佛山科学技术学院 | Dense-Unet-based vitreous opacity degree rating method |
CN113378414A (en) * | 2021-08-12 | 2021-09-10 | 爱尔眼科医院集团股份有限公司 | Cornea shaping lens fitting method, device, equipment and readable storage medium |
CN114298286A (en) * | 2022-01-10 | 2022-04-08 | 江苏稻源科技集团有限公司 | Method for training lightweight convolutional neural network to obtain pre-training model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20191129 |