CN116246331A - Automatic keratoconus grading method, device and storage medium

- Publication number: CN116246331A (application CN202211552123.7A)
- Authority: CN (China)
- Prior art keywords: features, keratoconus, feature extraction module, grading
- Legal status: Granted
Classifications

- G06V40/197 — Eye characteristics: matching; classification
- G06N3/08 — Neural networks: learning methods
- G06V10/32 — Image preprocessing: normalisation of the pattern dimensions
- G06V10/806 — Fusion of extracted features at the feature extraction or classification level
- G06V10/82 — Image or video recognition or understanding using neural networks
- G06V40/193 — Eye characteristics: preprocessing; feature extraction
- Y02P90/30 — Computing systems specially adapted for manufacturing
Abstract
The invention relates to the technical field of deep learning, and in particular to an automatic keratoconus grading method and device, a computer storage medium, and an ophthalmic imaging device. In the automatic keratoconus grading method, five corneal topographic maps are taken together as a single input, so that the relations among the multiple maps are fully considered. The self-attention-based feature extraction module guides the network to extract important feature information by computing the average and maximum values of the corresponding dimensions of each channel of the input features and generating attention weights from this rich information of different dimensions. The feature fusion module adaptively recalibrates the channel-dimension features of two adjacent stages based on an attention mechanism and fuses them according to a set weighting, yielding higher-level features that are more useful for classification. The network can thus focus on the key features in the corneal topography and achieves excellent performance on the keratoconus severity grading task.
Description
Technical Field
The invention relates to the technical field of deep learning, and in particular to an automatic keratoconus grading method and device, a computer storage medium, and an ophthalmic imaging device.
Background
Keratoconus (KC) is an ophthalmic disease characterized by central thinning of the cornea, which protrudes forward in a cone shape that is generally asymmetric. It often develops in adolescence and typically leads to highly irregular myopic astigmatism, with acute corneal edema, scarring, and significant loss of vision in the late stages. Studies have shown that about one in every 2,000 people in the general population has keratoconus. Although the etiology of keratoconus remains unclear, researchers have found that its incidence varies with racial, environmental, and genetic factors.
Ophthalmologists typically diagnose keratoconus from corneal topographic maps, because these provide detailed data on the cornea and its morphological features. Early imaging devices could only acquire topographic data of the anterior surface; with the continued development of science and technology, ophthalmic imaging techniques have advanced rapidly. Current corneal imaging modalities provide far more information than earlier Placido-based corneal analysis: additional parameter indices for diagnosing keratoconus can be obtained, as well as data on the posterior corneal surface, enabling keratoconus to be detected and classified. However, the parameter indices obtained by different imaging devices often differ, so grading standards formulated from these indices have poor repeatability and cannot reflect the characteristic differences among categories, whereas corneal topographic maps provide rich feature information that facilitates grading of keratoconus severity.
Treatment of keratoconus depends on the progression and severity of the disease. In general, eyeglasses can provide acceptable vision for mild and some moderate patients. As the disease progresses and irregular astigmatism develops, contact lenses can correct the irregular astigmatism and provide better vision for moderate patients. In addition, with corneal topography and optical coherence tomography, suitable lenses can be selected according to the type, position, and size of the cone, which not only improves wearer comfort but also helps prevent progression to severe keratoconus and thus avoids corneal transplantation. Patients with severe keratoconus may undergo keratoplasty and related surgical treatment. An effective keratoconus grading algorithm would therefore provide valuable guidance and assistance to ophthalmologists in formulating an appropriate treatment regimen for each patient.
In the past, many studies on keratoconus focused mainly on screening and detection, most of which used conventional machine learning algorithms such as decision trees, support vector machines, and artificial neural networks to diagnose the disease from corneal indices produced by the imaging apparatus. In recent years, with the development of artificial intelligence, the ability to analyze and process complex data has grown continuously; in deep learning in particular, convolutional neural networks (Convolutional Neural Network, CNN) such as AlexNet, VGGNet, GoogLeNet, ResNet, and DenseNet have been used for image classification in many scenarios. In ophthalmology, deep learning has been widely applied to the diagnosis and screening of diseases such as diabetic retinopathy (DR), age-related macular degeneration (AMD), and retinopathy of prematurity (ROP). With the advent of advanced imaging devices, researchers have also begun to study end-to-end automatic keratoconus detection algorithms based on corneal topography using deep learning. However, studies on keratoconus severity grading with deep learning remain rare; the few existing methods mostly adopt traditional convolutional neural networks such as VGGNet and ResNet to extract features directly from a single corneal topographic map and then produce a grading result.
Currently, these deep-learning-based keratoconus diagnostic algorithms still have limitations. Most studies feed a single corneal topographic map into the network to obtain an individual grading result, without considering the relations among multiple maps. In addition, most researchers still process corneal topography with traditional convolutional neural networks, with little innovation in feature extraction and fusion, so performance on the grading task leaves room for improvement.
Disclosure of Invention
The technical problem to be solved by the invention is therefore to overcome the shortcomings of the prior art, namely that the relations among multiple corneal topographic maps are not considered and that the feature extraction and feature fusion capabilities of traditional convolutional neural networks are insufficient, resulting in poor performance on the grading task.
To solve this technical problem, the invention provides an automatic keratoconus grading method comprising the following steps: obtaining multiple corneal topographic maps of a target eye and normalizing them to obtain target input features, and inputting the target input features into a pre-trained automatic keratoconus grading model;
inputting the target input features into n cascaded feature extraction modules for further feature extraction, wherein the processing in each feature extraction module is as follows:
after the output features of the previous-stage feature extraction module are processed by a batch normalization layer and a ReLU activation function, a first feature F_q, a second feature F_k, and a third feature F_v are obtained through three separate 3×3 convolutions; the average value and maximum value of the corresponding dimension are computed for F_q and for F_k; the product of the two average values and the product of the two maximum values are added to obtain an attention weight matrix; the third feature is multiplied by the attention weight matrix to obtain the attention-weighted features; and the attention-weighted features are added to a 1×1 convolution of the output features of the previous-stage module to obtain the output features of the current-stage feature extraction module;
the method comprises the steps that output features of an nth-level feature extraction module and output features of an nth-2-level feature extraction module with dimensions and sizes consistent with those of the nth level after 1X 1 convolution adjustment are input into a feature fusion module together, and the processing procedure in the feature fusion module is as follows:
the method comprises the steps of adjusting importance degrees of two input features through a channel attention mechanism respectively, fusing the two calibrated input features according to a preset proportion respectively, and finally obtaining target output features through 1X 1 convolution;
and processing the target output features through a fully connected layer and a Softmax function to obtain the grading result.
Preferably, the 3×3 convolutions are 3×3 depthwise separable convolutions.
Preferably, adjusting the importance through the channel attention mechanism comprises:
passing the input features through a max pooling layer and an average pooling layer, respectively, to obtain max-pooled features and average-pooled features;
passing the max-pooled features and the average-pooled features through two weight-shared 1×1 convolutions and a ReLU activation function, adding the results, and generating channel attention weights in the range 0–1 through a sigmoid activation function;
and multiplying the channel attention weights by the input features to adjust their importance.
Preferably, the sigmoid function is a Hard-sigmoid function.
Preferably, acquiring the multiple corneal topographic maps of the target eye and performing normalization comprises:
acquiring an examination report of the target eye in PDF format;
extracting multiple corneal topographic maps from the examination report by PDF file parsing, the maps comprising a corneal thickness map, an anterior surface tangential map, an anterior surface elevation map, a posterior surface tangential map, and a posterior surface elevation map;
and downsampling the five corneal topographic maps to the same size using bilinear interpolation and normalizing them.
Preferably, the training method of the automatic keratoconus grading model comprises the following steps:
acquiring corneal topographic map data of a plurality of eyes and labeling them as normal, mild, moderate, or severe to obtain a training set;
and training the keratoconus automatic grading model by using a training set until the loss function converges.
Preferably, the loss function is:

L_m = -(1/N) Σ_{i=1}^{N} Σ_{k=1}^{K} y_{i,k} · log(p_{i,k})

where L_m denotes the cross-entropy loss of the multi-class task, y_{i,k} indicates whether the true label of the i-th sample is the k-th label value, there are K label values and N samples in total, and p_{i,k} denotes the probability that the i-th sample is predicted as the k-th label value.
The invention also provides an automatic keratoconus grading device, which comprises:
the input module, configured to acquire multiple corneal topographic maps of the target eye, normalize them to obtain target input features, and input the target input features into a pre-trained automatic keratoconus grading model;
the multi-stage feature extraction module, configured to input the target input features into n cascaded feature extraction modules for further feature extraction, wherein the processing in each feature extraction module is as follows:
after the output features of the previous-stage feature extraction module are processed by a batch normalization layer and a ReLU activation function, a first feature F_q, a second feature F_k, and a third feature F_v are obtained through three separate 3×3 convolutions; the average value and maximum value of the corresponding dimension are computed for F_q and for F_k; the product of the two average values and the product of the two maximum values are added to obtain an attention weight matrix; the third feature is multiplied by the attention weight matrix to obtain the attention-weighted features; and the attention-weighted features are added to a 1×1 convolution of the output features of the previous-stage module to obtain the output features of the current-stage feature extraction module;
the feature fusion module, configured to adjust the importance of the output features of the n-th-stage feature extraction module and of the output features of the (n−2)-th-stage feature extraction module whose dimension and size have been adjusted by a 1×1 convolution to match the n-th stage, fuse the two recalibrated features according to a preset ratio, and finally obtain target output features through a 1×1 convolution;
and the automatic grading module, configured to process the target output features through a fully connected layer and a Softmax function to obtain the grading result.
The invention also provides an ophthalmic imaging device comprising the above automatic keratoconus grading device.
The invention also provides a computer-readable storage medium having a computer program stored thereon, the computer program, when executed by a processor, implementing the steps of the above automatic keratoconus grading method.
Compared with the prior art, the technical scheme of the invention has the following advantages:
according to the automatic grading method for keratoconus, five topographic maps are taken as a whole to be input, the relation among multiple topographic maps is fully considered, and the self-attention-based feature extraction module can guide a network to extract important feature information by acquiring the average value and the maximum value of corresponding dimensions of each channel of input features and generating attention weights based on rich information of different dimensions; the feature fusion module can adaptively calibrate the features of the dimension of the upper and lower channels based on the attention mechanism, and fuse the features of the upper and lower channels according to a certain weight, so that the higher features which are more beneficial to classification are obtained, the network can be more focused on the key features in the corneal topography, and excellent performance is obtained on the keratoconus severity classification task.
Drawings
In order that the invention may be more readily understood, a more particular description follows with reference to specific embodiments illustrated in the appended drawings, in which:
FIG. 1 is a flow chart of an implementation of an automatic keratoconus grading method of the present invention;
FIG. 2 is a general block diagram of the keratoconus auto-rating method according to the present invention;
FIG. 3 is a network diagram of the self-attention-based feature extraction block SaFEB of the present invention;
FIG. 4 is a network diagram of the multi-level feature fusion module MlFFM designed in the present invention;
FIG. 5 is a block diagram of an automatic keratoconus grading device according to an embodiment of the present invention.
Detailed Description
The core of the invention is to provide an automatic keratoconus grading method and device, a computer storage medium, and an ophthalmic imaging device, so as to improve performance on the keratoconus severity grading task.
In order to better understand the aspects of the present invention, the present invention will be described in further detail with reference to the accompanying drawings and detailed description. It will be apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to FIG. 1 and FIG. 2: FIG. 1 is a flowchart of an implementation of the automatic keratoconus grading method of the present invention, and FIG. 2 is a general block diagram of the method. The specific steps are as follows:
s101, acquiring a plurality of cornea topographic maps of a target eye, carrying out normalization treatment to obtain target input features, and inputting the target input features into a pre-trained keratoconus automatic grading model;
the training method of the keratoconus automatic grading model comprises the following steps:
acquiring corneal topographic map data of a plurality of eyes and labeling them as normal, mild, moderate, or severe to obtain a training set;
and training the keratoconus automatic grading model by using a training set until the loss function converges.
The network proposed by the invention addresses the severity grading of keratoconus, which is a multi-class classification task. In supervised classification with deep learning, the cross-entropy (CE) loss function is commonly used: it applies to binary as well as multi-class tasks and effectively measures the difference between the distribution learned by the model and the true distribution. The loss function is:
L_m = -(1/N) Σ_{i=1}^{N} Σ_{k=1}^{K} y_{i,k} · log(p_{i,k})

where L_m denotes the cross-entropy loss of the multi-class task, y_{i,k} indicates whether the true label of the i-th sample is the k-th label value, there are K label values and N samples in total, and p_{i,k} denotes the probability that the i-th sample is predicted as the k-th label value.
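For reference, the following is a minimal PyTorch sketch of this multi-class cross-entropy loss; the batch size of 8 and the four severity classes follow the training configuration described elsewhere in this document, while the logits themselves are random stand-ins:

```python
import torch
import torch.nn.functional as F

# Hypothetical model outputs (logits) for a batch of 8 eyes and 4 severity classes.
logits = torch.randn(8, 4)
labels = torch.randint(0, 4, (8,))  # true grades: 0=normal, 1=mild, 2=moderate, 3=severe

# L_m = -(1/N) * sum_i sum_k y_{i,k} * log(p_{i,k}), with p = softmax(logits)
loss = F.cross_entropy(logits, labels)

# Equivalent explicit form using one-hot labels y_{i,k}:
log_p = torch.log_softmax(logits, dim=1)
y = F.one_hot(labels, num_classes=4).float()
loss_explicit = -(y * log_p).sum(dim=1).mean()
assert torch.allclose(loss, loss_explicit)
```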
S102: input the target input features into n cascaded feature extraction modules for further feature extraction; the processing in each feature extraction module is as follows:
As shown in FIG. 3, a novel feature extraction block is designed that uses feature information of different dimensions to generate its attention weight matrix, thereby obtaining richer and more effective features. In addition, to reduce model complexity, the invention uses depthwise separable convolution (Depth-wise Separable Convolution, DSC) instead of conventional convolution. A depthwise separable convolution consists of a depthwise (Dw) convolution and a 1×1 convolution; in the depthwise convolution each kernel is single-channel, and each channel is convolved by exactly one kernel. This significantly reduces the number of parameters while maintaining model performance, which is why it is widely used in lightweight networks; a parameter-count comparison is sketched below.
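For illustration, this minimal PyTorch comparison contrasts a standard 3×3 convolution with a depthwise separable replacement; the channel count of 64 is an arbitrary example, not a value taken from the patent:

```python
import torch.nn as nn

channels = 64  # arbitrary example width
standard = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
depthwise_separable = nn.Sequential(
    # Depthwise: groups=channels, so each channel is convolved by exactly one 3x3 kernel.
    nn.Conv2d(channels, channels, kernel_size=3, padding=1, groups=channels),
    # Pointwise 1x1 convolution mixes information across channels.
    nn.Conv2d(channels, channels, kernel_size=1),
)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(standard), count(depthwise_separable))  # 36928 vs. 4800: roughly 7.7x fewer
```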
The output feature F_i ∈ R^{C_i×H_i×W_i} of the previous-stage feature extraction module is first processed by a batch normalization layer and a ReLU6 activation function (which strengthens regularization of the model and makes it easier to optimize). Three parallel 3×3 depthwise separable convolutions then give F_q = DSC3(BR(F_i)), F_k = DSC3(BR(F_i)), and F_v = DSC3(BR(F_i)), namely the first feature F_q, the second feature F_k, and the third feature F_v. The average value and maximum value of the corresponding dimension of F_q are computed as W_qA and W_qM, and those of F_k as W_kA and W_kM. The product of the two average values and the product of the two maximum values are added and passed through a Hard-sigmoid function (implemented on the basis of ReLU6; it closely approximates the sigmoid function but is simpler to compute and differentiate), giving the attention weight matrix W_v = HS(EM(W_qA, W_kA) + EM(W_kM, W_qM)). The third feature is multiplied by the attention weight matrix to obtain the attention-weighted features, which are added to a 1×1 convolution of the previous-stage output to give the output features of the current-stage feature extraction module: F_o = EM(F_v, W_v) + C1(F_i). Here F_i denotes the input feature map, where C_i is the number of channels, H_i the height, and W_i the width of the feature map; DSC3 denotes a depthwise separable convolution with kernel size 3×3, BR denotes the batch normalization layer followed by the ReLU6 activation, EM denotes element-level multiplication, HS denotes the Hard-sigmoid activation function, and C1 denotes a 1×1 convolution.
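A minimal PyTorch sketch of such a block follows. The formulas mirror the description above; the choice to take the mean and maximum over the spatial dimensions of each channel, and the reading of EM as broadcast element-wise multiplication, are assumptions made for illustration rather than details confirmed by the patent:

```python
import torch
import torch.nn as nn

def dsc3(channels):
    """3x3 depthwise separable convolution: depthwise 3x3 followed by pointwise 1x1."""
    return nn.Sequential(
        nn.Conv2d(channels, channels, 3, padding=1, groups=channels, bias=False),
        nn.Conv2d(channels, channels, 1, bias=False),
    )

class SaFEB(nn.Module):
    """Self-attention-based feature extraction block (sketch of the description above)."""
    def __init__(self, channels):
        super().__init__()
        self.br = nn.Sequential(nn.BatchNorm2d(channels), nn.ReLU6(inplace=True))  # BR
        self.conv_q, self.conv_k, self.conv_v = dsc3(channels), dsc3(channels), dsc3(channels)
        self.hs = nn.Hardsigmoid()                              # HS
        self.c1 = nn.Conv2d(channels, channels, 1, bias=False)  # C1

    def forward(self, f_i):
        y = self.br(f_i)
        f_q, f_k, f_v = self.conv_q(y), self.conv_k(y), self.conv_v(y)
        # Per-channel mean and maximum over the spatial dimensions (assumption).
        w_qa, w_qm = f_q.mean(dim=(2, 3), keepdim=True), f_q.amax(dim=(2, 3), keepdim=True)
        w_ka, w_km = f_k.mean(dim=(2, 3), keepdim=True), f_k.amax(dim=(2, 3), keepdim=True)
        w_v = self.hs(w_qa * w_ka + w_km * w_qm)  # W_v = HS(EM(W_qA,W_kA) + EM(W_kM,W_qM))
        return f_v * w_v + self.c1(f_i)           # F_o = EM(F_v, W_v) + C1(F_i)

x = torch.randn(1, 64, 32, 32)
print(SaFEB(64)(x).shape)  # torch.Size([1, 64, 32, 32]): output matches the input shape
```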
S103: input the output features of the n-th-stage feature extraction module, together with the output features of the (n−2)-th-stage feature extraction module whose dimension and size have been adjusted by a 1×1 convolution to match the n-th stage, into the feature fusion module; the processing in the feature fusion module is as follows:
As shown in FIG. 4, a multi-level feature fusion module is designed. Based on a channel attention mechanism, it adaptively recalibrates the channel-dimension features of the two stages and fuses the two recalibrated levels of features according to a set weighting to strengthen the features:
the importance of each of the two input features is adjusted through the channel attention mechanism, the two recalibrated input features are fused according to a preset ratio, and a final 1×1 convolution eliminates information aliasing to yield the target output features:
F_{n+1} = C1(CA(C1(F_{n−2})) × w + CA(F_n) × (1 − w))
where CA denotes the channel attention mechanism, C1 denotes a 1×1 convolution, and w ∈ [0, 1] denotes the fusion ratio of the (n−2)-th stage.
Adjusting the importance through the channel attention mechanism comprises the following steps (a sketch combining this mechanism with the fusion formula is given after this list):
passing the input features through a max pooling layer and an average pooling layer, respectively, to obtain max-pooled features and average-pooled features;
passing the max-pooled features and the average-pooled features through two weight-shared 1×1 convolutions and a ReLU activation function, adding the results, and generating channel attention weights in the range 0–1 through a Hard-sigmoid activation function;
and multiplying the channel attention weights by the input features to adjust their importance.
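The following PyTorch sketch combines the channel attention just described with the fusion formula F_{n+1} = C1(CA(C1(F_{n−2})) × w + CA(F_n) × (1 − w)). The bottleneck reduction ratio, the stride of the 1×1 adjustment convolution, and the default fusion ratio w = 0.5 are illustrative assumptions not specified in the text:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention: weight-shared 1x1 convs over max- and average-pooled features."""
    def __init__(self, channels, reduction=4):  # reduction ratio is an assumption
        super().__init__()
        # The same two 1x1 convolutions (shared weights) process both pooled descriptors.
        self.shared = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        self.hs = nn.Hardsigmoid()

    def forward(self, x):
        mx = torch.amax(x, dim=(2, 3), keepdim=True)    # max pooling over spatial dims
        av = torch.mean(x, dim=(2, 3), keepdim=True)    # average pooling over spatial dims
        w = self.hs(self.shared(mx) + self.shared(av))  # channel weights in [0, 1]
        return x * w                                    # recalibrate channel importance

class MlFFM(nn.Module):
    """Multi-level fusion: F_{n+1} = C1(CA(C1(F_{n-2})) * w + CA(F_n) * (1 - w))."""
    def __init__(self, low_channels, high_channels, w=0.5, stride=4):
        super().__init__()
        # 1x1 conv adjusts the (n-2)-stage features to the n-th-stage dimension and size.
        self.adjust = nn.Conv2d(low_channels, high_channels, 1, stride=stride, bias=False)
        self.ca_low = ChannelAttention(high_channels)
        self.ca_high = ChannelAttention(high_channels)
        self.fuse = nn.Conv2d(high_channels, high_channels, 1, bias=False)
        self.w = w

    def forward(self, f_low, f_high):
        mixed = self.ca_low(self.adjust(f_low)) * self.w + self.ca_high(f_high) * (1 - self.w)
        return self.fuse(mixed)  # final 1x1 conv removes information aliasing
```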
S104: process the target output features through the fully connected layer and the Softmax function to obtain the grading result.
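A minimal sketch of this final stage follows; the 512-dimensional feature width matches a ResNet18-style backbone, but the global pooling step and spatial size are assumptions made for illustration:

```python
import torch
import torch.nn as nn

num_features, num_classes = 512, 4  # 4 grades: normal, mild, moderate, severe

head = nn.Sequential(
    nn.AdaptiveAvgPool2d(1),               # collapse spatial dims (assumed)
    nn.Flatten(),
    nn.Linear(num_features, num_classes),  # fully connected layer
)

target_output = torch.randn(8, num_features, 8, 8)  # hypothetical fused features
probs = torch.softmax(head(target_output), dim=1)   # Softmax -> class probabilities
grades = probs.argmax(dim=1)                        # predicted severity per eye
```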
Building on the above embodiments, this embodiment further describes step S101:
An examination report of the target eye in PDF format is obtained, and five color-coded corneal topographic maps of size 640×480 are extracted from it by PDF file parsing. In the subsequent image processing stage, the five images are downsampled to 256×256 by bilinear interpolation and then combined into a 256×256×15 input.
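A sketch of this preprocessing step is shown below. The PDF parsing itself is omitted (the five extracted 640×480 images are taken as given), and stacking the five RGB maps channel-wise to reach 15 channels is an assumption consistent with the stated 256×256×15 input format:

```python
import numpy as np
import torch
import torch.nn.functional as F

def build_input(report_maps):
    """report_maps: five 480 x 640 x 3 uint8 images extracted from the PDF report
    (corneal thickness, anterior/posterior tangential, anterior/posterior elevation)."""
    channels = []
    for img in report_maps:
        t = torch.from_numpy(img).float().permute(2, 0, 1).unsqueeze(0)  # 1 x 3 x 480 x 640
        t = F.interpolate(t, size=(256, 256), mode="bilinear", align_corners=False)
        channels.append(t.squeeze(0) / 255.0)  # normalize pixel values to [0, 1]
    return torch.cat(channels, dim=0)  # 15 x 256 x 256: five RGB maps stacked channel-wise

# Example with random stand-ins for the extracted report images:
maps = [np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8) for _ in range(5)]
x = build_input(maps).unsqueeze(0)  # shape: 1 x 15 x 256 x 256 network input
```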
In the data processing part, the invention designs an automatic processing program that extracts the data from the input report without manual intervention. In the network part, a self-attention-based feature extraction block (Self-attention based Feature Extraction Block, SaFEB) and a multi-level feature fusion module (Multi-level Feature Fusion Module, MlFFM) are designed and applied to a ResNet18 network structure to realize a lightweight keratoconus grading network (Lightweight Keratoconus Grading Network, LKG-Net), which divides the input data into four classes: normal, mild, moderate, and severe.
The proposed self-attention-based feature extraction block SaFEB generates its attention weight matrix from feature information of different dimensions, thereby obtaining richer and more effective features, while its depthwise separable convolutions reduce the number of parameters and the amount of computation, so rich features are obtained with a markedly smaller model. The multi-level feature fusion module MlFFM adaptively recalibrates the channel-dimension features of two stages based on an attention mechanism and fuses the two levels of features according to a set weighting, yielding higher-level features that are more useful for classification. Excellent results were obtained in the keratoconus grading experiments, demonstrating the effectiveness of the proposed method.
The invention thus provides a lightweight automatic keratoconus grading method based on corneal topography. The self-attention-based feature extraction block SaFEB and the multi-level feature fusion module MlFFM enable the network to focus on the key features in the corneal topographic maps, so the method achieves excellent performance on the keratoconus severity grading task with a small number of parameters and strong practicality, providing powerful assistance to ophthalmologists in the diagnosis and treatment of keratoconus.
This embodiment is based on a PyTorch environment and uses one NVIDIA Tesla K40 GPU with 12 GB of memory for model training and testing. The model was trained with the back-propagation algorithm to minimize the cross-entropy loss, using the Adam optimizer, with both the base learning rate and the weight decay set to 0.0001. The batch size was set to 8 and the number of iterations to 200.
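The training configuration above might be expressed as follows; LKGNet and train_loader are hypothetical placeholders for the proposed network and the corneal topography data loader, not names from the patent:

```python
import torch
import torch.nn as nn

model = LKGNet(in_channels=15, num_classes=4).cuda()  # hypothetical network class
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-4)

for epoch in range(200):                 # 200 training iterations
    for inputs, labels in train_loader:  # batches of 8 stacked topography inputs
        optimizer.zero_grad()
        loss = criterion(model(inputs.cuda()), labels.cuda())
        loss.backward()                  # back-propagation of the CE loss
        optimizer.step()
```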
The experimental data used in this embodiment were corneal topography data from 488 eyes of 281 subjects, comprising 236 normal eyes and 252 keratoconus eyes. The data for each eye contained five color-coded topographic maps: a corneal thickness map, an anterior surface tangential map, an anterior surface elevation map, a posterior surface tangential map, and a posterior surface elevation map, with an original resolution of 640×480. Professional ophthalmologists divided the 252 KC eyes into 137 mild, 64 moderate, and 51 severe eyes according to the Amsler-Krumeich classification standard combined with the indices in the examination reports. Given the amount of data, a 4-fold cross-validation strategy was adopted to evaluate the effectiveness of the proposed method. To reduce computational cost, all corneal topographic maps were downsampled to 256×256 using bilinear interpolation and normalized to [0, 1].
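A sketch of the 4-fold split follows; stratifying by severity grade and splitting per eye are assumptions, since the text only states that 4-fold cross-validation was used. The class counts follow the dataset description above:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# One severity grade per eye (0=normal, 1=mild, 2=moderate, 3=severe); 488 eyes in total.
labels = np.array([0] * 236 + [1] * 137 + [2] * 64 + [3] * 51)

skf = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(skf.split(np.zeros((len(labels), 1)), labels)):
    print(f"fold {fold}: {len(train_idx)} train eyes, {len(test_idx)} test eyes")
    # train on train_idx, then evaluate W_R / W_P / W_F1 / Kappa on test_idx
```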
Using weighted recall (W_R), weighted precision (W_P), weighted F1 score (W_F1), and the Kappa consistency coefficient, the proposed method was compared with other strong CNN-based classification networks, including VGG16, InceptionV2, ResNet34, ResNet101, ResNeXt50, SE_ResNet50, DenseNet121, EfficientNet_B0, EfficientNet_B2, MobileNet_V3_Small, MobileNet_V3_Large, and Multi-ResNet18. For convenience, the backbone network ResNet18 is referred to as the baseline network. For all methods, the five corneal topographic maps were sent to the network as a single input for training and testing, with the experimental results shown in Table 1:
Table 1. Keratoconus grading results of the different methods
As can be seen from Table 1, the proposed LKG-Net outperforms all of the CNN-based methods above. First, compared with the baseline network, the performance of the proposed method is significantly improved, with gains of 2.46%, 2.15%, 3.45%, and 1.67% in W_R, W_P, W_F1, and Kappa, respectively, while the model complexity is greatly reduced: the number of parameters is about one fifth of that of the baseline network, which is why it is called a lightweight network. Second, the method has clear advantages in both performance and complexity over other networks that add attention mechanisms to a ResNet backbone, including SE_ResNet50 and SE_ResNeXt50. MobileNetV3 is a typical representative of lightweight networks, and the proposed method achieves significantly better performance than MobileNet_V3_Small with a similar number of model parameters.
In summary, a novel end-to-end approach to keratoconus severity grading has been implemented and validated. With the proposed self-attention-based feature extraction block SaFEB and multi-level feature fusion module MlFFM, the LKG-Net overcomes the shortcomings of traditional models, such as insufficient feature extraction and fusion and excessive model complexity. The experimental results show that the designed LKG-Net can effectively distinguish keratoconus of different severities, which helps ophthalmologists diagnose and treat patients.
Referring to FIG. 5, which is a block diagram of an automatic keratoconus grading device according to an embodiment of the present invention, the specific device may include:
the input module 100, configured to acquire multiple corneal topographic maps of the target eye, normalize them to obtain target input features, and input the target input features into a pre-trained automatic keratoconus grading model;
the multi-stage feature extraction module 200, configured to input the target input features into n cascaded feature extraction modules for further feature extraction, wherein the processing in each feature extraction module is as follows:
after the output features of the previous-stage feature extraction module are processed by a batch normalization layer and a ReLU activation function, a first feature F_q, a second feature F_k, and a third feature F_v are obtained through three separate 3×3 convolutions; the average value and maximum value of the corresponding dimension are computed for F_q and for F_k; the product of the two average values and the product of the two maximum values are added to obtain an attention weight matrix; the third feature is multiplied by the attention weight matrix to obtain the attention-weighted features; and the attention-weighted features are added to a 1×1 convolution of the output features of the previous-stage module to obtain the output features of the current-stage feature extraction module;
the feature fusion module 300, configured to adjust the importance of the output features of the n-th-stage feature extraction module and of the output features of the (n−2)-th-stage feature extraction module whose dimension and size have been adjusted by a 1×1 convolution to match the n-th stage, fuse the two recalibrated features according to a preset ratio, and finally obtain target output features through a 1×1 convolution;
and the automatic grading module 400, configured to process the target output features through the fully connected layer and the Softmax function to obtain the grading result.
The automatic keratoconus grading device of this embodiment is used to implement the aforementioned automatic keratoconus grading method, so its specific implementation follows the embodiment descriptions of that method: the input module 100, the multi-stage feature extraction module 200, the feature fusion module 300, and the automatic grading module 400 implement steps S101, S102, S103, and S104, respectively, and their details therefore refer to the descriptions of the corresponding embodiments and are not repeated here.
An embodiment of the invention also provides an ophthalmic imaging device into which the above automatic keratoconus grading device is integrated to realize automatic examination.
A specific embodiment of the invention also provides a computer-readable storage medium having a computer program stored thereon, the computer program, when executed by a processor, implementing the steps of the above automatic keratoconus grading method.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above examples are given by way of illustration only and do not limit the embodiments. Other variations and modifications will be apparent to those of ordinary skill in the art from the foregoing description; it is neither necessary nor possible to enumerate all embodiments here. Obvious variations and modifications derived from the above remain within the protection scope of the invention.
Claims (10)
1. An automated keratoconus grading method, comprising:
obtaining multiple corneal topographic maps of a target eye and normalizing them to obtain target input features, and inputting the target input features into a pre-trained automatic keratoconus grading model;
inputting the target input features into n cascaded feature extraction modules for further feature extraction, wherein the processing in each feature extraction module is as follows:
after the output features of the previous-stage feature extraction module are processed by a batch normalization layer and a ReLU activation function, obtaining a first feature F_q, a second feature F_k, and a third feature F_v through three separate 3×3 convolutions; computing the average value and maximum value of the corresponding dimension for F_q and for F_k; adding the product of the two average values and the product of the two maximum values to obtain an attention weight matrix; multiplying the third feature by the attention weight matrix to obtain the attention-weighted features; and adding the attention-weighted features to a 1×1 convolution of the output features of the previous-stage module to obtain the output features of the current-stage feature extraction module;
inputting the output features of the n-th-stage feature extraction module, together with the output features of the (n−2)-th-stage feature extraction module whose dimension and size have been adjusted by a 1×1 convolution to match the n-th stage, into a feature fusion module, wherein the processing in the feature fusion module is as follows:
adjusting the importance of each of the two input features through a channel attention mechanism, fusing the two recalibrated input features according to a preset ratio, and finally obtaining target output features through a 1×1 convolution;
and processing the target output features through a fully connected layer and a Softmax function to obtain a grading result.
2. The automated keratoconus grading method of claim 1, wherein the 3×3 convolutions are 3×3 depthwise separable convolutions.
3. The automated keratoconus grading method according to claim 1, wherein adjusting the importance through the channel attention mechanism comprises:
passing the input features through a max pooling layer and an average pooling layer, respectively, to obtain max-pooled features and average-pooled features;
passing the max-pooled features and the average-pooled features through two weight-shared 1×1 convolutions and a ReLU activation function, adding the results, and generating channel attention weights in the range 0–1 through a sigmoid activation function;
and multiplying the channel attention weights by the input features to adjust their importance.
4. The automated keratoconus grading method of claim 3, wherein the sigmoid function is a Hard-sigmoid function.
5. The automated keratoconus grading method according to claim 1, wherein acquiring the multiple corneal topographic maps of the target eye and performing normalization comprises:
acquiring an examination report of the target eye in PDF format;
extracting multiple corneal topographic maps from the examination report by PDF file parsing, the maps comprising a corneal thickness map, an anterior surface tangential map, an anterior surface elevation map, a posterior surface tangential map, and a posterior surface elevation map;
and downsampling the five corneal topographic maps to the same size using bilinear interpolation and normalizing them.
6. The automated keratoconus grading method according to claim 1, wherein the training method of the automated keratoconus grading model comprises:
acquiring corneal topographic map data of a plurality of eyes and labeling them as normal, mild, moderate, or severe to obtain a training set;
and training the keratoconus automatic grading model by using a training set until the loss function converges.
7. The automated keratoconus grading method according to claim 6, wherein the loss function is:

L_m = -(1/N) Σ_{i=1}^{N} Σ_{k=1}^{K} y_{i,k} · log(p_{i,k})

where L_m denotes the cross-entropy loss of the multi-class task, y_{i,k} indicates whether the true label of the i-th sample is the k-th label value, there are K label values and N samples in total, and p_{i,k} denotes the probability that the i-th sample is predicted as the k-th label value.
8. An automatic keratoconus grading device, comprising:
the input module, configured to acquire multiple corneal topographic maps of the target eye, normalize them to obtain target input features, and input the target input features into a pre-trained automatic keratoconus grading model;
the multi-stage feature extraction module, configured to input the target input features into n cascaded feature extraction modules for further feature extraction, wherein the processing in each feature extraction module is as follows:
after the output features of the previous-stage feature extraction module are processed by a batch normalization layer and a ReLU activation function, obtaining a first feature F_q, a second feature F_k, and a third feature F_v through three separate 3×3 convolutions; computing the average value and maximum value of the corresponding dimension for F_q and for F_k; adding the product of the two average values and the product of the two maximum values to obtain an attention weight matrix; multiplying the third feature by the attention weight matrix to obtain the attention-weighted features; and adding the attention-weighted features to a 1×1 convolution of the output features of the previous-stage module to obtain the output features of the current-stage feature extraction module;
the feature fusion module, configured to adjust the importance of the output features of the n-th-stage feature extraction module and of the output features of the (n−2)-th-stage feature extraction module whose dimension and size have been adjusted by a 1×1 convolution to match the n-th stage, fuse the two recalibrated features according to a preset ratio, and finally obtain target output features through a 1×1 convolution;
and the automatic grading module, configured to process the target output features through a fully connected layer and a Softmax function to obtain a grading result.
9. An ophthalmic imaging apparatus comprising an automated keratoconus grading device according to claim 8.
10. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the automated keratoconus grading method according to any one of claims 1 to 7.
Priority Application

- CN202211552123.7A, filed 2022-12-05 (priority date 2022-12-05): Automatic keratoconus grading method, device and storage medium

Publications

- CN116246331A (application), published 2023-06-09
- CN116246331B (grant), published 2024-08-16
Legal Events

- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant