WO2023272876A1 - A method for diagnosing binocular keratoconus based on multimodal data - Google Patents
A method for diagnosing binocular keratoconus based on multimodal data
- Publication number: WO2023272876A1
- Application: PCT/CN2021/110812
- Authority: WO (WIPO PCT)
Classifications
- G06T7/0012 — Biomedical image inspection
- A61B3/0025 — Eye-testing apparatus characterised by electronic signal processing, e.g. eye models
- A61B3/107 — Instruments for determining the shape or measuring the curvature of the cornea
- G06F18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/2413 — Classification techniques based on distances to training or reference patterns
- G06N3/045 — Combinations of neural networks
- G06N3/08 — Neural network learning methods
- G06T7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
- G16H50/20 — ICT for computer-aided diagnosis, e.g. based on medical expert systems
- G16H50/50 — ICT for simulation or modelling of medical disorders
- G06T2207/10101 — Optical tomography; optical coherence tomography [OCT]
- G06T2207/20081 — Training; learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30041 — Eye; retina; ophthalmic
Definitions
- The invention relates to ophthalmic diagnostic data technology, and in particular to a method for diagnosing binocular keratoconus based on multimodal data.
- Keratoconus is a clinical ophthalmic disease characterized by corneal ectasia, central thinning, forward protrusion, and a conical shape. It is a contraindication to refractive surgery, occurs in one or both eyes, and usually causes significant loss of vision. Keratoconus often begins on the posterior corneal surface and then slowly progresses to the anterior surface.
- The diagnosis and treatment of keratoconus now involve close cooperation among the corneal disease, refractive surgery, and optometry subspecialties.
- Corneal-topography-based diagnosis of keratoconus is a clinical statistical method: the morphological characteristics of the topographic maps are combined with clinical parameters and medical history to diagnose and clinically stage the disease.
- The result of such a statistical model is a summary parameter whose decision boundary is derived from a data set with known diagnoses, e.g. the widely used KISA index, I-S index, and SRI/SAI indices.
- Methods of this type are largely limited by platform data, rely too heavily on individually identified monocular features, and ignore the relationship between the two eyes; different indices differ in sensitivity and specificity, and none of them reliably identifies early keratoconus or forme fruste keratoconus.
- To address this, a method for diagnosing binocular keratoconus based on multimodal data is proposed. Based on multimodal refractive topography matrix data, it combines a convolutional neural network method, an eigenvalue SVM method, a binocular comparison method, and an enhanced topography method with an adjustable BFS (best-fit sphere), giving a comprehensive binocular keratoconus diagnosis with particularly good robustness and accuracy for screening and diagnosing early posterior keratoconus and forme fruste keratoconus.
- The technical solution of the present invention is a method for diagnosing binocular keratoconus based on multimodal data, specifically comprising the following steps:
- The data include the corneal refraction four maps and corneal absolute height data for both eyes.
- The refraction four maps comprise the axial curvature of the anterior corneal surface, the relative height topography of the anterior corneal surface, the relative height topography of the posterior corneal surface, and the corneal thickness topography.
- The corneal absolute height data comprise absolute height data of the anterior and posterior corneal surfaces.
- Branch A method: after data processing, all refraction four-map data matrices are fed to a deep convolutional classification network, which identifies keratoconus with a given sensitivity and specificity and outputs a classification result P(A) for a given case.
- Branch B method: eigenvalues are computed for each map data matrix in the refraction four maps and fed to an SVM (support vector machine) binary classifier, which identifies keratoconus and outputs a classification result P(B) for a given case.
- Branch C method: the absolute height data of the anterior and posterior corneal surfaces are compared with best-fit sphere data to obtain the critical threshold separating keratoconus cases from normal cases, from which the classification result P(C) for a given case is judged.
- Branch D method: the mean, maximum, and standard deviation of the left- and right-eye refraction four-map data matrices are used as feature quantities; either the critical threshold or an SVM classifier is used to obtain the optimal sensitivity and specificity and the probability P(D) of keratoconus for a given case.
- The branch A method comprises the following specific steps:
- A-1 Data scaling: all refraction four-map data matrices processed in step 3) are scaled to 224x224 by linear interpolation.
- A-2 Data normalization: the data from step A-1 are split 7:3 into a training set and a validation set; the mean and standard deviation of each of the four maps are computed on the training set, giving 4 means and 4 standard deviations, and the refraction four-map data matrices of all cases are then normalized with these statistics.
- A-3 Classification network design: a ResNet50 classification network performs binary classification on the refraction four-map data matrices to distinguish normal from keratoconic single eyes.
- A-4 Classification model training: the refraction four-map data matrices are concatenated along the channel dimension to give a 4-channel input; data augmentation uses rotation, translation, and random blurring; the loss function is the binary cross-entropy; the training weights of MobileNetV3 on the IMAGENET data set are used as initial weights, followed by fine-tuning; the training weights with the smallest gap between training-set and validation-set loss are selected as the training result.
- A-5 Model evaluation: predictions on the validation set are compared with the ground truth, yielding the sensitivity and specificity of branch A in identifying keratoconus.
- A-6 Result output: if the sensitivity and specificity from step A-5 meet the requirements, the probabilities of keratoconus in the two eyes of a given case are recorded as p(Al) and p(Ar).
- Output classification result: P(A) = p(Al) if p(Al) > p(Ar); otherwise P(A) = p(Ar).
- The branch B method comprises the following specific steps:
- B-6 Corneal volume: the corneal thickness data matrix is integrated over a radius of 4.5 to obtain the corneal volume.
- B-7: all eigenvalues from steps B-1 to B-6 are normalized, and the normalized eigenvalues of all cases are split 7:3 into a training set and a validation set.
- B-8: the SVM support vector machine binary classification method is trained on the normalized training-set features from B-7, choosing an RBF kernel and using cross-validation and grid search to find the optimal c and g.
- B-10 Result output: if the sensitivity and specificity from step B-9 meet the requirements, the probabilities of keratoconus in the two eyes of a given case are recorded as p(Bl) and p(Br).
- Output classification result: P(B) = p(Bl) if p(Bl) > p(Br); otherwise P(B) = p(Br).
- The branch C method comprises the following specific steps:
- C-1 Standard relative height data of the anterior surface: from the absolute height data of the anterior and posterior corneal surfaces, the absolute height data within an 8 mm diameter of the anterior surface are sphere-fitted to obtain the BFS value; the height difference between the anterior-surface data and the resulting best-fit sphere is taken as the standard relative height data of the anterior surface.
- C-2 Feature height data of the anterior surface: starting from the current BFS and using a step of 0.2 mm, the value is shifted up and down by 5 steps to obtain 11 different BFS values; the height differences between the anterior-surface data and the corresponding best-fit spheres are taken as the anterior-surface feature relative height data.
- C-3 Enhanced height data of the anterior surface: the standard relative height data are differenced with the 11 sets of feature relative height data to obtain 11 sets of enhanced anterior-surface data.
- C-4 Standard relative height data of the posterior surface: the absolute height data within an 8 mm diameter of the posterior surface are sphere-fitted to obtain the BFS value; the height difference between the posterior-surface data and the best-fit sphere is taken as the standard relative height data of the posterior surface.
- C-5 Feature height data of the posterior surface: using the absolute height data within an 8 mm diameter of the posterior surface, the data within a 2 mm radius of the thinnest point are removed before sphere fitting to obtain the BFS value; starting from this BFS and using a step of 0.2 mm, the value is shifted up and down by 5 steps to obtain 11 different BFS values; the height differences between the posterior-surface data and the corresponding best-fit spheres are taken as the posterior-surface feature relative height data.
- C-6 Enhanced height data of the posterior surface: the posterior-surface standard relative height data from step C-4 are differenced with the 11 sets of feature relative height data from C-5 to obtain 11 sets of enhanced posterior-surface data.
- C-7: taking the 22 enhanced anterior- and posterior-surface data matrices from steps C-3 and C-6 as features, the critical threshold between keratoconus cases and normal cases is computed for each group over all sample data.
- The branch D method comprises the following specific steps:
- D-1 Unify data orientation: the right-eye refraction four-map data matrices are mirrored along the column direction so that the nasal and temporal orientations of the left- and right-eye matrices coincide.
- D-2 Refraction four-map diff matrices: the left- and right-eye refraction four-map data matrices are differenced point by point and the absolute value is taken, giving the refraction four-map diff data matrices.
- D-3 Diff feature calculation: the mean, maximum, and standard deviation of all data within a 6 mm diameter of each refraction four-map diff matrix are computed as feature quantities.
- D-4: taking the mean, maximum, and standard deviation of the 12 groups of left/right corneal refraction four-map diff maps from step D-3 as features, the critical threshold between keratoconus cases and normal cases is computed for each group over all sample data; alternatively, the SVM classification method is applied to the normalized diff features for training and testing, giving the optimal sensitivity and specificity.
- The multimodal-data-based binocular keratoconus diagnosis method of the present invention considers the relationship between the two eyes. It combines a deep convolutional network, the traditional machine-learning SVM method, and an adjustable-BFS enhanced height-map method to identify lesions, balancing sensitivity and specificity and judging the incidence of keratoconus comprehensively, in multiple dimensions, with the patient as the unit. Because it combines binocular data and uses both manually selected features and features learned from large data sets by deep networks, the diagnostic method is more robust and accurate.
- Fig. 1 is a flow chart of the multimodal-data-based binocular keratoconus diagnosis method of the present invention.
- Figure 2 shows the corneal refraction four maps.
- Figure 3 shows the absolute height maps of the anterior and posterior corneal surfaces.
- Fig. 4 is a diagram of the classification network structure used in the inventive method.
- Figure 5 shows the diff maps of the corneal refraction four maps for the left and right eyes.
- The flow of the multimodal-data-based binocular keratoconus diagnosis method shown in Figure 1 specifically includes the following steps:
- The data include the corneal refraction four maps and the corneal absolute height data. The refraction four maps are obtained with a three-dimensional anterior segment analyzer or an anterior segment OCT measurement device and, as shown in Figure 2, comprise the axial curvature of the anterior corneal surface, the relative height topography of the anterior surface, the relative height topography of the posterior surface, and the corneal thickness topography; each topographic map is sampled with a data step of 0.02 mm (the four maps of the refraction four maps are called topographic maps).
- Four branch methods are used to judge early keratoconus in both eyes, recorded respectively as branch A, branch B, branch C, and branch D.
- Branch A takes a single case as its unit, and every datum in the full refraction four-map matrices participates in the computation, preserving as far as possible all factors that may affect the judgment of keratoconus; the aim is to judge lesions from massive data without depending on subjectively selected human features, thereby enhancing the specificity of lesion identification.
- A-1 Data scaling: all refraction four-map data matrices processed in step 3 are scaled to 224x224 by linear interpolation.
- A-2 Data normalization: the data from step A-1 are split 7:3 into a training set and a validation set; the mean and standard deviation of each of the four maps are computed on the training set, giving 4 means and 4 standard deviations, and the refraction four-map data matrices of all cases are then normalized with these statistics.
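Step A-2 can be sketched in a few lines of numpy. This is a minimal illustration, not the patented implementation: the case count, the 141x141 grid size, and the random data are assumptions; only the 7:3 split and the per-map training-set statistics come from the text.

```python
import numpy as np

# Hypothetical illustration of step A-2: per-map normalization of the
# refraction four-map data. Shapes and values are assumed for the sketch.
rng = np.random.default_rng(0)
# 10 cases, 4 maps each, on an assumed 141x141 grid
cases = rng.normal(size=(10, 4, 141, 141))

n_train = int(len(cases) * 0.7)          # 7:3 train/validation split
train, val = cases[:n_train], cases[n_train:]

# One mean and one standard deviation per map, computed on training data only
means = train.mean(axis=(0, 2, 3), keepdims=True)   # shape (1, 4, 1, 1)
stds = train.std(axis=(0, 2, 3), keepdims=True)

train_norm = (train - means) / stds
val_norm = (val - means) / stds          # validation uses training statistics
```

Note that the validation set is normalized with the training-set statistics, which is what "normalize the four-map data matrices of all cases with the mean and standard deviation" implies.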
- In the ResNet50 classification model, the backbone network is kept unchanged; only the input channels of the first convolutional layer are changed to 4, and the output dimension of the final fully connected layer is changed to 2.
- the network structure is shown in Figure 4.
- The refraction four-map data matrices are concatenated along the channel dimension, giving a 4-channel input.
- Data augmentation uses preprocessing such as rotation, translation, and random blurring.
- The loss function is the binary cross-entropy.
- A-5 Model evaluation: predictions on the validation set are compared with the ground truth, yielding the sensitivity and specificity of the branch A method in identifying keratoconus.
- A-6 Result output: if the sensitivity and specificity from step A-5 meet the requirements, the probabilities of keratoconus in the two eyes of a given case are recorded as p(Al) and p(Ar).
- Output classification result: P(A) = p(Al) if p(Al) > p(Ar); otherwise P(A) = p(Ar).
- Branch B takes a single case as its unit, manually defines features that directly reflect the lesion, and then applies the supervised learning method SVM to classify whether a lesion is present; efficiency and sensitivity are maintained even with small sample data.
- The corneal thickness data matrix is integrated over a radius of 4.5 to obtain the corneal volume.
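The volume integration of step B-6 can be sketched as a Riemann sum over the thickness map. This is an illustrative assumption: the uniform 0.55 mm toy thickness and the interpretation of "radius 4.5" as millimetres are not stated in the source; the 0.02 mm grid step follows the sampling mentioned in the description.

```python
import numpy as np

# Hypothetical sketch of step B-6: integrate the corneal thickness map over
# a disc of radius 4.5 (mm assumed) to approximate corneal volume.
step = 0.02                                   # mm per sample (from the text)
thickness = np.full((601, 601), 0.55)         # toy map: uniform 0.55 mm

n = thickness.shape[0]
c = (n - 1) / 2
y, x = np.mgrid[0:n, 0:n]
r = np.hypot(x - c, y - c) * step             # radial distance in mm
mask = r <= 4.5
volume = thickness[mask].sum() * step**2      # mm^3, Riemann-sum approximation
```

For the uniform toy map, the result approaches the analytic value pi * 4.5^2 * 0.55 mm^3 as the grid is refined.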
- B-7: all eigenvalues from steps B-1 to B-6 are normalized, and the normalized eigenvalues of all cases are split 7:3 into a training set and a validation set.
- B-8 SVM model training: the SVM support vector machine binary classification method is applied to the normalized training-set features from B-7, choosing an RBF kernel (radial basis function kernel) and using cross-validation and grid search to find the optimal c and g for training.
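Step B-8 can be sketched with scikit-learn. This is a minimal sketch under stated assumptions: the synthetic features and labels stand in for the normalized eigenvalues of B-1 to B-7, and the parameter grid values are illustrative, not the ones used in the patent.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Hypothetical sketch of step B-8: RBF-kernel SVM with cross-validated
# grid search over C and gamma (the "c and g" of the text).
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 6))                  # stand-in eigenvalue features
y = (X[:, 0] + X[:, 1] > 0).astype(int)       # toy binary labels

grid = GridSearchCV(
    SVC(kernel="rbf", probability=True),      # probability=True yields p(Bl), p(Br)
    param_grid={"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]},
    cv=3,                                     # cross-validation
)
grid.fit(X, y)
best_c = grid.best_params_["C"]
best_g = grid.best_params_["gamma"]
probs = grid.predict_proba(X[:2])             # per-eye keratoconus probability
```

Enabling `probability=True` makes the classifier return calibrated class probabilities, which is how the per-eye values p(Bl) and p(Br) of step B-10 could be obtained.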
- Branch C upgrades the traditional Belin method, with a single case as its unit: the highly enhanced data fully reflect the lesion characteristics of the anterior and posterior corneal surfaces, and varying the BFS value raises the feature dimension, improving the specificity of the statistical method and reducing the false positive rate.
- C-1 Standard relative height data of the anterior surface: from the absolute height data of the anterior and posterior corneal surfaces, the absolute height data within an 8 mm diameter of the anterior surface are sphere-fitted to obtain the BFS (best-fit sphere) value; the height difference between the anterior-surface data and the resulting best-fit sphere is taken as the standard relative height data of the anterior surface.
- C-2 BFS values: starting from the current BFS and using a step of 0.2 mm, the value is shifted up and down by 5 steps to obtain 11 different BFS values.
- The height differences between the anterior-surface data and the different BFS best-fit spheres are taken as the anterior-surface feature relative height data.
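The BFS fit underlying steps C-1 and C-2 can be sketched as an algebraic least-squares sphere fit. This is an assumed illustration: the synthetic spherical cap (radius 7.8, centered at the origin) stands in for real corneal height data, and the linearized fitting method is one common choice, not necessarily the one used in the patent.

```python
import numpy as np

# Hypothetical sketch of the BFS (best-fit sphere) computation. Rewriting
#   x^2 + y^2 + z^2 = 2a*x + 2b*y + 2c*z + d
# makes the fit linear in (a, b, c, d); the center is (a, b, c) and the
# radius is sqrt(d + a^2 + b^2 + c^2).
def fit_sphere(pts):
    A = np.c_[2 * pts, np.ones(len(pts))]
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, d = sol[:3], sol[3]
    radius = np.sqrt(d + center @ center)
    return center, radius

# Synthetic cap of a sphere of radius 7.8, sampled over an 8 mm zone
rng = np.random.default_rng(0)
xy = rng.uniform(-4, 4, size=(500, 2))
z = np.sqrt(7.8 ** 2 - (xy ** 2).sum(axis=1))
center, bfs = fit_sphere(np.c_[xy, z])

# 11 candidate BFS values: the fitted radius shifted +/- 5 steps of 0.2 mm
bfs_family = bfs + 0.2 * np.arange(-5, 6)
```

The enhanced height maps of step C-3 would then be differences between the height data referred to the standard BFS and to each member of `bfs_family`.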
- C-3 Enhanced height data of the anterior corneal surface: the standard relative height data are differenced with the 11 sets of feature relative height data to obtain 11 sets of enhanced anterior-surface data.
- C-4 Standard relative height data of the posterior surface: the absolute height data within an 8 mm diameter of the posterior surface are sphere-fitted to obtain the BFS value; the height difference between the posterior-surface data and the best-fit sphere is taken as the standard relative height data of the posterior surface.
- C-5 Feature height data of the posterior surface: using the absolute height data within an 8 mm diameter of the posterior surface, the data within a 2 mm radius of the thinnest point are removed before sphere fitting to obtain the BFS value; starting from this BFS and using a step of 0.2 mm, the value is shifted up and down by 5 steps to obtain 11 different BFS values; the height differences between the posterior-surface data and the corresponding best-fit spheres are taken as the posterior-surface feature relative height data.
- C-7: taking the 22 enhanced anterior- and posterior-surface data matrices from steps C-3 and C-6 as features, the critical threshold between keratoconus cases and normal cases is computed for each group over all sample data.
- Branch D takes the binocular case as its unit, combining the left- and right-eye refraction map data; by extracting the difference data of the binocular topographic maps it reflects the characteristics of the lesion itself, improving recognition accuracy for keratoconus patients as individuals.
- D-1 Unify data orientation: the right-eye refraction four-map data matrices are mirrored along the column direction so that the nasal and temporal orientations of the left- and right-eye matrices coincide.
- D-2 Refraction four-map diff matrices: the left- and right-eye refraction four-map data matrices are differenced point by point and the absolute value is taken, giving the refraction four-map diff data matrices, as shown in Figure 5.
- D-3 Diff feature calculation: the mean, maximum, and standard deviation of all data within a 6 mm diameter of each refraction four-map diff matrix are computed as feature quantities.
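Steps D-1 to D-3 can be sketched with numpy. This is an illustrative assumption: the 301x301 grid, the random stand-in maps, and the reuse of the 0.02 mm sampling step are not specified for branch D in the source; only the mirroring, absolute difference, 6 mm zone, and three statistics come from the text.

```python
import numpy as np

# Hypothetical sketch of steps D-1..D-3: mirror the right-eye map along the
# column direction, take the point-to-point absolute difference with the
# left-eye map, and compute mean / max / std inside a 6 mm diameter zone.
step = 0.02                                   # mm per sample (assumed grid)
rng = np.random.default_rng(0)
left = rng.normal(size=(301, 301))            # stand-in left-eye map
right = rng.normal(size=(301, 301))           # stand-in right-eye map

right_mirrored = right[:, ::-1]               # unify nasal/temporal orientation
diff = np.abs(left - right_mirrored)          # diff map matrix

n = diff.shape[0]
c = (n - 1) / 2
y, x = np.mgrid[0:n, 0:n]
mask = np.hypot(x - c, y - c) * step <= 3.0   # 6 mm diameter => 3 mm radius
features = (diff[mask].mean(), diff[mask].max(), diff[mask].std())
```

Applied to each of the four maps, this yields the 12 feature groups (4 maps x 3 statistics) used in step D-4.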
- D-4: taking the mean, maximum, and standard deviation of the 12 groups of left/right corneal refraction four-map diff maps from step D-3 as features, the critical threshold between keratoconus cases and normal cases is computed for each group over all sample data; alternatively, the SVM classification method is applied to the normalized diff features for training and testing, giving the optimal sensitivity and specificity.
- The weights of the branch methods should be obtained according to the target requirements, so that the advantages of each branch are fully exploited, an optimal balance between sensitivity and specificity is achieved, and, while ensuring robustness, the false negative and false positive rates are minimized.
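The final weighted cumulative sum of step 5) can be sketched as follows. The weight values are purely illustrative assumptions; the patent only requires that the weights be tuned to balance sensitivity and specificity.

```python
# Hypothetical sketch of step 5): combine the four branch outputs P(A)..P(D)
# by a weighted cumulative sum. The weights below are illustrative only.
def combine(p_a, p_b, p_c, p_d, weights=(0.4, 0.2, 0.2, 0.2)):
    """Weighted per-patient keratoconus probability from branches A-D."""
    assert abs(sum(weights) - 1.0) < 1e-9     # keep the result a probability
    return sum(w * p for w, p in zip(weights, (p_a, p_b, p_c, p_d)))

p_final = combine(0.9, 0.7, 0.6, 0.8)
```

Constraining the weights to sum to 1 keeps the combined output interpretable as a probability; relaxing that constraint would require a separate calibration step.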
Claims (5)
- 1. A method for diagnosing binocular keratoconus based on multimodal data, characterized by specifically comprising the following steps: 1) collecting binocular multimodal data, the data including the binocular corneal refraction four maps and corneal absolute height data, the refraction four maps comprising the axial curvature of the anterior corneal surface, the relative height topography of the anterior corneal surface, the relative height topography of the posterior corneal surface, and the corneal thickness topography, and the corneal absolute height data comprising the absolute height data of the anterior and posterior corneal surfaces; 2) for already classified cases, associating their binocular multimodal data with the classification categories, and classifying the data according to the requirements; 3) unifying each topographic map of the refraction four maps of step 2) and the anterior/posterior corneal surface height data into data matrices of the same size; 4) on the basis of these data, judging early keratoconus in both eyes by four branch methods, recorded respectively as branch A, branch B, branch C, and branch D, wherein: branch A: after data processing, all refraction four-map data matrices are fed to a deep convolutional classification network to identify keratoconus with a given sensitivity and specificity, outputting a classification result P(A) for a given case; branch B: eigenvalues are computed for each map data matrix in the refraction four maps and fed to an SVM support vector machine binary classifier to identify keratoconus, outputting a classification result P(B) for a given case; branch C: the absolute height data of the anterior and posterior corneal surfaces are compared with best-fit sphere data to obtain the critical threshold between keratoconus cases and normal cases, from which the classification result P(C) for a given case is judged; branch D: the mean, maximum, and standard deviation of the left- and right-eye refraction four-map data matrices are used as feature quantities, and the critical threshold or the SVM classification method is used to obtain the optimal sensitivity and specificity and the probability P(D) of keratoconus in the two eyes of a given case; 5) the final results of branches A, B, C, and D are combined by weighted cumulative summation to obtain the final probability of binocular keratoconus for a given case.
- The binocular keratoconus diagnosis method based on multi-modal data according to claim 1, characterized in that branch A comprises the following steps:
  A-1 Data scaling: scale all refractive four-map data matrices produced by step 3) to 224×224 by linear interpolation.
  A-2 Data normalization: split the data of step A-1 into a training set and a validation set at a 7:3 ratio; compute the mean and standard deviation of each of the four refractive map data matrices on the training set, yielding 4 means and 4 standard deviations; then normalize the four-map data matrices of all cases with these means and standard deviations.
  A-3 Classification network design: a ResNet50 classification network performs binary classification on the refractive four-map data matrices to distinguish normal from keratoconic single eyes.
  A-4 Model training: the four map matrices are concatenated along the channel dimension to form a 4-channel input; data augmentation uses rotation, translation and random-blur preprocessing; the loss function is the binary cross-entropy; MobileNetV3 weights trained on the ImageNet dataset are used as initial weights, followed by fine-tuning; the weights with the smallest gap between training-set and validation-set loss are kept as the training result.
  A-5 Model evaluation: predictions on the validation set are compared against the ground truth to obtain the sensitivity and specificity of branch A in identifying keratoconus.
  A-6 Result output: if the sensitivity and specificity measured in step A-5 meet the requirement, the probabilities of keratoconus in the case's two eyes are recorded as p(Al) and p(Ar), and the classification result is output as P(A) = p(Al) if p(Al) > p(Ar), and P(A) = p(Ar) otherwise.
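Steps A-2 and A-6 can be illustrated with a short numpy sketch; the array shapes and the per-eye probabilities below are assumptions for illustration, not values from the claim:

```python
import numpy as np

# Illustrative sketch of steps A-2 and A-6 (shapes are assumptions):
# per-map normalisation statistics are computed on the training set only,
# then applied to every case; the branch output is the larger of the two
# per-eye probabilities.

rng = np.random.default_rng(0)
# 10 training cases, 4 refractive maps, each resized to 224x224 (step A-1).
train = rng.normal(size=(10, 4, 224, 224))

# Step A-2: one mean and one standard deviation per map, over the training set.
means = train.mean(axis=(0, 2, 3), keepdims=True)   # shape (1, 4, 1, 1)
stds = train.std(axis=(0, 2, 3), keepdims=True)
train_norm = (train - means) / stds

# Step A-6: P(A) is the larger per-eye probability (assumed example values).
p_al, p_ar = 0.35, 0.72
p_a = max(p_al, p_ar)
```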
- The binocular keratoconus diagnosis method based on multi-modal data according to claim 1, characterized in that branch B comprises the following steps:
  B-1 Anterior axial curvature features: in the anterior-surface axial curvature data matrix, compute the maximum curvature and its coordinates, the inferior-superior refractive power difference (IS value) at the 6 mm diameter, and the surface regularity index (SRI) and surface asymmetry index (SAI) within the 4.5 mm diameter.
  B-2 Anterior relative height feature: compute the maximum height and its coordinates in the anterior-surface relative height data matrix.
  B-3 Posterior relative height feature: compute the maximum height and its coordinates in the posterior-surface relative height data matrix.
  B-4 Thickness features: compute the minimum thickness and its coordinates in the corneal thickness data matrix, and the thickness at the corneal apex.
  B-5 Distance features: compute the distance between the anterior height maximum of B-2 and the posterior height maximum of B-3, between the anterior height maximum of B-2 and the thickness minimum of B-4, and between the posterior height maximum of B-3 and the thickness minimum of B-4.
  B-6 Corneal volume feature: integrate the corneal thickness data matrix over a 4.5 mm radius to obtain the corneal volume.
  B-7 Normalize all feature values of steps B-1 to B-6 and split the normalized case features into a training set and a validation set at a 7:3 ratio.
  B-8 Train a binary SVM classifier on the normalized training features of B-7 using an RBF kernel, with cross-validation and grid search to find the optimal c and g for training.
  B-9 Model evaluation: predictions on the validation set are compared against the ground truth to obtain the sensitivity and specificity of branch B in identifying keratoconus.
  B-10 Result output: if the sensitivity and specificity of step B-9 meet the requirement, the probabilities of keratoconus in the case's two eyes are recorded as p(Bl) and p(Br), and the classification result is output as P(B) = p(Bl) if p(Bl) > p(Br), and P(B) = p(Br) otherwise.
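The B-6 corneal-volume feature amounts to integrating the thickness map over a 4.5 mm-radius disc. A sketch under assumed grid size and sampling pitch (the helper `corneal_volume` is hypothetical, not from the patent):

```python
import numpy as np

# Sketch of the B-6 corneal-volume feature: sum thickness * pixel area over
# a 4.5 mm-radius disc centred on the map. Grid size and pixel pitch are
# assumptions; a real tomographer export defines its own sampling.

def corneal_volume(thickness_mm, pixel_mm, radius_mm=4.5):
    """Approximate the corneal volume (mm^3) within radius_mm of the centre."""
    n = thickness_mm.shape[0]
    c = (n - 1) / 2.0
    y, x = np.mgrid[0:n, 0:n]
    r = np.hypot(x - c, y - c) * pixel_mm       # radial distance in mm
    mask = r <= radius_mm
    return float(thickness_mm[mask].sum() * pixel_mm ** 2)

# Sanity check: a uniform 0.55 mm thickness on a 10 mm x 10 mm grid should
# give roughly pi * 4.5^2 * 0.55 ≈ 35.0 mm^3.
grid = np.full((141, 141), 0.55)
vol = corneal_volume(grid, pixel_mm=10.0 / 140)
```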
- The binocular keratoconus diagnosis method based on multi-modal data according to claim 1, characterized in that branch C comprises the following steps:
  C-1 Anterior standard relative height data: from the corneal absolute height data, fit a sphere to the anterior-surface absolute heights within the 8 mm diameter to obtain the BFS value; the height difference between the anterior-surface data and the resulting best-fit sphere is taken as the anterior-surface standard relative height data.
  C-2 Anterior feature height data: from the anterior-surface absolute heights within the 8 mm diameter, exclude the data within a 2 mm radius of the thinnest point and fit a sphere to obtain a BFS value; taking this BFS as the reference, offset it in steps of 0.2 mm, five steps upward and five downward, to obtain 11 different BFS values; the height differences between the anterior-surface data and these best-fit spheres are taken as the anterior-surface feature relative height data.
  C-3 Anterior enhanced height data: subtract the 11 sets of feature relative height data obtained in C-2 from the standard relative height data obtained in C-1 to obtain 11 sets of anterior-surface enhanced data.
  C-4 Posterior standard relative height data: from the corneal absolute height data, fit a sphere to the posterior-surface absolute heights within the 8 mm diameter to obtain the BFS value; the height difference between the posterior-surface data and the resulting best-fit sphere is taken as the posterior-surface standard relative height data.
  C-5 Posterior feature height data: from the posterior-surface absolute heights within the 8 mm diameter, exclude the data within a 2 mm radius of the thinnest point and fit a sphere to obtain a BFS value; taking this BFS as the reference, offset it in steps of 0.2 mm, five steps upward and five downward, to obtain 11 different BFS values; the height differences between the posterior-surface data and these best-fit spheres are taken as the posterior-surface feature relative height data.
  C-6 Posterior enhanced height data: subtract the 11 sets of feature relative height data obtained in C-5 from the posterior standard relative height data obtained in C-4 to obtain 11 sets of posterior-surface enhanced data.
  C-7 Taking the 22 sets of anterior/posterior enhanced data matrices from steps C-3 and C-6 as features, determine for each set, from all sample data, the critical threshold separating keratoconus cases from normal cases.
  C-8 Record the probabilities of keratoconus in the case's two eyes as p(Cl) and p(Cr), each obtained by weighted accumulation in which the weight of each enhanced data set is the difference between its value for the current case and the corresponding critical threshold of step C-7; the classification result is output as P(C) = p(Cl) if p(Cl) > p(Cr), and P(C) = p(Cr) otherwise.
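The repeated best-fit-sphere (BFS) computation in branch C can be realised with an ordinary least-squares sphere fit; the formulation below is one common approach, not necessarily the patented one:

```python
import numpy as np

# Least-squares sphere fit (an assumed formulation): a sphere satisfying
# x^2 + y^2 + z^2 = 2ax + 2by + 2cz + d is fitted to height samples; the
# elevation map is then the point-wise difference between the measured
# surface and the fitted sphere.

def fit_sphere(x, y, z):
    """Return centre (a, b, c) and radius of the least-squares sphere."""
    A = np.column_stack([2 * x, 2 * y, 2 * z, np.ones_like(x)])
    f = x ** 2 + y ** 2 + z ** 2
    (a, b, c, d), *_ = np.linalg.lstsq(A, f, rcond=None)
    r = np.sqrt(d + a ** 2 + b ** 2 + c ** 2)
    return np.array([a, b, c]), r

# Synthetic check: sample a cap of a sphere of radius 7.8 mm (a typical
# anterior corneal radius) and recover centre and radius.
rng = np.random.default_rng(1)
theta = rng.uniform(0, 0.5, 200)
phi = rng.uniform(0, 2 * np.pi, 200)
R = 7.8
xs = R * np.sin(theta) * np.cos(phi)
ys = R * np.sin(theta) * np.sin(phi)
zs = R * np.cos(theta)
centre, radius = fit_sphere(xs, ys, zs)
```

Running the fit on the synthetic cap recovers the 7.8 mm radius; the ±0.2 mm BFS offsets of steps C-2/C-5 would then simply vary `radius` before computing elevations.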
- The binocular keratoconus diagnosis method based on multi-modal data according to claim 1, characterized in that branch D comprises the following steps:
  D-1 Unify the data orientation: mirror the right-eye refractive four-map data matrices along the column direction so that the nasal/temporal orientations of the left-eye and right-eye four-map data matrices coincide.
  D-2 Four-map diff matrices: take the point-wise difference between the left-eye and right-eye four-map data matrices and its absolute value to obtain the four-map diff data matrices.
  D-3 Diff feature computation: within the 6 mm diameter, compute the mean, maximum and standard deviation of each four-map diff data matrix as features.
  D-4 Taking the 12 diff features of step D-3 (mean, maximum and standard deviation of each of the four maps) as features, determine for each, from all sample data, the critical threshold separating keratoconus cases from normal cases; or normalize all diff features and train and test an SVM classifier, yielding the optimal sensitivity and specificity.
  D-5 Record the probability P(D) that the case develops keratoconus in either eye, obtained by weighted accumulation in which the weight of each diff feature is the difference between its value for the current case and the corresponding critical threshold of step D-4.
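Steps D-1 to D-3 reduce to mirroring the right-eye map, taking the point-wise absolute difference, and computing statistics inside the 6 mm zone. A sketch under the assumption that nasal/temporal corresponds to the column axis of the matrix:

```python
import numpy as np

# Sketch of steps D-1..D-3 (array orientation is an assumption): the
# right-eye map is mirrored left-right so the nasal/temporal sides line up,
# the point-wise absolute difference is taken, and its mean, maximum and
# standard deviation inside the 6 mm-diameter zone become the features.

def diff_features(left_map, right_map, pixel_mm, diameter_mm=6.0):
    right_aligned = np.flip(right_map, axis=1)      # D-1: mirror columns
    diff = np.abs(left_map - right_aligned)         # D-2: |left - right|
    n = diff.shape[0]
    c = (n - 1) / 2.0
    y, x = np.mgrid[0:n, 0:n]
    mask = np.hypot(x - c, y - c) * pixel_mm <= diameter_mm / 2
    vals = diff[mask]
    return vals.mean(), vals.max(), vals.std()      # D-3

# Perfectly mirror-symmetric eyes give zero inter-eye asymmetry.
left = np.arange(121.0).reshape(11, 11)
right = np.flip(left, axis=1)
m, mx, sd = diff_features(left, right, pixel_mm=1.0)
```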
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/797,114 US11717151B2 (en) | 2021-06-28 | 2021-08-05 | Method for early diagnosis of keratoconus based on multi-modal data |
EP21925096.6A EP4365829A1 (en) | 2021-06-28 | 2021-08-05 | Binocular keratoconus diagnosis method based on multi-modal data |
KR1020227022946A KR20230005108A (ko) | 2021-06-28 | 2021-08-05 | Binocular keratoconus diagnosis method based on multi-modal data
JP2022541624A JP7454679B2 (ja) | 2021-06-28 | 2021-08-05 | Binocular keratoconus evaluation system based on multi-modal data
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110717940.2A CN113284140B (zh) | 2021-06-28 | 2021-06-28 | Binocular keratoconus diagnosis method based on multi-modal data
CN202110717940.2 | 2021-06-28 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023272876A1 (zh) | 2023-01-05 |
Family
ID=77285790
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/110812 WO2023272876A1 (zh) | 2021-08-05 | Binocular keratoconus diagnosis method based on multi-modal data |
Country Status (6)
Country | Link |
---|---|
US (1) | US11717151B2 (zh) |
EP (1) | EP4365829A1 (zh) |
JP (1) | JP7454679B2 (zh) |
KR (1) | KR20230005108A (zh) |
CN (1) | CN113284140B (zh) |
WO (1) | WO2023272876A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115880283B (zh) * | 2023-01-19 | 2023-05-30 | 北京鹰瞳科技发展股份有限公司 | Apparatus, method and computer-readable storage medium for detecting corneal type |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170357879A1 (en) * | 2017-08-01 | 2017-12-14 | Retina-Ai Llc | Systems and methods using weighted-ensemble supervised-learning for automatic detection of ophthalmic disease from images |
CN109256207A (zh) * | 2018-08-29 | 2019-01-22 | 王雁 | Method for diagnosing keratoconus cases based on XGBoost+SVM hybrid machine learning |
CN110517219A (zh) * | 2019-04-01 | 2019-11-29 | 刘泉 | Corneal topography discrimination method and system based on deep learning |
CN110717884A (zh) * | 2019-08-30 | 2020-01-21 | Wenzhou Medical University | Method for expressing irregular structural changes of the cornea with change-consistency parameters based on anterior segment tomography |
CN111160431A (zh) * | 2019-12-19 | 2020-05-15 | Zhejiang University | Keratoconus identification method and device based on multi-dimensional feature fusion |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200146812A1 (en) * | 2018-07-02 | 2020-05-14 | Gebauer-Klopotek Patent Verwaltungs-Ug | Stabilization of collagen scaffolds |
US10468142B1 (en) * | 2018-07-27 | 2019-11-05 | University Of Miami | Artificial intelligence-based system and methods for corneal diagnosis |
US10945598B2 (en) | 2019-02-06 | 2021-03-16 | Jichi Medical University | Method for assisting corneal severity identification using unsupervised machine learning |
CN111340776B (zh) * | 2020-02-25 | 2022-05-03 | Zhejiang University | Keratoconus identification method and system based on multi-dimensional feature adaptive fusion |
CN112036448B (zh) * | 2020-08-11 | 2021-08-20 | 上海鹰瞳医疗科技有限公司 | Keratoconus identification method and device |
- 2021
- 2021-06-28 CN CN202110717940.2A patent/CN113284140B/zh active Active
- 2021-08-05 EP EP21925096.6A patent/EP4365829A1/en active Pending
- 2021-08-05 US US17/797,114 patent/US11717151B2/en active Active
- 2021-08-05 KR KR1020227022946A patent/KR20230005108A/ko not_active Application Discontinuation
- 2021-08-05 WO PCT/CN2021/110812 patent/WO2023272876A1/zh active Application Filing
- 2021-08-05 JP JP2022541624A patent/JP7454679B2/ja active Active
Also Published As
Publication number | Publication date |
---|---|
US20230190089A1 (en) | 2023-06-22 |
CN113284140B (zh) | 2022-10-14 |
JP2023540651A (ja) | 2023-09-26 |
JP7454679B2 (ja) | 2024-03-22 |
KR20230005108A (ko) | 2023-01-09 |
CN113284140A (zh) | 2021-08-20 |
US11717151B2 (en) | 2023-08-08 |
EP4365829A1 (en) | 2024-05-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| ENP | Entry into the national phase | Ref document number: 2022541624; Country of ref document: JP; Kind code of ref document: A |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21925096; Country of ref document: EP; Kind code of ref document: A1 |
| WWE | Wipo information: entry into national phase | Ref document number: 2021925096; Country of ref document: EP |
| NENP | Non-entry into the national phase | Ref country code: DE |
| ENP | Entry into the national phase | Ref document number: 2021925096; Country of ref document: EP; Effective date: 20240129 |