WO2023272876A1 - A binocular keratoconus diagnosis method based on multimodal data - Google Patents

A binocular keratoconus diagnosis method based on multimodal data

Info

Publication number
WO2023272876A1
WO2023272876A1 (PCT/CN2021/110812, CN2021110812W)
Authority
WO
WIPO (PCT)
Prior art keywords
data
corneal
cornea
height
keratoconus
Prior art date
Application number
PCT/CN2021/110812
Other languages
English (en)
French (fr)
Inventor
沈阳
周行涛
李慧杰
王崇阳
陈文光
赵婧
李美燕
冼艺勇
徐海鹏
牛凌凌
赵武校
韩田
Original Assignee
上海美沃精密仪器股份有限公司
复旦大学附属眼耳鼻喉科医院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海美沃精密仪器股份有限公司 and 复旦大学附属眼耳鼻喉科医院
Priority to US17/797,114 priority Critical patent/US11717151B2/en
Priority to EP21925096.6A priority patent/EP4365829A1/en
Priority to KR1020227022946A priority patent/KR20230005108A/ko
Priority to JP2022541624A priority patent/JP7454679B2/ja
Publication of WO2023272876A1 publication Critical patent/WO2023272876A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/0016Operational features thereof
    • A61B3/0025Operational features thereof characterised by electronic signal processing, e.g. eye models
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/107Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining the shape or measuring the curvature of the cornea
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/50ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10101Optical tomography; Optical coherence tomography [OCT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic

Definitions

  • The invention relates to an ophthalmic data diagnosis technique, and in particular to a binocular keratoconus diagnosis method based on multimodal data.
  • Keratoconus is a clinical ophthalmic disease characterized by corneal ectasia, central thinning with forward protrusion, and a conical shape. It is a contraindication for refractive surgery, occurs in one or both eyes, and usually leads to a significant loss of vision. Keratoconus typically begins on the posterior corneal surface and then progresses slowly to the anterior surface.
  • The diagnosis and treatment of keratoconus has developed into a clinical discipline in which the corneal disease, refractive surgery, and optometry specialties cooperate closely.
  • Methods that diagnose keratoconus from corneal topography are clinical statistical methods; most combine the morphological features of the topographic map with clinical parameters and medical history to confirm and clinically stage keratoconus.
  • A statistical model yields a summary parameter, with decision boundaries derived from datasets of known diagnoses, such as the widely used KISA index, I-S index, and SRI/SAI indices.
  • Such methods are largely limited by platform data, rely too heavily on a few manually defined monocular features, and ignore the relationship between the two eyes; because the sensitivity and specificity of the different indices vary, they do not judge early keratoconus and forme fruste keratoconus well.
  • To address the problem of early diagnosis of binocular keratoconus, a binocular keratoconus diagnosis method based on multimodal data is proposed. Based on multimodal refractive topography matrix data, it combines a convolutional neural network method, an eigenvalue SVM method, a binocular comparison method, and an enhanced topography method with adjustable BFS to give a comprehensive binocular keratoconus diagnosis, with better robustness and accuracy especially for screening and diagnosing early posterior keratoconus and forme fruste keratoconus.
  • The technical solution of the present invention is a binocular keratoconus diagnosis method based on multimodal data, specifically comprising the following steps:
  • The data include the corneal refractive four maps and the corneal absolute height data.
  • The refractive four maps comprise the anterior corneal axial curvature map, the anterior corneal relative height topographic map, the posterior corneal relative height topographic map, and the corneal thickness topographic map.
  • The corneal absolute height data comprise the absolute height data of the anterior and posterior corneal surfaces.
  • Branch method A: after data processing, all refractive four-map data matrices are fed into a deep convolutional classification network to identify keratoconus, yielding an output classification result P(A) for a given case;
  • Branch method B: eigenvalues are computed for each map's data matrix in the refractive four maps, and the eigenvalue data are fed into an SVM (support vector machine) binary classifier to identify keratoconus, yielding an output classification result P(B) for a given case;
  • Branch method C: the absolute height data of the anterior and posterior corneal surfaces are compared with best-fit sphere data to obtain critical thresholds separating keratoconus cases from normal cases, from which the output classification result P(C) for a given case is judged;
  • Branch method D: the mean, maximum, and standard deviation of the left- and right-eye refractive four-map data matrices are used as feature quantities, with either critical thresholds or the SVM classification method, to obtain the optimal sensitivity and specificity and the probability P(D) that a given case develops keratoconus.
  • Branch method A is implemented by the following specific steps:
  • A-1 Data scaling: all refractive four-map data matrices processed in step 3) are scaled to 224x224 by linear interpolation;
  • A-2 Data normalization: the data from step A-1 are divided into a training set and a validation set at a ratio of 7:3; on the training set, the mean and standard deviation of each of the four refractive maps are computed, giving 4 means and 4 standard deviations, which are then used to normalize the refractive four-map data matrices of all cases;
  • A-3 Classification network design: a Resnet50 classification network based on a deep convolutional network performs binary classification on the refractive four-map data matrices to distinguish a normal eye from keratoconus;
  • A-4 Classification model training: the refractive four-map data matrices are concatenated along the channel dimension, giving a 4-channel input; data augmentation uses rotation, translation, and random blurring; the loss function is the binary cross-entropy; training weights of MobileNetV3 on the IMAGENET dataset serve as the initial weights, followed by fine-tuning; the training weights with the smallest difference between training-set and validation-set loss are selected as the training result;
  • A-5 Model evaluation: predictions on the validation set are compared against the ground truth to obtain the sensitivity and specificity of branch method A in identifying keratoconus;
  • A-6 Result output: if the sensitivity and specificity measured in step A-5 meet the requirements, the probabilities of keratoconus in the two eyes of a given case under branch method A are recorded as p(Al) and p(Ar);
  • Output classification result: P(A) = p(Al) if p(Al) > p(Ar), otherwise p(Ar).
  • Branch method B is implemented by the following specific steps:
  • The corneal thickness data matrix is integrated within a 4.5 mm radius to obtain the corneal volume;
  • B-7 All eigenvalues from steps B-1 to B-6 are normalized, and the normalized eigenvalues of all case data are divided into a training set and a validation set at a ratio of 7:3;
  • B-8 The SVM binary classification method is used to train on the normalized training-set features from B-7;
  • an RBF kernel is chosen, and cross-validation and grid search are used to obtain the optimal c and g for training;
  • B-10 Result output: if the sensitivity and specificity measured in step B-9 meet the requirements, the probabilities of keratoconus in the two eyes of a given case under branch method B are recorded as p(Bl) and p(Br);
  • Output classification result: P(B) = p(Bl) if p(Bl) > p(Br), otherwise p(Br).
  • Branch method C is implemented by the following specific steps:
  • C-1 Anterior corneal standard relative height data: from the absolute height data of the anterior and posterior corneal surfaces, a sphere is fitted to the anterior absolute height data within an 8 mm diameter to obtain the BFS value, and the height difference between the anterior surface data and the resulting best-fit sphere is taken as the anterior corneal standard relative height data;
  • C-2 Anterior corneal feature height data: taking the anterior absolute height data within an 8 mm diameter as the basis, the data within a 2 mm radius of the thinnest point are removed and a sphere is fitted to obtain the BFS value; taking this BFS as the baseline and a step of 0.2 mm, 5 offsets upward and 5 downward give 11 different BFS values, and the height differences between the anterior surface data and the corresponding best-fit spheres are taken as the anterior corneal feature relative height data;
  • C-3 Anterior corneal enhanced height data: the standard relative height data from step C-1 and the 11 sets of feature relative height data from C-2 are differenced to obtain 11 sets of anterior corneal enhanced data;
  • C-4 Posterior corneal standard relative height data: from the absolute height data of the anterior and posterior corneal surfaces, a sphere is fitted to the posterior absolute height data within an 8 mm diameter to obtain the BFS value, and the height difference between the posterior surface data and the resulting best-fit sphere is taken as the posterior corneal standard relative height data;
  • C-5 Posterior corneal feature height data: taking the posterior absolute height data within an 8 mm diameter as the basis, the data within a 2 mm radius of the thinnest point are removed and a sphere is fitted to obtain the BFS value; taking this BFS as the baseline and a step of 0.2 mm, 5 offsets upward and 5 downward give 11 different BFS values, and the height differences between the posterior surface data and the corresponding best-fit spheres are taken as the posterior corneal feature relative height data;
  • C-6 Posterior corneal enhanced height data: the posterior standard relative height data from step C-4 and the 11 sets of feature relative height data from C-5 are differenced to obtain 11 sets of posterior corneal enhanced data;
  • C-7 Taking the 22 sets of anterior and posterior corneal enhanced data matrices from steps C-3 and C-6 as features, all sample data are combined to compute, for each set, the critical threshold separating keratoconus cases from normal cases.
  • Branch method D is implemented by the following specific steps:
  • D-1 Unify the data orientation: the right-eye refractive four-map data matrices are mirrored along the column direction so that the nasal/temporal orientations of the left- and right-eye matrices agree;
  • D-2 Refractive four-map diff matrices: the left- and right-eye refractive four-map data matrices are differenced point-to-point and the absolute value is taken, giving the refractive four-map diff data matrices;
  • D-3 Diff feature computation: the mean, maximum, and standard deviation of all data within a 6 mm diameter in each refractive four-map diff matrix are computed as feature quantities;
  • D-4 Taking the means, maxima, and standard deviations of the 12 sets of left/right corneal refractive four-map diff maps from step D-3 as features, all sample data are combined to compute, for each set, the critical threshold separating keratoconus cases from normal cases; alternatively, the SVM classification method is applied to the normalized features of all diff data for training and testing, giving the optimal sensitivity and specificity.
  • The multimodal-data-based binocular keratoconus diagnosis method of the present invention considers the mutual relationship between the two eyes and combines a deep convolutional network method, the traditional machine-learning SVM method, and an adjustable-BFS height-map enhancement method to identify lesions, balancing sensitivity and specificity. It judges the incidence of keratoconus multi-dimensionally with the patient as the unit, combining binocular data with both manually selected features and features the deep network learns from large amounts of data, so the diagnostic method has stronger robustness and accuracy.
  • Fig. 1 is a flow chart of the multimodal-data-based binocular keratoconus diagnosis method of the present invention;
  • Fig. 2 shows the corneal refractive four maps;
  • Fig. 3 shows the absolute height maps of the anterior and posterior corneal surfaces;
  • Fig. 4 is a structural diagram of the classification network in the method of the invention;
  • Fig. 5 shows the diff maps of the corneal refractive four maps for the left and right eyes.
  • The flow of the multimodal-data-based binocular keratoconus diagnosis method shown in Fig. 1 specifically includes the following steps:
  • The data include the corneal refractive four maps and the corneal absolute height data; the refractive four maps are obtained with a three-dimensional anterior segment analyzer or an anterior segment OCT measurement device, as shown in Fig. 2.
  • The refractive four maps comprise the anterior corneal axial curvature map, the anterior corneal relative height topographic map, the posterior corneal relative height topographic map, and the corneal thickness topographic map;
  • each topographic map is obtained with a data step of 0.02 mm (each of the four maps in the refractive four maps is referred to as a topographic map).
  • Four branch methods are used to judge early keratoconus in both eyes; they are denoted branch method A, branch method B, branch method C, and branch method D.
  • Branch method A takes a single case as the reference; every datum in all refractive four-map matrices participates in the computation, preserving as far as possible every factor that may affect the judgment of keratoconus. The aim is to judge lesions from massive data, so that the method does not depend on subjectively selected human features, thereby enhancing the specificity of lesion identification.
  • A-1 Data scaling: all refractive four-map data matrices processed in step 3 are scaled to 224x224 by linear interpolation.
  • A-2 Data normalization: the data from step A-1 are divided into a training set and a validation set at a ratio of 7:3; on the training set, the mean and standard deviation of each of the four refractive maps are computed, giving 4 means and 4 standard deviations, which are then used to normalize the refractive four-map data matrices of all cases.
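Step A-2 can be sketched as follows. This is a minimal numpy illustration, not the patent's implementation: the case count, the random split, and the stand-in data are all assumptions; only the 7:3 ratio, the per-map statistics, and the 224x224 shape come from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 10 cases, each with the 4 refractive maps
# already scaled to 224x224 as in step A-1.
cases = rng.normal(size=(10, 4, 224, 224))

# 7:3 split into training and validation sets.
idx = rng.permutation(len(cases))
n_train = int(len(cases) * 0.7)
train, val = cases[idx[:n_train]], cases[idx[n_train:]]

# One mean and one standard deviation per map (4 of each), computed on
# the training set only.
mean = train.mean(axis=(0, 2, 3), keepdims=True)   # shape (1, 4, 1, 1)
std = train.std(axis=(0, 2, 3), keepdims=True)

# Normalize all cases with the training-set statistics.
train_n = (train - mean) / std
val_n = (val - mean) / std
```

Computing the statistics on the training set only, then applying them to both splits, avoids leaking validation information into the normalization.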
  • A-3 The Resnet50 classification network model is built with its backbone unchanged; only the channel input of the first convolutional layer is changed to 4, and the output of the final fully connected layer is changed to 2.
  • The network structure is shown in Fig. 4.
  • A-4 The refractive four-map data matrices are concatenated along the channel dimension, giving a 4-channel input.
  • Data augmentation uses preprocessing such as rotation, translation, and random blurring.
  • The loss function is the binary cross-entropy function.
  • A-5 Model evaluation: predictions on the validation set are compared against the ground truth to obtain the sensitivity and specificity of branch method A in identifying keratoconus.
  • A-6 Result output: if the sensitivity and specificity measured in step A-5 meet the requirements, the probabilities of keratoconus in the two eyes of a given case under branch method A are recorded as p(Al) and p(Ar).
  • Output classification result: P(A) = p(Al) if p(Al) > p(Ar), otherwise p(Ar).
  • This branch method takes a single case as the reference, manually defines features that directly reflect the lesion, and then applies the supervised learning method SVM to classify whether a lesion is present. Efficiency and sensitivity are maintained even with small sample data.
  • The corneal thickness data matrix is integrated within a 4.5 mm radius to obtain the corneal volume.
  • B-7 All eigenvalues from steps B-1 to B-6 are normalized, and the normalized eigenvalues of all case data are divided into a training set and a validation set at a ratio of 7:3;
  • B-8 SVM model training: the SVM binary classification method is applied to the normalized training-set features from B-7, choosing an RBF (radial basis function) kernel and using cross-validation and grid search to obtain the optimal c and g for training.
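Steps B-7 and B-8 can be sketched with scikit-learn as follows. The synthetic two-class features stand in for the corneal eigenvalues, and the parameter grid is illustrative; only the normalization, the 7:3 split, the RBF kernel, and the cross-validated grid search over C ("c") and gamma ("g") follow the text.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical eigenvalue features: 100 "normal" and 100 "keratoconus"
# samples with 6 features each.
X = np.vstack([rng.normal(0.0, 1.0, (100, 6)),
               rng.normal(2.0, 1.0, (100, 6))])
y = np.array([0] * 100 + [1] * 100)

# B-7: normalize the eigenvalues, then split 7:3.
X = MinMaxScaler().fit_transform(X)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

# B-8: RBF-kernel SVM; cross-validation + grid search select C and gamma.
grid = GridSearchCV(SVC(kernel="rbf", probability=True),
                    {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}, cv=5)
grid.fit(X_tr, y_tr)

# B-9-style check on the validation set.
accuracy = grid.score(X_va, y_va)
```

`probability=True` makes the fitted model expose `predict_proba`, which matches the branch's need to output per-eye probabilities p(Bl) and p(Br).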
  • This branch method upgrades the traditional Belin method with a single case as the reference: it fully reflects the lesion characteristics of the anterior and posterior corneal surfaces through height-enhanced data, and it raises the feature dimension by varying the BFS value, improving the specificity of the statistical method and reducing the false-positive rate.
  • C-1 Anterior corneal standard relative height data: from the absolute height data of the anterior and posterior corneal surfaces, a sphere is fitted to the anterior absolute height data within an 8 mm diameter to obtain the BFS (best-fit sphere) value, and the height difference between the anterior surface data and the resulting best-fit sphere is taken as the anterior corneal standard relative height data.
  • C-2 Anterior corneal feature height data: taking the anterior absolute height data within an 8 mm diameter as the basis, the data within a 2 mm radius of the thinnest point are removed and a sphere is fitted to obtain the BFS value; taking this BFS as the baseline and a step of 0.2 mm, 5 offsets upward and 5 downward give 11 different BFS values.
  • The height differences between the anterior surface data and the corresponding best-fit spheres are taken as the anterior corneal feature relative height data.
  • C-3 Anterior corneal enhanced height data: the standard relative height data from step C-1 and the 11 sets of feature relative height data from C-2 are differenced to obtain 11 sets of anterior corneal enhanced data.
  • C-4 Posterior corneal standard relative height data: from the absolute height data of the anterior and posterior corneal surfaces, a sphere is fitted to the posterior absolute height data within an 8 mm diameter to obtain the BFS value, and the height difference between the posterior surface data and the resulting best-fit sphere is taken as the posterior corneal standard relative height data.
  • C-5 Posterior corneal feature height data: taking the posterior absolute height data within an 8 mm diameter as the basis, the data within a 2 mm radius of the thinnest point are removed and a sphere is fitted to obtain the BFS value; taking this BFS as the baseline and a step of 0.2 mm, 5 offsets upward and 5 downward give 11 different BFS values. The height differences between the posterior surface data and the corresponding best-fit spheres are taken as the posterior corneal feature relative height data.
  • C-6 Posterior corneal enhanced height data: the posterior standard relative height data from step C-4 and the 11 sets of feature relative height data from C-5 are differenced to obtain 11 sets of posterior corneal enhanced data.
  • C-7 Taking the 22 sets of anterior and posterior corneal enhanced data matrices from steps C-3 and C-6 as features, all sample data are combined to compute, for each set, the critical threshold separating keratoconus cases from normal cases.
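The best-fit sphere (BFS) computation that branch C relies on can be sketched with linear least squares. The synthetic spherical-cap surface and grid spacing below are illustrative assumptions; only the 8 mm-diameter aperture and the "height difference from the BFS" definition follow the text.

```python
import numpy as np

def fit_sphere(pts):
    """Least-squares best-fit sphere: returns (center, radius).

    Uses the linearization x^2+y^2+z^2 = 2ax + 2by + 2cz + (r^2 - |center|^2),
    which turns sphere fitting into an ordinary linear least-squares problem.
    """
    A = np.c_[2 * pts, np.ones(len(pts))]
    f = (pts ** 2).sum(axis=1)
    p, *_ = np.linalg.lstsq(A, f, rcond=None)
    center = p[:3]
    radius = np.sqrt(p[3] + center @ center)
    return center, radius

# Synthetic corneal surface: a cap of a 7.8 mm-radius sphere sampled on an
# 8 mm-diameter aperture (x, y in mm; z is the height).
R = 7.8
x, y = np.meshgrid(np.linspace(-4, 4, 81), np.linspace(-4, 4, 81))
mask = x ** 2 + y ** 2 <= 4.0 ** 2        # keep points inside the 8 mm diameter
z = R - np.sqrt(R ** 2 - x[mask] ** 2 - y[mask] ** 2)
pts = np.column_stack([x[mask], y[mask], z])

center, bfs_radius = fit_sphere(pts)

# Relative height data: surface height minus the BFS height at each point.
bfs_z = center[2] - np.sqrt(np.maximum(
    bfs_radius ** 2 - (pts[:, 0] - center[0]) ** 2
    - (pts[:, 1] - center[1]) ** 2, 0.0))
rel_height = pts[:, 2] - bfs_z
```

For the "feature" BFS variants of steps C-2 and C-5, the same `fit_sphere` would be called after masking out the 2 mm-radius neighborhood of the thinnest point, and then re-evaluated with the radius offset in 0.2 mm steps.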
  • This branch method takes the binocular case as the reference: it combines the left- and right-eye refractive map data and reflects the characteristics of the lesion itself by extracting the difference data of the binocular topographic maps, improving recognition accuracy for keratoconus patients as individuals.
  • D-1 Unify the data orientation: the right-eye refractive four-map data matrices are mirrored along the column direction so that the nasal/temporal orientations of the left- and right-eye matrices agree;
  • D-2 Refractive four-map diff matrices: the left- and right-eye refractive four-map data matrices are differenced point-to-point and the absolute value is taken, giving the refractive four-map diff data matrices, as shown in Fig. 5;
  • D-3 Diff feature computation: the mean, maximum, and standard deviation of all data within a 6 mm diameter in each refractive four-map diff matrix are computed as feature quantities.
  • D-4 Taking the means, maxima, and standard deviations of the 12 sets of left/right corneal refractive four-map diff maps from step D-3 as features, all sample data are combined to compute, for each set, the critical threshold separating keratoconus cases from normal cases; alternatively, the SVM classification method is applied to the normalized features of all diff data for training and testing, giving the optimal sensitivity and specificity.
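Steps D-1 to D-3 can be sketched as follows. The matrix size and the 0.1 mm grid step are illustrative assumptions; only the column-direction mirroring, the point-to-point absolute difference, and the 6 mm-diameter statistics follow the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical left/right refractive four maps: 4 maps of 141x141 samples
# on a 0.1 mm grid centered at the corneal apex.
left = rng.normal(size=(4, 141, 141))
right = rng.normal(size=(4, 141, 141))

# D-1: mirror the right-eye maps along the column direction so the
# nasal/temporal orientation matches the left eye.
right_mirrored = right[:, :, ::-1]

# D-2: point-to-point absolute difference -> the four diff maps.
diff = np.abs(left - right_mirrored)

# D-3: mean, max, and standard deviation within a 6 mm diameter.
yy, xx = np.mgrid[-70:71, -70:71] * 0.1          # coordinates in mm
mask = xx ** 2 + yy ** 2 <= 3.0 ** 2             # 6 mm diameter = 3 mm radius
features = np.array([[m[mask].mean(), m[mask].max(), m[mask].std()]
                     for m in diff])             # shape (4, 3): the 12 features
```

Mirroring before differencing matters: it aligns enantiomorphic (mirror-image) binocular anatomy so that a genuine asymmetry, not the normal nasal/temporal flip, drives the diff features.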
  • The final results of branch methods A, B, C, and D are combined by a weighted sum; the weights should be chosen according to the target requirements so that the advantages of each branch method are fully exploited, achieving an optimal balance between sensitivity and specificity and, while ensuring robustness, striving for the smallest false-negative and false-positive rates.
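The final weighted combination can be sketched as follows. The numeric probabilities and weights are illustrative placeholders; the patent only specifies that the four branch results are accumulated with weights tuned to the target sensitivity/specificity requirements.

```python
# Per-branch keratoconus probabilities for one case (hypothetical values),
# each already reduced to the worse eye, e.g. P(A) = max(p(Al), p(Ar)).
P = {"A": 0.82, "B": 0.74, "C": 0.61, "D": 0.68}

# Illustrative branch weights; in practice they are tuned to balance
# sensitivity and specificity, and should sum to 1 so the result stays
# a probability.
W = {"A": 0.4, "B": 0.2, "C": 0.2, "D": 0.2}

# Weighted cumulative sum -> final keratoconus probability for the case.
p_final = sum(W[k] * P[k] for k in P)
```

With the placeholder numbers above, `p_final` is 0.734; a decision threshold on this fused score is what trades false negatives against false positives.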

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • General Physics & Mathematics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Ophthalmology & Optometry (AREA)
  • Animal Behavior & Ethology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Signal Processing (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Quality & Reliability (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Geometry (AREA)
  • Eye Examination Apparatus (AREA)
  • Image Analysis (AREA)

Abstract

A binocular keratoconus diagnosis method based on multimodal data uses the binocular corneal refractive four maps and corneal absolute height data, considers the mutual relationship between the two eyes, and combines a deep convolutional network method, the traditional machine-learning SVM method, and an adjustable BFS height-map enhancement method to identify lesions, balancing sensitivity and specificity. It judges the incidence of keratoconus multi-dimensionally with the patient as the unit, giving the diagnostic method stronger robustness and accuracy.

Description

A binocular keratoconus diagnosis method based on multimodal data
Technical Field
The present invention relates to an ophthalmic data diagnosis technique, and in particular to a binocular keratoconus diagnosis method based on multimodal data.
Background
Keratoconus is a clinical ophthalmic disease characterized by corneal ectasia, central thinning with forward protrusion, and a conical shape. It is a contraindication for refractive surgery, occurs in one or both eyes, and usually leads to a significant loss of vision. Keratoconus mostly begins on the posterior corneal surface and then progresses slowly toward the anterior surface.
The diagnosis and treatment of keratoconus has developed into a clinical discipline in which the corneal disease, refractive surgery, and optometry specialties cooperate closely. Methods that diagnose keratoconus from corneal topography are typically clinical statistical methods, most of which combine the morphological features of the topographic map with clinical parameters and medical history to confirm and clinically stage keratoconus. A statistical model yields a summary parameter, with decision boundaries derived from datasets of known diagnoses, such as the widely used KISA index, I-S index, and SRI/SAI indices. Such methods are largely limited by platform data, rely too heavily on a few manually defined monocular features, and ignore the relationship between the two eyes; because the sensitivity and specificity of the different indices vary, they do not judge early keratoconus and forme fruste keratoconus well.
Summary of the Invention
To address the problem of early diagnosis of binocular keratoconus, a binocular keratoconus diagnosis method based on multimodal data is proposed. Based on multimodal refractive topography matrix data, it combines a convolutional neural network method, an eigenvalue SVM method, a binocular comparison method, and an enhanced topography method with adjustable BFS to give a comprehensive binocular keratoconus diagnosis, with better robustness and accuracy especially for screening and diagnosing early posterior keratoconus and forme fruste keratoconus.
The technical solution of the present invention is a binocular keratoconus diagnosis method based on multimodal data, specifically comprising the following steps:
1) Collect binocular multimodal data. The data include the corneal refractive four maps of both eyes and the corneal absolute height data. The refractive four maps comprise the anterior corneal axial curvature map, the anterior corneal relative height topographic map, the posterior corneal relative height topographic map, and the corneal thickness topographic map; the corneal absolute height data comprise the absolute height data of the anterior and posterior corneal surfaces;
2) For cases that have already been classified, associate their binocular multimodal data with the classification labels and sort the data according to the specific requirements;
3) Unify each topographic map of the refractive four maps from step 2) and the anterior/posterior corneal surface height data into data matrices of the same size;
4) Based on the above data, judge early binocular keratoconus with four branch methods, denoted branch method A, branch method B, branch method C, and branch method D, where:
Branch method A: after data processing, all refractive four-map data matrices are fed into a deep convolutional classification network to identify keratoconus, yielding an output classification result P(A) for a given case;
Branch method B: eigenvalues are computed for each map's data matrix in the refractive four maps, and the eigenvalue data are fed into an SVM binary classifier to identify keratoconus, yielding an output classification result P(B) for a given case;
Branch method C: the absolute height data of the anterior and posterior corneal surfaces are compared with best-fit sphere data to obtain critical thresholds separating keratoconus cases from normal cases, from which the output classification result P(C) for a given case is judged; Branch method D: the mean, maximum, and standard deviation of the left- and right-eye refractive four-map data matrices are used as feature quantities, with either critical thresholds or the SVM classification method, to obtain the optimal sensitivity and specificity and the probability P(D) that a given case develops keratoconus in either eye;
5) The final results of branch methods A, B, C, and D are combined by a weighted sum to obtain the final probability that a given case develops keratoconus in either eye.
Preferably, branch method A is implemented by the following specific steps:
A-1 Data scaling: scale all refractive four-map data matrices processed in step 3) to 224x224 by linear interpolation;
A-2 Data normalization: divide the data from step A-1 into a training set and a validation set at a ratio of 7:3; on the training set, compute the mean and standard deviation of each of the four refractive maps, obtaining 4 means and 4 standard deviations; then normalize the refractive four-map data matrices of all cases with these means and standard deviations;
A-3 Classification network design based on a deep convolutional network: a Resnet50 classification network performs binary classification on the refractive four-map data matrices to distinguish a normal eye from keratoconus;
A-4 Classification model training: concatenate the refractive four-map data matrices along the channel dimension to obtain a 4-channel input; data augmentation uses rotation, translation, and random blurring; the loss function is the binary cross-entropy; use the training weights of MobileNetV3 on the IMAGENET dataset as the initial weights, followed by fine-tuning; finally, select the training weights with the smallest difference between the training-set and validation-set loss values as the training result;
A-5 Model evaluation: predict on the validation set, compare against the ground truth, and obtain the sensitivity and specificity of branch method A in identifying keratoconus;
A-6 Result output: if the sensitivity and specificity measured in step A-5 meet the requirements, record the probabilities of keratoconus in the two eyes of a given case under branch method A as p(Al) and p(Ar); the output classification result is P(A) = p(Al) if p(Al) > p(Ar), and p(Ar) if p(Al) < p(Ar).
Preferably, branch method B is implemented by the following specific steps:
B-1 Anterior corneal axial curvature eigenvalues: in the anterior axial curvature data matrix, compute the curvature extreme value and its position coordinates, the inferior-superior refractive power difference (I-S value) at the 6 mm diameter, and the surface regularity index (SRI) and surface asymmetry index (SAI) within the 4.5 mm diameter;
B-2 Anterior corneal relative height eigenvalues: in the anterior relative height data matrix, compute the maximum height and its position coordinates;
B-3 Posterior corneal relative height eigenvalues: in the posterior relative height data matrix, compute the maximum height and its position coordinates;
B-4 Corneal thickness eigenvalues: in the corneal thickness data matrix, compute the minimum thickness and its position coordinates, and the thickness at the corneal apex;
B-5 Distance eigenvalues: compute the distance from the anterior maximum-height position of B-2 to the posterior maximum-height position of B-3, the distance from the anterior maximum-height position of B-2 to the minimum-thickness position of B-4, and the distance from the posterior maximum-height position of B-3 to the minimum-thickness position of B-4;
B-6 Corneal volume eigenvalue: integrate the corneal thickness data matrix over a 4.5 mm radius to obtain the corneal volume;
B-7 Normalize all eigenvalues from steps B-1 to B-6, and divide the normalized eigenvalues of all case data into a training set and a validation set at a ratio of 7:3;
B-8 Apply the SVM binary classification method to train on the normalized training-set features from B-7, choosing an RBF kernel and using cross-validation and grid search to obtain the optimal c and g for training;
B-9 Model evaluation: predict on the validation set, compare against the ground truth, and obtain the sensitivity and specificity of this branch method in identifying keratoconus;
B-10 Result output: if the sensitivity and specificity measured in step B-9 meet the requirements, record the probabilities of keratoconus in the two eyes of a given case under branch method B as p(Bl) and p(Br); the output classification result is P(B) = p(Bl) if p(Bl) > p(Br), and p(Br) if p(Bl) < p(Br).
Preferably, branch method C comprises the following specific steps:
C-1 Anterior standard relative elevation data: from the absolute elevation data of the anterior and posterior corneal surfaces, fit a sphere to the anterior absolute elevation data within an 8 mm diameter to obtain the BFS value; the elevation difference between the anterior surface data and the resulting best-fit sphere is taken as the anterior standard relative elevation data.
C-2 Anterior feature elevation data: starting from the anterior absolute elevation data within an 8 mm diameter, exclude the data within a 2 mm radius of the thinnest point and fit a sphere to obtain a BFS value; taking this BFS as the baseline, offset it upward and downward in steps of 0.2 mm by 5 values in each direction to obtain 11 different BFS values; the elevation differences between the anterior surface data and the best-fit spheres at these different BFS values are taken as the anterior feature relative elevation data.
C-3 Anterior enhanced elevation data: take the differences between the standard relative elevation data from step C-1 and the 11 sets of feature relative elevation data from step C-2 to obtain 11 sets of anterior enhanced data.
C-4 Posterior standard relative elevation data: from the absolute elevation data, fit a sphere to the posterior absolute elevation data within an 8 mm diameter to obtain the BFS value; the elevation difference between the posterior surface data and the resulting best-fit sphere is taken as the posterior standard relative elevation data.
C-5 Posterior feature elevation data: starting from the posterior absolute elevation data within an 8 mm diameter, exclude the data within a 2 mm radius of the thinnest point and fit a sphere to obtain a BFS value. Taking this BFS as the baseline, offset it upward and downward in steps of 0.2 mm by 5 values in each direction to obtain 11 different BFS values; the elevation differences between the posterior surface data and the best-fit spheres at these different BFS values are taken as the posterior feature relative elevation data.
C-6 Posterior enhanced elevation data: take the differences between the posterior standard relative elevation data from step C-4 and the 11 sets of feature relative elevation data from step C-5 to obtain 11 sets of posterior enhanced data.
C-7 Using the 22 sets of anterior and posterior enhanced data matrices from steps C-3 and C-6 as features, and using all sample data, compute for each set the critical threshold separating keratoconus cases from normal cases.
C-8 Record the probabilities that the two eyes of a given case have keratoconus under branch method C as p(Cl) and p(Cr), each obtained as a weighted sum whose weights are the differences between each set of enhanced data computed for the current case and the corresponding critical threshold from step C-7; the output classification result is P(C) = p(Cl) if p(Cl) > p(Cr), and p(Cr) otherwise.
Preferably, branch method D comprises the following specific steps:
D-1 Unify data orientation: mirror the right-eye refractive four-map data matrices along the column direction so that the nasal/temporal orientation of the left-eye and right-eye matrices is consistent.
D-2 Refractive four-map diff matrices: take the point-to-point differences between the left-eye and right-eye four-map data matrices and then their absolute values to obtain the four-map diff data matrices.
D-3 Diff feature computation: within the 6 mm diameter data range, compute the mean, maximum, and standard deviation of all data in each four-map diff data matrix as feature quantities.
D-4 Using the 12 mean, maximum, and standard-deviation features of the left-right four-map diff maps from step D-3, and using all sample data, compute for each set the critical threshold separating keratoconus cases from normal cases; or apply an SVM classification method, normalizing all diff-data features for training and testing, to give the optimal sensitivity and specificity.
D-5 Record the probability P(D) that both eyes of a given case have keratoconus under this branch method, obtained as a weighted sum whose weights are the differences between each diff feature value computed for the current case and the corresponding critical threshold from step D-4.
The beneficial effects of the present invention are as follows: the binocular keratoconus diagnosis method based on multi-modal data takes the relationship between the two eyes into account, combines a deep convolutional network method, a traditional machine-learning SVM method, and an adjustable-BFS elevation-map enhancement method to identify lesions while balancing sensitivity and specificity, and judges the per-patient incidence of keratoconus comprehensively across multiple dimensions. By combining binocular data, it includes both manually selected features and features learned by the deep network itself from large data sets, so the diagnosis method has stronger robustness and accuracy.
Brief Description of the Drawings
Fig. 1 is a flowchart of the binocular keratoconus diagnosis method based on multi-modal data of the present invention;
Fig. 2 shows the corneal refractive four-map set;
Fig. 3 shows the absolute elevation maps of the anterior and posterior corneal surfaces;
Fig. 4 shows the structure of the classification network used in the method of the present invention;
Fig. 5 shows the diff maps of the left-eye and right-eye refractive four-map sets.
Detailed Description of the Embodiments
The present invention is described in detail below with reference to the drawings and specific embodiments. This embodiment is implemented on the premise of the technical solution of the present invention, and detailed implementation modes and specific operation procedures are given, but the protection scope of the present invention is not limited to the following embodiments.
The flowchart of the binocular keratoconus diagnosis method based on multi-modal data is shown in Fig. 1. The method comprises the following steps:
1. Collect multi-modal data of both eyes, including the corneal refractive four-map set and the absolute corneal elevation data. The refractive four maps are acquired with a three-dimensional anterior segment analyzer or an anterior segment OCT device. As shown in Fig. 2, the refractive four-map set comprises the anterior corneal axial curvature map, the anterior corneal relative elevation map, the posterior corneal relative elevation map, and the corneal thickness map; the absolute elevation data of the anterior and posterior corneal surfaces are shown in Fig. 3.
2. For cases that have already been classified, associate their binocular multi-modal data with the classification labels. The number of classes depends on the specific requirement; for example, a binary classification distinguishes keratoconus from normal.
3. From the corneal refractive four maps and their color code, sample each topography map (each of the four maps in the refractive four-map set is referred to as a topography map) within a 9 mm diameter at a data step of 0.02 mm to obtain a fully sampled two-dimensional data matrix, so that each topography map and each anterior/posterior elevation data matrix has a size of 451×451.
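The grid construction described in step 3 can be sketched in code. This is a minimal illustration under stated assumptions, not the patent's implementation: it assumes each map is available as a continuous function of corneal (x, y) position and simply builds the 451×451 sampling grid (9 mm diameter at a 0.02 mm step), masking points outside the measured circular zone. The function names are hypothetical.

```python
import numpy as np

def topography_grid(diameter_mm: float = 9.0, step_mm: float = 0.02):
    """Build the 2-D sampling grid used to rasterize each topography map.

    Returns x/y coordinate matrices and a boolean mask that is True inside
    the measured circular zone of diameter `diameter_mm`.
    """
    r = diameter_mm / 2.0
    n = int(round(diameter_mm / step_mm)) + 1      # 9 mm / 0.02 mm + 1 = 451 samples
    axis = np.linspace(-r, r, n)
    xx, yy = np.meshgrid(axis, axis)
    inside = xx ** 2 + yy ** 2 <= r ** 2           # circular corneal zone
    return xx, yy, inside

def rasterize(values_fn, fill=np.nan):
    """Sample a continuous map `values_fn(x, y)` onto the 451x451 grid,
    leaving points outside the 9 mm zone set to `fill`."""
    xx, yy, inside = topography_grid()
    out = np.full(xx.shape, fill)
    out[inside] = values_fn(xx[inside], yy[inside])
    return out
```

In practice the instrument exports discrete samples rather than a continuous function, in which case an interpolation step (e.g. onto this same grid) would replace `values_fn`.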
4. Based on the above data, early keratoconus of both eyes is judged by four branch methods, denoted branch method A, branch method B, branch method C, and branch method D.
Branch method A: this branch takes a single case as its reference. Every value in the four-map data matrices takes part in the computation, preserving to the greatest extent all factors that may influence the keratoconus judgment. The machine learns by itself to detect lesions from massive data, so the method does not depend on subjectively chosen manual features, which strengthens the specificity of lesion identification.
A-1 Data scaling: scale all refractive four-map data matrices processed in step 3 to 224×224 by linear interpolation.
A-2 Data normalization: split the data from step A-1 into a training set and a validation set at a ratio of 7:3; on the training set, compute the mean and standard deviation of each of the four map data matrices, giving 4 means and 4 standard deviations; then normalize the four-map data matrices of all cases with these means and standard deviations.
A-3 Classification network design based on a deep convolutional network: a ResNet50 classification network is used to binary-classify the refractive four-map data matrices, distinguishing normal from keratoconus for a single eye. In the ResNet50 classification model that is built, the backbone network is kept unchanged; only the input channels of the first convolutional layer are changed to 4, and the number of outputs of the final fully connected layer is set to 2. The network structure is shown in Fig. 4.
A-4 Training of the classification model: concatenate the four-map data matrices along the channel dimension to obtain a 4-channel input. Data augmentation uses preprocessing such as rotation, translation, and random blurring. The loss function is the binary cross-entropy. ResNet50 weights pretrained on the IMAGENET data set are used as initial weights, followed by fine-tuning. Training runs for 60 epochs with an initial learning rate of 0.01, which is reduced by a factor of 10 at epochs 20 and 40. The weights with the smallest gap between training-set and validation-set loss are taken as the training result.
A-5 Model evaluation: predict on the validation set and compare against the ground truth to obtain the sensitivity and specificity of branch method A for identifying keratoconus.
A-6 Result output: if the sensitivity and specificity measured in step A-5 meet the requirements, record the probabilities that the two eyes of a given case have keratoconus under branch method A as p(Al) and p(Ar). The output classification result is P(A) = p(Al) if p(Al) > p(Ar), and p(Ar) otherwise.
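The data preparation and per-case output rule of branch A can be illustrated as follows. This is a hedged sketch only: the ResNet50 fine-tuning itself is omitted, and the code shows just step A-2 (per-map normalization with training-set statistics applied to the 4-channel stacks) and the A-6 rule of reporting the larger per-eye probability. Array shapes and function names are assumptions, not the patent's code.

```python
import numpy as np

def normalize_four_maps(train_stack, all_stack):
    """Step A-2 sketch: per-map (per-channel) normalization.

    train_stack, all_stack: arrays of shape (N, 4, H, W) holding the four
    refractive maps of N eyes. The 4 means and 4 standard deviations are
    estimated on the training stack only and then applied to every case.
    """
    mean = train_stack.mean(axis=(0, 2, 3), keepdims=True)   # 4 means
    std = train_stack.std(axis=(0, 2, 3), keepdims=True)     # 4 standard deviations
    return (all_stack - mean) / (std + 1e-8)                 # epsilon guards a flat map

def binocular_score(p_left: float, p_right: float) -> float:
    """Step A-6 sketch: report the larger of the two per-eye probabilities."""
    return max(p_left, p_right)
```

The normalized 4-channel stacks would then be fed to the modified ResNet50 (4 input channels, 2 output logits) for fine-tuning.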
Branch method B: this branch takes a single case as its reference. Features that directly reflect the lesion are defined manually, and the supervised learning method SVM is then applied to classify lesion versus normal, ensuring efficiency and sensitivity even on small sample sets.
B-1 Anterior axial curvature features: in the anterior corneal axial curvature data matrix, compute the curvature extremum and its position coordinates; compute the inferior-superior dioptric power difference (I-S value) at the 6 mm diameter; and compute the surface regularity index (SRI) and surface asymmetry index (SAI) within the 4.5 mm diameter range.
B-2 Anterior relative elevation feature: in the anterior corneal relative elevation data matrix, compute the maximum elevation and its position coordinates.
B-3 Posterior relative elevation feature: in the posterior corneal relative elevation data matrix, compute the maximum elevation and its position coordinates.
B-4 Corneal thickness features: in the corneal thickness data matrix, compute the minimum thickness and its position coordinates, and the thickness at the corneal apex.
B-5 Distance features: compute the distance from the anterior maximum-elevation position of B-2 to the posterior maximum-elevation position of B-3; the distance from the anterior maximum-elevation position of B-2 to the minimum-thickness position of B-4; and the distance from the posterior maximum-elevation position of B-3 to the minimum-thickness position of B-4.
B-6 Corneal volume feature: integrate the corneal thickness data matrix over the region within a 4.5 mm radius to obtain the corneal volume.
B-7 Normalize all feature values from steps B-1 to B-6, and split the normalized case feature data into a training set and a validation set at a ratio of 7:3.
B-8 SVM model training: apply the SVM (support vector machine) binary classification method to the normalized training-set features from B-7, using an RBF (radial basis function) kernel, with cross-validation and grid search to obtain the optimal C and g for training.
B-9 Model evaluation: predict on the validation set and compare against the ground truth to obtain the sensitivity and specificity of this branch method for identifying keratoconus.
B-10 Result output: if the sensitivity and specificity measured in step B-9 meet the requirements, record the probabilities that the two eyes of a given case have keratoconus under branch method B as p(Bl) and p(Br). The output classification result is P(B) = p(Bl) if p(Bl) > p(Br), and p(Br) otherwise.
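Step B-8 (RBF-kernel SVM with cross-validation and a grid search over C and gamma) can be sketched with scikit-learn, assuming the per-case feature vectors from steps B-1 to B-7 are already assembled into an array. The parameter grid shown is illustrative, not the one used in the patent.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_branch_b(features, labels):
    """Sketch of step B-8: feature normalization plus an RBF-kernel SVM,
    with cross-validated grid search over C and gamma (the patent's c and g).

    features: (n_cases, n_features) array of the B-1..B-6 feature values.
    labels:   (n_cases,) array, e.g. 0 = normal, 1 = keratoconus.
    """
    grid = {
        "svc__C": [0.1, 1, 10, 100],               # illustrative search grid
        "svc__gamma": ["scale", 0.01, 0.1, 1.0],
    }
    pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    search = GridSearchCV(pipe, grid, cv=StratifiedKFold(3), scoring="accuracy")
    search.fit(features, labels)
    return search.best_estimator_                  # exposes predict / predict_proba
```

With `probability=True`, `predict_proba` supplies the per-eye probabilities p(Bl) and p(Br) used in step B-10.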
Branch method C: this branch takes a single case as its reference and upgrades the traditional Belin method. It not only uses enhanced elevation data to fully reflect the lesion features of the anterior and posterior corneal surfaces, but also raises the feature dimensionality by varying the BFS value, improving the specificity of the statistical method and reducing the false-positive rate.
C-1 Anterior standard relative elevation data: from the absolute elevation data of the anterior and posterior corneal surfaces, fit a sphere to the anterior absolute elevation data within an 8 mm diameter to obtain the BFS (best-fit sphere) value; the elevation difference between the anterior surface data and the resulting best-fit sphere is taken as the anterior standard relative elevation data.
C-2 Anterior feature elevation data: starting from the anterior absolute elevation data within an 8 mm diameter, exclude the data within a 2 mm radius of the thinnest point and fit a sphere to obtain a BFS value. Taking this BFS as the baseline, offset it upward and downward in steps of 0.2 mm by 5 values in each direction to obtain 11 different BFS values. The elevation differences between the anterior surface data and the best-fit spheres at these different BFS values are taken as the anterior feature relative elevation data.
C-3 Anterior enhanced elevation data: take the differences between the standard relative elevation data from step C-1 and the 11 sets of feature relative elevation data from step C-2 to obtain 11 sets of anterior enhanced data.
C-4 Posterior standard relative elevation data: from the absolute elevation data, fit a sphere to the posterior absolute elevation data within an 8 mm diameter to obtain the BFS value; the elevation difference between the posterior surface data and the resulting best-fit sphere is taken as the posterior standard relative elevation data.
C-5 Posterior feature elevation data: starting from the posterior absolute elevation data within an 8 mm diameter, exclude the data within a 2 mm radius of the thinnest point and fit a sphere to obtain a BFS value. Taking this BFS as the baseline, offset it upward and downward in steps of 0.2 mm by 5 values in each direction to obtain 11 different BFS values. The elevation differences between the posterior surface data and the best-fit spheres at these different BFS values are taken as the posterior feature relative elevation data.
C-6 Posterior enhanced elevation data: take the differences between the posterior standard relative elevation data from step C-4 and the 11 sets of feature relative elevation data from step C-5 to obtain 11 sets of posterior enhanced data.
C-7 Using the 22 sets of anterior and posterior enhanced data matrices from steps C-3 and C-6 as features, and using all sample data, compute for each set the critical threshold separating keratoconus cases from normal cases.
C-8 Record the probabilities that the two eyes of a given case have keratoconus under branch method C as p(Cl) and p(Cr), each obtained as a weighted sum whose weights are the differences between each set of enhanced data computed for the current case and the corresponding critical threshold from step C-7. The output classification result is P(C) = p(Cl) if p(Cl) > p(Cr), and p(Cr) otherwise.
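The BFS fitting at the heart of branch C reduces to a linear least-squares problem, since the sphere equation x² + y² + z² = 2ax + 2by + 2cz + d (with d = R² − a² − b² − c²) is linear in (a, b, c, d). The sketch below, assuming NumPy arrays of sampled elevation points, fits a best-fit sphere and forms an enhanced elevation map as the difference between a standard and a shifted relative elevation map; it illustrates the technique, not the patent's exact numerics.

```python
import numpy as np

def best_fit_sphere(x, y, z):
    """Least-squares best-fit sphere (BFS) through sampled elevation points.

    Solves the linearized sphere equation for the center (a, b, c) and the
    offset d = R^2 - a^2 - b^2 - c^2, then recovers the radius R.
    """
    A = np.column_stack([2 * x, 2 * y, 2 * z, np.ones_like(x)])
    f = x ** 2 + y ** 2 + z ** 2
    (a, b, c, d), *_ = np.linalg.lstsq(A, f, rcond=None)
    r = np.sqrt(d + a ** 2 + b ** 2 + c ** 2)
    return np.array([a, b, c]), r

def enhanced_elevation(z_surface, z_bfs_standard, z_bfs_shifted):
    """Enhanced data as in steps C-3 / C-6: (surface - standard BFS) minus
    (surface - shifted BFS), which reduces to the difference between the
    two reference spheres' elevations at each grid point."""
    standard_rel = z_surface - z_bfs_standard
    feature_rel = z_surface - z_bfs_shifted
    return standard_rel - feature_rel
```

Fitting once with, and once without, the 2 mm zone around the thinnest point (and offsetting the resulting BFS in 0.2 mm steps) yields the 11 reference spheres per surface described above.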
Branch method D: this branch takes the binocular case as its reference and combines the left-eye and right-eye refractive four-map data. Differences between the topographies of the two eyes are extracted to reflect features of the lesion itself, improving recognition accuracy at the level of the individual keratoconus patient.
D-1 Unify data orientation: mirror the right-eye refractive four-map data matrices along the column direction so that the nasal/temporal orientation of the left-eye and right-eye matrices is consistent.
D-2 Refractive four-map diff matrices: take the point-to-point differences between the left-eye and right-eye four-map data matrices and then their absolute values to obtain the four-map diff data matrices, as shown in Fig. 5.
D-3 Diff feature computation: within the 6 mm diameter data range, compute the mean, maximum, and standard deviation of all data in each four-map diff data matrix as feature quantities.
D-4 Using the 12 mean, maximum, and standard-deviation features of the left-right four-map diff maps from step D-3, and using all sample data, compute for each set the critical threshold separating keratoconus cases from normal cases; or apply an SVM classification method, normalizing all diff-data features for training and testing, to give the optimal sensitivity and specificity.
D-5 Record the probability P(D) that both eyes of a given case have keratoconus under this branch method, obtained as a weighted sum whose weights are the differences between each diff feature value computed for the current case and the corresponding critical threshold from step D-4.
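Steps D-1 to D-3 can be condensed into a short NumPy sketch: mirror the right-eye maps along the column axis, take the absolute point-to-point difference, and summarize each of the four diff maps inside the 6 mm zone by mean, maximum, and standard deviation, giving 12 features per case. The array shapes and function names are assumptions for illustration.

```python
import numpy as np

def binocular_diff_features(left_maps, right_maps, grid_step_mm=0.02, zone_mm=6.0):
    """Sketch of steps D-1..D-3.

    left_maps, right_maps: arrays of shape (4, H, W) holding the four
    refractive maps of the left and right eye on the same grid.
    Returns a (4, 3) array of (mean, max, std) per diff map, i.e. 12 features.
    """
    mirrored = right_maps[:, :, ::-1]         # D-1: flip columns so nasal/temporal match
    diff = np.abs(left_maps - mirrored)       # D-2: point-to-point absolute difference
    _, h, w = diff.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_px = (zone_mm / 2.0) / grid_step_mm     # 6 mm zone radius in grid units
    inside = (xx - cx) ** 2 + (yy - cy) ** 2 <= r_px ** 2
    feats = [(m[inside].mean(), m[inside].max(), m[inside].std()) for m in diff]
    return np.asarray(feats)                  # D-3: mean / max / std per map
```

For a perfectly mirror-symmetric pair of eyes all 12 features are zero; keratoconus tends to break this symmetry, which is what the thresholds or SVM of step D-4 exploit.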
5. The final results of branch methods A, B, C, and D are combined by weighted summation to obtain the final probability that both eyes of a given case have keratoconus: P = (w1·P(A) + w2·P(B) + w3·P(C) + w4·P(D)) / (w1 + w2 + w3 + w4), where w1, w2, w3, and w4 are the weights of branch methods A, B, C, and D, respectively. These weights should be chosen according to the target requirements, making full use of the strengths of each branch method so as to reach an optimal balance between sensitivity and specificity, ensuring robustness while minimizing the false-negative and false-positive rates.
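The weighted fusion of step 5 is a normalized weighted average of the four branch probabilities. A minimal sketch (the weights shown are placeholders to be tuned against the sensitivity/specificity target, not values from the patent):

```python
def fuse_branch_scores(p_a, p_b, p_c, p_d, weights=(1.0, 1.0, 1.0, 1.0)):
    """Normalized weighted average of the four branch probabilities:
        P = (w1*P(A) + w2*P(B) + w3*P(C) + w4*P(D)) / (w1 + w2 + w3 + w4)
    Each input probability is expected to lie in [0, 1]."""
    w1, w2, w3, w4 = weights
    return (w1 * p_a + w2 * p_b + w3 * p_c + w4 * p_d) / (w1 + w2 + w3 + w4)
```

Because the weights are normalized by their sum, the fused score stays in [0, 1] whenever the branch probabilities do.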
The above embodiments merely express several implementation modes of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be pointed out that those of ordinary skill in the art may make several variations and improvements without departing from the concept of the present invention, all of which fall within the protection scope of the present invention. Therefore, the protection scope of the present invention patent shall be subject to the appended claims.

Claims (5)

  1. A binocular keratoconus diagnosis method based on multi-modal data, characterized in that it comprises the following steps:
    1) collecting multi-modal data of both eyes, the data including the corneal refractive four-map set and absolute corneal elevation data of both eyes, the refractive four-map set comprising an anterior corneal axial curvature map, an anterior corneal relative elevation map, a posterior corneal relative elevation map, and a corneal thickness map, and the absolute corneal elevation data comprising absolute elevation data of the anterior and posterior corneal surfaces;
    2) for cases that have already been classified, associating their binocular multi-modal data with the classification labels, and classifying the data according to the specific requirements;
    3) unifying each topography map of the refractive four-map set from step 2) and the anterior/posterior corneal elevation data into data matrices of the same size;
    4) based on the above data, judging early keratoconus of both eyes with four branch methods, denoted branch method A, branch method B, branch method C, and branch method D;
    wherein branch method A: all refractive four-map data matrices are, after data processing, fed into a deep convolutional classification network to identify keratoconus with a given sensitivity and specificity, and a classification result P(A) is output for a given case;
    branch method B: feature values are computed for each map data matrix of the refractive four-map set and fed into an SVM (support vector machine) binary classification method to identify keratoconus with a given sensitivity and specificity, and a classification result P(B) is output for a given case;
    branch method C: the absolute elevation data of the anterior and posterior corneal surfaces are compared with best-fit sphere data to obtain critical thresholds separating keratoconus cases from normal cases, from which a classification result P(C) is output for a given case;
    branch method D: the mean, maximum, and standard deviation of the left-eye and right-eye refractive four-map data matrices are used as feature quantities, and critical thresholds or an SVM classification method are used to obtain the optimal sensitivity and specificity, together with the probability P(D) that both eyes of a given case have keratoconus;
    5) combining the final results of branch methods A, B, C, and D by weighted summation to obtain the final probability that both eyes of the given case have keratoconus.
  2. The binocular keratoconus diagnosis method based on multi-modal data according to claim 1, characterized in that branch method A comprises the following specific steps:
    A-1 data scaling: scaling all refractive four-map data matrices processed in step 3) to 224×224 by linear interpolation;
    A-2 data normalization: splitting the data from step A-1 into a training set and a validation set at a ratio of 7:3, computing on the training set the mean and standard deviation of each of the four map data matrices to obtain 4 means and 4 standard deviations, and normalizing the four-map data matrices of all cases with these means and standard deviations;
    A-3 classification network design based on a deep convolutional network: using a ResNet50 classification network to binary-classify the refractive four-map data matrices, distinguishing normal from keratoconus for a single eye;
    A-4 training of the classification model: concatenating the four-map data matrices along the channel dimension to obtain a 4-channel input, with data augmentation using rotation, translation, and random-blur preprocessing, the binary cross-entropy as the loss function, and ResNet50 weights pretrained on the IMAGENET data set as initial weights followed by fine-tuning, the weights with the smallest gap between training-set and validation-set loss being taken as the training result;
    A-5 model evaluation: predicting on the validation set and comparing against the ground truth to obtain the sensitivity and specificity of branch method A for identifying keratoconus;
    A-6 result output: if the sensitivity and specificity measured in step A-5 meet the requirements, recording the probabilities that the two eyes of a given case have keratoconus under branch method A as p(Al) and p(Ar), the output classification result being P(A) = p(Al) if p(Al) > p(Ar), and p(Ar) otherwise.
  3. The binocular keratoconus diagnosis method based on multi-modal data according to claim 1, characterized in that branch method B comprises the following specific steps:
    B-1 anterior axial curvature features: in the anterior corneal axial curvature data matrix, computing the curvature extremum and its position coordinates, the inferior-superior dioptric power difference (I-S value) at the 6 mm diameter, and the surface regularity index (SRI) and surface asymmetry index (SAI) within the 4.5 mm diameter range;
    B-2 anterior relative elevation feature: in the anterior corneal relative elevation data matrix, computing the maximum elevation and its position coordinates;
    B-3 posterior relative elevation feature: in the posterior corneal relative elevation data matrix, computing the maximum elevation and its position coordinates;
    B-4 corneal thickness features: in the corneal thickness data matrix, computing the minimum thickness and its position coordinates, and the thickness at the corneal apex;
    B-5 distance features: computing the distance from the anterior maximum-elevation position of B-2 to the posterior maximum-elevation position of B-3, the distance from the anterior maximum-elevation position of B-2 to the minimum-thickness position of B-4, and the distance from the posterior maximum-elevation position of B-3 to the minimum-thickness position of B-4;
    B-6 corneal volume feature: integrating the corneal thickness data matrix over the region within a 4.5 mm radius to obtain the corneal volume;
    B-7 normalizing all feature values from steps B-1 to B-6, and splitting the normalized case feature data into a training set and a validation set at a ratio of 7:3;
    B-8 applying the SVM (support vector machine) binary classification method to the normalized training-set features from B-7, using an RBF kernel, with cross-validation and grid search to obtain the optimal C and g for training;
    B-9 model evaluation: predicting on the validation set and comparing against the ground truth to obtain the sensitivity and specificity of this branch method for identifying keratoconus;
    B-10 result output: if the sensitivity and specificity measured in step B-9 meet the requirements, recording the probabilities that the two eyes of a given case have keratoconus under branch method B as p(Bl) and p(Br), the output classification result being P(B) = p(Bl) if p(Bl) > p(Br), and p(Br) otherwise.
  4. The binocular keratoconus diagnosis method based on multi-modal data according to claim 1, characterized in that branch method C comprises the following specific steps:
    C-1 anterior standard relative elevation data: from the absolute elevation data of the anterior and posterior corneal surfaces, fitting a sphere to the anterior absolute elevation data within an 8 mm diameter to obtain the BFS value, the elevation difference between the anterior surface data and the resulting best-fit sphere being taken as the anterior standard relative elevation data;
    C-2 anterior feature elevation data: starting from the anterior absolute elevation data within an 8 mm diameter, excluding the data within a 2 mm radius of the thinnest point and fitting a sphere to obtain a BFS value; taking this BFS as the baseline, offsetting it upward and downward in steps of 0.2 mm by 5 values in each direction to obtain 11 different BFS values, the elevation differences between the anterior surface data and the best-fit spheres at these different BFS values being taken as the anterior feature relative elevation data;
    C-3 anterior enhanced elevation data: taking the differences between the standard relative elevation data from step C-1 and the 11 sets of feature relative elevation data from step C-2 to obtain 11 sets of anterior enhanced data;
    C-4 posterior standard relative elevation data: from the absolute elevation data, fitting a sphere to the posterior absolute elevation data within an 8 mm diameter to obtain the BFS value, the elevation difference between the posterior surface data and the resulting best-fit sphere being taken as the posterior standard relative elevation data;
    C-5 posterior feature elevation data: starting from the posterior absolute elevation data within an 8 mm diameter, excluding the data within a 2 mm radius of the thinnest point and fitting a sphere to obtain a BFS value; taking this BFS as the baseline, offsetting it upward and downward in steps of 0.2 mm by 5 values in each direction to obtain 11 different BFS values, the elevation differences between the posterior surface data and the best-fit spheres at these different BFS values being taken as the posterior feature relative elevation data;
    C-6 posterior enhanced elevation data: taking the differences between the posterior standard relative elevation data from step C-4 and the 11 sets of feature relative elevation data from step C-5 to obtain 11 sets of posterior enhanced data;
    C-7 using the 22 sets of anterior and posterior enhanced data matrices from steps C-3 and C-6 as features, and using all sample data, computing for each set the critical threshold separating keratoconus cases from normal cases;
    C-8 recording the probabilities that the two eyes of a given case have keratoconus under branch method C as p(Cl) and p(Cr), each obtained as a weighted sum whose weights are the differences between each set of enhanced data computed for the current case and the corresponding critical threshold from step C-7, the output classification result being P(C) = p(Cl) if p(Cl) > p(Cr), and p(Cr) otherwise.
  5. The binocular keratoconus diagnosis method based on multi-modal data according to claim 1, characterized in that branch method D comprises the following specific steps:
    D-1 unifying data orientation: mirroring the right-eye refractive four-map data matrices along the column direction so that the nasal/temporal orientation of the left-eye and right-eye matrices is consistent;
    D-2 refractive four-map diff matrices: taking the point-to-point differences between the left-eye and right-eye four-map data matrices and then their absolute values to obtain the four-map diff data matrices;
    D-3 diff feature computation: within the 6 mm diameter data range, computing the mean, maximum, and standard deviation of all data in each four-map diff data matrix as feature quantities;
    D-4 using the 12 mean, maximum, and standard-deviation features of the left-right four-map diff maps from step D-3, and using all sample data, computing for each set the critical threshold separating keratoconus cases from normal cases, or applying an SVM classification method, normalizing all diff-data features for training and testing, to give the optimal sensitivity and specificity;
    D-5 recording the probability P(D) that both eyes of a given case have keratoconus under this branch method, obtained as a weighted sum whose weights are the differences between each diff feature value computed for the current case and the corresponding critical threshold from step D-4.
PCT/CN2021/110812 2021-06-28 2021-08-05 Binocular keratoconus diagnosis method based on multi-modal data WO2023272876A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US17/797,114 US11717151B2 (en) 2021-06-28 2021-08-05 Method for early diagnosis of keratoconus based on multi-modal data
EP21925096.6A EP4365829A1 (en) 2021-06-28 2021-08-05 Binocular keratoconus diagnosis method based on multi-modal data
KR1020227022946A KR20230005108A (ko) 2021-06-28 2021-08-05 Binocular keratoconus diagnosis method based on multi-modal data
JP2022541624A JP7454679B2 (ja) 2021-06-28 2021-08-05 Binocular keratoconus evaluation system based on multi-modal data

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110717940.2A CN113284140B (zh) 2021-06-28 2021-06-28 Binocular keratoconus diagnosis method based on multi-modal data
CN202110717940.2 2021-06-28

Publications (1)

Publication Number Publication Date
WO2023272876A1

Family


Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/110812 WO2023272876A1 (zh) 2021-06-28 2021-08-05 一种基于多模态数据的双眼圆锥角膜诊断方法

Country Status (6)

Country Link
US (1) US11717151B2 (zh)
EP (1) EP4365829A1 (zh)
JP (1) JP7454679B2 (zh)
KR (1) KR20230005108A (zh)
CN (1) CN113284140B (zh)
WO (1) WO2023272876A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115880283B (zh) * 2023-01-19 2023-05-30 北京鹰瞳科技发展股份有限公司 用于检测角膜类型的装置、方法和计算机可读存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170357879A1 (en) * 2017-08-01 2017-12-14 Retina-Ai Llc Systems and methods using weighted-ensemble supervised-learning for automatic detection of ophthalmic disease from images
CN109256207A (zh) * 2018-08-29 2019-01-22 王雁 一种基于XGBoost+SVM混合机器学习诊断圆锥角膜病例的方法
CN110517219A (zh) * 2019-04-01 2019-11-29 刘泉 一种基于深度学习的角膜地形图判别方法及系统
CN110717884A (zh) * 2019-08-30 2020-01-21 温州医科大学 一种基于眼前节断层成像技术的变化一致性参数用于表达角膜不规则性结构改变的方法
CN111160431A (zh) * 2019-12-19 2020-05-15 浙江大学 一种基于多维特征融合的圆锥角膜识别方法及装置

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200146812A1 (en) * 2018-07-02 2020-05-14 Gebauer-Klopotek Patent Verwaltungs-Ug Stabilization of collagen scaffolds
US10468142B1 (en) * 2018-07-27 2019-11-05 University Of Miami Artificial intelligence-based system and methods for corneal diagnosis
US10945598B2 (en) 2019-02-06 2021-03-16 Jichi Medical University Method for assisting corneal severity identification using unsupervised machine learning
CN111340776B (zh) * 2020-02-25 2022-05-03 浙江大学 一种基于多维特征自适应融合的圆锥角膜识别方法和系统
CN112036448B (zh) * 2020-08-11 2021-08-20 上海鹰瞳医疗科技有限公司 圆锥角膜识别方法及设备


Also Published As

Publication number Publication date
US20230190089A1 (en) 2023-06-22
CN113284140B (zh) 2022-10-14
JP2023540651A (ja) 2023-09-26
JP7454679B2 (ja) 2024-03-22
KR20230005108A (ko) 2023-01-09
CN113284140A (zh) 2021-08-20
US11717151B2 (en) 2023-08-08
EP4365829A1 (en) 2024-05-08


Legal Events

Code Title Description
ENP: Entry into the national phase (Ref document number: 2022541624; Country of ref document: JP; Kind code of ref document: A)
121: The EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21925096; Country of ref document: EP; Kind code of ref document: A1)
WWE: WIPO information, entry into national phase (Ref document number: 2021925096; Country of ref document: EP)
NENP: Non-entry into the national phase (Ref country code: DE)
ENP: Entry into the national phase (Ref document number: 2021925096; Country of ref document: EP; Effective date: 20240129)