WO2024003630A1 - Machine learning system and method for intraocular lens selection - Google Patents


Info

Publication number
WO2024003630A1
Authority
WO
WIPO (PCT)
Prior art keywords
iol
machine learning
patient
learning model
output parameter
Application number
PCT/IB2023/055597
Other languages
French (fr)
Inventor
Thomas Padrick
Edwin Jay Sarver
Original Assignee
Alcon Inc.
Application filed by Alcon Inc. filed Critical Alcon Inc.
Publication of WO2024003630A1 publication Critical patent/WO2024003630A1/en

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/0016 Operational features thereof
    • A61B 3/0025 Operational features thereof characterised by electronic signal processing, e.g. eye models
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61F FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
    • A61F 2/00 Filters implantable into blood vessels; Prostheses, i.e. artificial substitutes or replacements for parts of the body; Appliances for connecting them with the body; Devices providing patency to, or preventing collapsing of, tubular structures of the body, e.g. stents
    • A61F 2/02 Prostheses implantable into the body
    • A61F 2/14 Eye parts, e.g. lenses, corneal implants; Implanting instruments specially adapted therefor; Artificial eyes
    • A61F 2/16 Intraocular lenses

Definitions

  • The interface 500 (described with respect to Fig. 5 below) may include a results section 514 that presents, for each IOL power in the IOL set, the IOL power 516 and the predicted post-operative refractive error 518.
  • The predicted refractive error 518 may be expressed in diopters, e.g., as the value (EIOL - DIOL)/p.
  • The machine learning model trained according to the method 300 and utilized according to the method 400 may have various forms, such as those illustrated in Figs. 6, 7A, and 7B. However, these are exemplary only, and any machine learning or artificial intelligence model known in the art may be trained to perform the tasks described herein.
  • Fig. 6 illustrates a method 600 for using clustering to improve the accuracy of the machine learning model described in relation to Figs. 3 and 4.
  • The method 600 may include clustering, at step 602, training data entries based on the values for one or more of the patient attributes 204, the post-operative refractive error 202, and the derived output parameter 206 (e.g., EIOL) in each data entry.
  • Clustering may include using a k-nearest-neighbors (KNN) algorithm. Other clustering approaches may be used, such as k-means, Gaussian mixture models, centroid-based clustering, density-based clustering, distribution-based clustering, and hierarchical clustering.
  • The method 600 may then include receiving, at step 604, patient attributes 204 for an eye of a patient for which an IOL is to be selected for a future surgery. The patient attributes 204 may include an IOL power selected from an IOL set as described above with respect to Fig. 4.
  • The method may include identifying, at step 606, a cluster relevant to the patient attributes 204. The patient attributes 204 may then be processed, at step 608, using a machine learning model specific to the cluster identified at step 606 to generate a predicted EIOL (or another derived output parameter). In this manner, the accuracy of predictions may be improved relative to a machine learning model for a different cluster or one trained for a larger, less-specific set of training data entries.
  • For example, step 606 may include processing the patient attributes 204 according to KNN, which is itself a machine learning model, to identify a cluster of training data entries (i.e., the K nearest neighbors) in the training dataset and then, at step 608, processing the patient attributes 204 according to a second machine learning model trained using that cluster of training data entries, in order to output a predicted EIOL (see the sketch below).
  • The second machine learning model may include a multiple linear regression (MLR) model (KNN+MLR) or a random sample consensus (RANSAC) regression model (KNN+RAN). The second machine learning model may also be any machine learning model known in the art that is trained using the cluster of training data entries, such as a DNN, CNN, multiple polynomial regression (MPR) (2nd order, 3rd order, or higher), a support vector regression model (SVM), or an SVM with a radial basis function kernel (SVM-RBF).
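
A minimal sketch of the KNN+MLR variant described above, in Python with scikit-learn. The synthetic data, feature set, and K are hypothetical choices, not taken from the disclosure:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
# Synthetic stand-in for the training dataset: columns are illustrative
# patient attributes (e.g., AL, K, WTW, implanted IOL power); y is the
# derived output parameter EIOL for each training data entry.
X_train = rng.normal(size=(5000, 4))
y_train = X_train @ np.array([0.5, -1.2, 0.1, 0.9]) + rng.normal(scale=0.1, size=5000)

def predict_eiol_knn_mlr(x_new, X_train, y_train, k=200):
    """Step 606: identify the cluster as the K nearest training entries;
    step 608: fit a multiple linear regression on that cluster only."""
    knn = NearestNeighbors(n_neighbors=k).fit(X_train)
    _, idx = knn.kneighbors(x_new.reshape(1, -1))
    cluster = idx[0]
    mlr = LinearRegression().fit(X_train[cluster], y_train[cluster])
    return mlr.predict(x_new.reshape(1, -1))[0]

print(predict_eiol_knn_mlr(rng.normal(size=4), X_train, y_train))
```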
  • Referring to Fig. 7A, the machine learning model trained at step 306 may include the illustrated ensemble machine learning model 700a. The ensemble machine learning model 700a is a composite of multiple machine learning models 702a-702e, each of which may be of a different type. Non-limiting examples of these types include KNN, MLR, KNN+MLR, KNN+RAN, DNN, multiple polynomial regression (MPR) (2nd order, 3rd order, or higher), a support vector regression model (SVM), and an SVM with a radial basis function kernel (SVM-RBF).
  • Each machine learning model 702a-702e may be separately trained according to the method 300. The outputs of the machine learning models 702a-702e, along with the inputs (patient attributes 204), are passed to a blending algorithm 704. The blending algorithm 704 is itself a machine learning model of any type, such as any of the types discussed herein, and may be trained to either (a) select from among the outputs of the machine learning models 702a-702e or (b) combine those outputs to produce a final result, e.g., a predicted EIOL, as in the sketch below.
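
One way to realize the ensemble of Fig. 7A is with a stacking regressor: the blender (here ridge regression, an arbitrary choice) sees both the base models' EIOL predictions and the raw patient attributes. The base model types and synthetic data are illustrative:

```python
import numpy as np
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))  # synthetic patient attributes
y = X @ np.array([0.5, -1.2, 0.1, 0.9]) + rng.normal(scale=0.1, size=2000)  # EIOL

# Base models 702a-702c of different types, each trained to predict EIOL.
base_models = [
    ("mlr", LinearRegression()),
    ("knn", KNeighborsRegressor(n_neighbors=50)),
    ("svm_rbf", SVR(kernel="rbf")),
]

# Blending algorithm 704: passthrough=True feeds the patient attributes to
# the blender alongside the base models' outputs, as in Fig. 7A.
ensemble = StackingRegressor(base_models, final_estimator=Ridge(), passthrough=True)
ensemble.fit(X, y)
print(ensemble.predict(X[:1]))  # predicted EIOL for one eye
```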
  • Fig. 7B illustrates a gradient boosting machine learning model 700b that is likewise composed of a plurality of machine learning models 702a-702e of different types, such as any of the machine learning model types referenced herein. The first machine learning model 702a takes as an input the patient attributes 204 and produces a predicted EIOL along with a residual error 706a. Each of the other machine learning models 702b-702e takes as an input the patient attributes 204 and the residual error 706a-706d from the preceding stage. The output of the final machine learning model 702e may then be a final EIOL prediction 708. A sketch of this stagewise scheme follows below.
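
A minimal stagewise sketch under one conventional reading of Fig. 7B, in which each later model is fit to the residual error left by the preceding stages and the final prediction sums the stage outputs; the model types and data are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 4))  # synthetic patient attributes
y = X @ np.array([0.5, -1.2, 0.1, 0.9]) + rng.normal(scale=0.1, size=5000)  # EIOL

# Stages 702a-702c of different types; each is fit to the residual error
# (706a, 706b, ...) remaining after the preceding stages.
stages = [
    LinearRegression(),
    KNeighborsRegressor(n_neighbors=50),
    DecisionTreeRegressor(max_depth=4),
]
residual = y.copy()
for model in stages:
    model.fit(X, residual)
    residual -= model.predict(X)  # what the next stage must explain

x_new = rng.normal(size=(1, 4))
final_eiol = sum(m.predict(x_new)[0] for m in stages)  # final EIOL prediction 708
print(final_eiol)
```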
  • The ensemble machine learning model 700a and the gradient boosting machine learning model 700b may provide a marginal benefit relative to any of the machine learning models 702a-702e alone. Experiments conducted by the inventors have found that the ensemble machine learning model 700a or the gradient boosting machine learning model 700b can increase the percentage of predictions that are correct within 0.5 diopters by up to about 2 percent.
  • Tables 1 and 2 list results for various individual machine learning models in the form of the percentage of predicted EIOL values that are within 0.5 diopters of the actual EIOL, as provided in a data entry. The results of Tables 1 and 2 were obtained using patient attributes 204 and post-operative refractive error 202 for patients receiving the AcrySof Monofocal IOL. The results of Table 1 were obtained using data for 17,500 patients, the data including three biometric variables (AL, K, and WTW) for each patient. The results of Table 2 were obtained using data for 5,277 patients; entries labeled “(Ex)” additionally used ACD and LT for each patient.
  • Fig. 8 illustrates an example computing system 800 that implements, at least partly, one or more functionalities described herein in response to inputs to the interface 500. The computing system 800 may also implement the methods 300, 400, and/or 600.
  • The computing system 800 includes a central processing unit (CPU) 802; one or more I/O device interfaces 804, which may allow for the connection of various I/O devices 814 (e.g., keyboards, displays, mouse devices, pen input, etc.) to the computing system 800; a network interface 806 through which the computing system 800 is connected to a network 890 (which may be a local network, an intranet, the internet, or any other group of computing systems communicatively connected to each other); a memory 808; storage 810; and an interconnect 812.
  • CPU 802 may retrieve and execute programming instructions stored in the memory 808. Similarly, CPU 802 may retrieve and store application data residing in the memory 808. The interconnect 812 transmits programming instructions and application data among CPU 802, I/O device interface 804, network interface 806, memory 808, and storage 810. CPU 802 is included to be representative of a single CPU, multiple CPUs, a single CPU having multiple processing cores, and the like.
  • Memory 808 is representative of a volatile memory, such as a random access memory, and/or a nonvolatile memory, such as nonvolatile random access memory, phase change random access memory, or the like. As shown, memory 808 may store a data preparation module 816 for calculating a derived output parameter (e.g., EIOL) based on patient attributes data, a training algorithm 818 for training the machine learning model to predict the derived output parameter, and a prediction module 820 that uses the machine learning model to predict the derived output parameter based on the patient attributes and an IOL power.
  • Storage 810 may be non-volatile memory, such as a disk drive, solid state drive, or a collection of storage devices distributed across multiple storage systems. Storage 810 may optionally store one or more machine learning models 822 trained as described above and training data 824 for training the machine learning models 822.
  • A phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (e.g., a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c or any other ordering of a, b, and c).
  • As used herein, the term “determining” encompasses a wide variety of actions. For example, “determining” may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, “determining” may include resolving, selecting, choosing, establishing and the like.
  • The methods disclosed herein comprise one or more steps or actions for achieving the methods. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims, and the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.
  • The various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software component(s) and/or module(s), including, but not limited to, a circuit, an application specific integrated circuit (ASIC), or a processor. Where there are operations illustrated in the figures, those operations may have corresponding counterpart means-plus-function components with similar numbering.
  • The various illustrative logical blocks, modules, and circuits described in connection with the present disclosure may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein.
  • A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
  • A processing system may be implemented with a bus architecture. The bus may include any number of interconnecting buses and bridges depending on the specific application of the processing system and the overall design constraints. The bus may link together various circuits including a processor, machine-readable media, and input/output devices, among others. A user interface (e.g., keypad, display, mouse, joystick, etc.) may also be connected to the bus. The bus may also link various other circuits, such as timing sources, peripherals, voltage regulators, power management circuits, and the like, which are well known in the art and therefore will not be described any further.
  • The processor may be implemented with one or more general-purpose and/or special-purpose processors. Examples include microprocessors, microcontrollers, DSP processors, and other circuitry that can execute software. Those skilled in the art will recognize how best to implement the described functionality for the processing system depending on the particular application and the overall design constraints imposed on the overall system.
  • The functions may be stored or transmitted over, as one or more instructions or code, on a computer-readable medium. Software shall be construed broadly to mean instructions, data, or any combination thereof, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Computer-readable media include both computer storage media and communication media, such as any medium that facilitates transfer of a computer program from one place to another.
  • The processor may be responsible for managing the bus and general processing, including the execution of software modules stored on the computer-readable storage media. A computer-readable storage medium may be coupled to a processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.
  • The computer-readable media may include a transmission line, a carrier wave modulated by data, and/or a computer-readable storage medium with instructions stored thereon separate from the wireless node, all of which may be accessed by the processor through the bus interface. Alternatively, or in addition, the computer-readable media, or any portion thereof, may be integrated into the processor, as may be the case with cache and/or general register files.
  • Machine-readable storage media may include, by way of example, RAM (Random Access Memory), flash memory, ROM (Read Only Memory), PROM (Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), registers, magnetic disks, optical disks, hard drives, any other suitable storage medium, or any combination thereof. The machine-readable media may be embodied in a computer-program product.
  • A software module may comprise a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media. The computer-readable media may comprise a number of software modules, which include instructions that, when executed by an apparatus such as a processor, cause the processing system to perform various functions. The software modules may include a transmission module and a receiving module. Each software module may reside in a single storage device or be distributed across multiple storage devices.
  • By way of example, a software module may be loaded into RAM from a hard drive when a triggering event occurs. The processor may load some of the instructions into cache to increase access speed. One or more cache lines may then be loaded into a general register file for execution by the processor.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Biomedical Technology (AREA)
  • Signal Processing (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

Particular embodiments disclosed herein provide an apparatus and corresponding methods for guiding selection of an IOL. A machine learning model is trained to generate an output parameter based on patient attribute data including dimensions of a patient's eye and a property of an IOL. The output parameter may be an emmetropic IOL power (EIOL) corresponding to an IOL that, if substituted for the implanted IOL, would reduce the post-operative refractive error to zero. The machine learning model is trained with data from past IOL implantation procedures, including patient attribute data (eye dimensions and implanted IOL power) and an output parameter that is derived by summing the implanted IOL power and the post-operative refractive error scaled by a scaling factor.

Description

MACHINE LEARNING SYSTEM AND METHOD FOR INTRAOCULAR LENS SELECTION
TECHNICAL FIELD
[0001] The present disclosure relates generally to methods for selecting the refractive power for an intraocular lens (IOL) in order to minimize the post-operative refractive error following cataract surgery.
BACKGROUND
[0002] Light received by the human eye passes through the transparent cornea covering the iris and pupil of the eye. The light is transmitted through the pupil and is focused by a crystalline lens positioned behind the pupil in a structure called the capsular bag. The light is focused by the lens onto the retina, which includes rods and cones capable of generating nerve impulses in response to the light.
[0003] Through age or disease, the crystalline lens may become cloudy, a condition known as a cataract. Cataracts, and other conditions, are readily treated by removing the crystalline lens and inserting an artificial lens, known as an intraocular lens (IOL). The IOL may be fabricated to additionally correct for aberrations of the patient’s eye, such as myopia, hyperopia, spherical aberration, cylindrical aberration, and astigmatism. Although the refractive error of the patient’s eye prior to surgery may be readily determined, it is extremely difficult to estimate the combined refractive error of the patient’s eye and the IOL following implantation and healing of the patient’s eye.
[0004] Currently, sophisticated IOL power calculation formulas and models are used to drive the selection of an IOL for a patient’s eye. These formulas and models take as input preoperative and/or intra-operative data associated with the patient’s eye and output an estimated post-operative refractive error of the patient’s eye for a given IOL power. However, although widely used and providing adequate accuracy in many instances, the existing IOL power calculation formulas and models are often not accurate enough.
[0005] It would be an advancement in the art to facilitate the selection of the refractive properties of an IOL in order to minimize post-operative refractive error of a patient’s eye following implantation of the IOL.
BRIEF SUMMARY
[0006] The present disclosure relates generally to a system for selecting an IOL in order to reduce post-operative refractive error.
[0007] A machine learning model is trained using a plurality of training data entries. The training data entries may each include (a) past pre-operative data for an eye of a patient, the past pre-operative data including biometric data for the eye of the patient and a property of an IOL implanted in the eye of the patient and (b) an actual output parameter derived from both of the property of the IOL and a post-operative refractive error of the eye of the patient following implantation of the IOL. The machine learning model is trained to generate a predicted output parameter for input pre-operative data corresponding to a future IOL implantation procedure.
[0008] The following description and the related drawings set forth in detail certain illustrative features of one or more embodiments.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] The appended figures depict certain aspects of the one or more embodiments and are therefore not to be considered limiting of the scope of this disclosure.
[0010] Fig. 1 illustrates dimensions of a patient’s eye, in accordance with certain embodiments.
[0011] Fig. 2 illustrates a training data entry used to train a machine learning model for guiding the selection of an IOL, in accordance with certain embodiments.
[0012] Fig. 3 is a process flow diagram of an example method for processing postoperative data to obtain training data suitable for training a machine learning model, in accordance with certain embodiments.
[0013] Fig. 4 is a process flow diagram of a method for guiding selection of an IOL using a machine learning model, in accordance with certain embodiments.
[0014] Fig. 5 is a schematic block diagram of an interface for receiving pre-operative data and outputting guidance for selecting an IOL, in accordance with certain embodiments.
[0015] Fig. 6 is a process flow diagram of a method for selecting a machine learning model of a plurality of machine learning models for guiding the selection of an IOL, in accordance with certain embodiments.
[0016] Fig. 7A is a schematic block diagram of an ensemble of machine learning models for guiding the selection of an IOL, in accordance with certain embodiments.
[0017] Fig. 7B is a schematic block diagram of machine learning models combined to implement gradient boosting for guiding the selection of an IOL, in accordance with certain embodiments.
[0018] Fig. 8 illustrates an example computing device that implements, at least partly, one or more functionalities of training and utilizing a machine learning model for guiding the selection of an IOL, in accordance with certain embodiments.
[0019] To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the drawings. It is contemplated that elements and features of one embodiment may be beneficially incorporated in other embodiments without further recitation.
DETAILED DESCRIPTION
[0020] As described above, many IOL power calculation formulas and models are currently widely used in the industry. However, the existing IOL power calculation formulas and models are not accurate enough when estimating or predicting a post-operative refractive error for a given IOL power. For example, a variety of machine learning models have been developed to take certain patient data as input and predict or recommend an IOL power that would minimize the post-operative refractive error for the patient. Many of these machine learning models are, however, trained to predict the post-operative refractive error, meaning the post-operative refractive error is itself the output parameter. However, a technical problem exists with training a machine learning model on the basis of the post-operative refractive error being the output parameter. In particular, the technical problem associated with such machine learning models is introduced by the nature of the post-operative refractive error, which, as a data point, generally has a highly-variable value. As such, training a machine learning model to predict the post-operative refractive error directly can result in overfitting by which the machine learning model simply reproduces the random variation in the post-operative refractive error for the training data entries.
[0021] Accordingly, the embodiments herein provide a technical solution to the technical problem described above by providing systems and methods for training one or more machine learning models for guiding the selection of an IOL, where the one or more machine learning models are trained using an output parameter that is derived from the post-operative refractive error. For example, a training dataset may be used including training data entries that each include various patient data as well as an output parameter derived from a measured post-operative refractive error. In particular, for each training data entry, the output parameter may be obtained from a combination of one or more parameters describing the implanted IOL and the post-operative refractive error. For example, an output parameter may be calculated by combining, for each training data entry, the post-operative refractive error with the IOL power of the IOL that was implanted in the patient’s eye. As such, the resulting trained machine learning model would provide smoother and more reliable predictions with respect to variations in IOL power. In one example, the output parameter may be an emmetropic IOL (EIOL) power, which is calculated as described in more detail below.
[0022] Fig. 1 illustrates an eye 100 including an outer layer, shown as the sclera 102. The cornea 104 is a curved transparent layer at anterior side of the eye. The cornea 104 cooperates with the crystalline lens 106 to focus light onto the retina 108, which includes light-detecting nerve cells. The crystalline lens 106 is contained within a capsular bag 110. The iris 112 is positioned between the cornea 104 and the crystalline lens 106.
[0023] A refractive error of the eye 100 is based on the various dimensions of the various optical components of the eye 100 and may be used according to the methods described herein in order to guide selection of an IOL that minimizes the post-operative refractive error. These dimensions may include some or all of the average power K of the cornea 104, the white-to- white distance (WTW), the anterior chamber depth (ACD), the lens thickness (LT), axial length (AL), and/or other dimensions. WTW may be defined as the diameter of the opening in the area of the sclera 102 occupied by the cornea 104 and iris 112. ACD may be defined as the distance between the anterior pole (outermost point) of the cornea 104 and the crystalline lens 106. LT may be defined as the thickness of the crystalline lens 106 along the optical axis of the eye 100. AL may be defined as the distance between the anterior pole of the cornea 104 and the retina 108, specifically the Bruch’s membrane of the retina 108.
[0024] The dimensions of the eye 100 may be measured using one or more pre-operative and/or intra-operative imaging devices, such as an optical coherence tomography (OCT) device, a rotating camera (e.g., a Scheimpflug camera), a magnetic resonance imaging (MRI) device, a keratometer, an ophthalmometer, an optical biometer, an intra-operative aberrometer, and/or any other imaging device or approach. The dimensions of the eye 100 may also be characterized by evaluating an image of the eye. For example, image 100 may be an OCT image that may be processed with a machine learning model to derive an array of values representing the eye 100. The machine learning model may be embodied as a convolution neural network (CNN), deep neural network (DNN), or other type of machine learning model.
[0025] The machine learning model may be implemented as, for example, an autoencoder trained to generate the array of values. In some embodiments, the array of values is the output of an intermediate layer (i.e., not the first or final layer) of a machine learning model trained to perform a task such as estimating the refractive error from an image of an eye. The OCT image, or labels of the anatomy of the eye 100 in the OCT image, may additionally or alternatively be processed by means of a transform, such as a Fourier transform or a Karhunen-Loève (K-L) transform, to obtain the array of values. The OCT image, or labels of anatomy of the eye 100 in the OCT image, may additionally or alternatively be processed to obtain the Gaussian power and/or principal curvature of the anatomy of the eye 100 in order to obtain the array of values. The OCT image, or labels of anatomy of the eye 100 in the OCT image, may additionally or alternatively be reduced using a cognitive network of tasks (CogNet) to obtain the array of values.
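As one illustration of the intermediate-representation idea above, the sketch below uses a hypothetical PyTorch encoder (not taken from the disclosure) to reduce an OCT image to a compact array of values; the layer sizes and the 16-element bottleneck are arbitrary choices:

```python
import torch
import torch.nn as nn

# Hypothetical encoder that reduces a single-channel OCT B-scan to a compact
# "array of values" representing the eye. In an autoencoder, this would be the
# encoder half, trained with a matching decoder to reconstruct the image; the
# bottleneck output is kept as the feature array.
encoder = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 16),
)

oct_image = torch.randn(1, 1, 256, 256)  # placeholder OCT image tensor
array_of_values = encoder(oct_image)     # shape (1, 16)
```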
[0026] As described above, certain embodiments herein provide one or more machine learning models that are trained using an output parameter that is derived from post-operative refractive error. To train the one or more machine learning models, one or more training datasets including patient data associated with many different patients (e.g., hundreds, thousands, or more patients) may be used. Each of the one or more training datasets may include many training data records or entries, each associated with a different patient.
[0027] Fig. 2 illustrates an example training data entry 200 including patient data that may be used to train a machine learning model to provide guidance for the selection of an IOL. The example training data entry 200 may be provided for each eye operated on. As shown, the training data entry 200 includes patient attributes 204 (comprising patient demographic data, pre-operative biometric data, intra-operative biometric data, and implanted IOL power), post-operative refractive error 202, and a derived output parameter 206.
[0028] The pre-operative biometric data may include any of the dimensions described above with respect to Fig. 1, an array of values derived from an image of the eye 100 as described above, and/or any diagnostic/imaging data (or information derived therefrom) provided by pre-operative diagnostic/imaging systems and devices. The intra-operative biometric data may include any of the dimensions described above with respect to Fig. 1, an array of values derived from an image of the eye 100 as described above, and/or any diagnostic/imaging data (or information derived therefrom) provided by intra-operative imaging systems and devices. The patient attributes 204 may also include the power and/or type of the IOL that is actually implanted during the operation. The IOL power may be selected before an operation is commenced or may be selected based on one or more measurements of refractive error obtained during an operation, such as after removal of the crystalline lens. The IOL power may be measured in diopters. The IOL power may be augmented with other attributes of the IOL, such as properties of the IOL correcting for astigmatism or spherical aberration.
[0029] The patient attributes 204 may further include demographic data. The outcomes for patients may have a degree of correlation to the patient’s age, gender, ethnicity, race, and other demographic factors. Accordingly, some or all of these items of demographic data may be included in the patient attributes 204. Other values, such as an identifier of the surgeon, the clinic where the surgery was performed, and the location (country, state, city, etc.) where the surgery was performed, may also be used as patient attributes 204 that are potentially correlated to the post-operative refractive error 202.
[0030] The post-operative refractive error 202 may include the spherical equivalent (SE) of the patient’s eye following implantation of the IOL whose power is specified in the training data entry 200. Other measures, such as astigmatism and spherical aberration, may also be included in the post-operative refractive error 202. Each training data entry 200 may further include a derived output parameter 206 used as the desired output that the machine learning model is trained to generate. In certain embodiments, the output parameter is a function of the post-operative refractive error 202 and one or more data points provided as part of the patient attributes 204, such as the IOL power. For example, the derived output parameter 206 may be the emmetropic equivalent IOL power (EIOL) as defined below.
[0031] Fig. 3 illustrates an example method 300 for training a machine learning model for guiding the selection of an IOL based on a derived output parameter (e.g., EIOL). The machine learning model may be trained using a training dataset including training data entries, such as the training data entry 200. Note that the training dataset may be routinely updated. For example, information about new patients may be collected routinely and used to update the training dataset. Note also that the method 300 may be executed separately for each type of IOL, or class of types of IOL, resulting in a trained machine learning model for each type of IOL (e.g., monofocal, set of specific monofocal IOL models, multifocal, toric, etc.).
[0032] The method 300 may include receiving, at step 302, patient attributes 204 and a postoperative refractive error 202 for each training data entry 200. In the following discussion, training of “the machine learning model” is described with the understanding that the machine learning model may be composed of multiple machine learning models that individually or collectively perform the tasks described in relation to, e.g., Figs. 4, 6, 7A, and 7B and the corresponding description.
[0033] The method 300 may include, for each set of patient attributes 204 and post-operative refractive error 202 in a corresponding data entry 200, deriving, at step 304, an output parameter 206 from the post-operative refractive error 202 and one or more items of the patient attributes 204, such as the IOL power of an implanted IOL. For example, the derived output parameter 206 may be the EIOL, which is an approximation of an IOL power that, if used in place of the IOL power of the implanted IOL, would correct the corresponding patient’s postoperative refractive error 202.
[0034] In one example, let DIOL be the IOL power in diopters and let R be the post-operative refractive error 202 in the form of a spherical equivalent. EIOL may then be calculated as EIOL = DIOL + p*R, where p is a scaling factor. The value of p may be a fixed predetermined amount. For example, p may be a value between 0.64 and 0.72, between 0.66 and 0.7, or between 0.675 and 0.685. Experiments conducted by the inventors have found 0.68 to be suitable for p for most applications. The value of p is a tunable parameter and may be adjusted in order to improve results. The value of p may also be the result of a more complex function of one or both of DIOL and R and other patient attributes 204. Once the EIOL is calculated, it is then added to the corresponding patient data record.
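As a worked instance of this formula, with p fixed at the inventors' suggested 0.68 (the example powers are illustrative):

```python
def derive_eiol(d_iol: float, post_op_se: float, p: float = 0.68) -> float:
    """Derived output parameter EIOL = DIOL + p*R, where DIOL is the implanted
    IOL power (diopters) and R is the post-operative spherical equivalent."""
    return d_iol + p * post_op_se

# Example: a 21.0 D IOL was implanted and the eye healed to a -0.50 D
# spherical equivalent; the emmetropic-equivalent IOL power is then:
print(derive_eiol(21.0, -0.50))  # 21.0 + 0.68 * (-0.50) = 20.66 D
```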
[0035] Post-operative values for astigmatism or spherical aberration individually (as opposed to the spherical equivalent representing both values) or any other type of refractive error may be handled in a like manner: an output parameter may be derived as the property of the IOL compensating for the type of refractive error combined (e.g., summed) with a post-operative value for the type of refractive error multiplied by a scaling factor. The scaling factor may be selected using a known relationship between the type of refractive error and the property of the IOL compensating for the type of refractive error or may be determined experimentally. In the following description, spherical equivalent is discussed with the understanding that any other type of refractive error may be substituted and the property of the IOL for compensating for that type of refractive error may be selected using the embodiments disclosed herein.
[0036] Each training data entry 200 may include a set of patient attributes 204 as input and the EIOL as the desired output, the EIOL being derived from the patient attributes 204 and the corresponding post-operative refractive error 202. Many thousands of training data entries may be used to train the machine learning model. The machine learning model may then be trained, at step 306, by, for each training data entry, processing the patient attributes 204 using the machine learning model to obtain an estimated EIOL. The estimated EIOL may be compared to the EIOL of the corresponding training data entry, and one or more weights of the machine learning model may be modified by a training algorithm according to a difference between the estimated EIOL and the actual EIOL of the training data entry. Once the machine learning model is trained, such that the difference between the estimated EIOL and the EIOL of the training data entries is minimized or approaches zero, the machine learning model can be deployed and used, as described in relation to Fig. 4. A minimal sketch of this training step follows.
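A minimal end-to-end sketch of steps 302 through 306, assuming a multiple linear regression model (one of the model types evaluated in Tables 1 and 2) and synthetic stand-in data; the attribute columns and value ranges are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 17_500  # Table 1 used data for 17,500 patients

# Step 302 (synthetic stand-ins): biometric attributes [AL, K, WTW] and the
# implanted IOL power for each training data entry.
biometrics = rng.normal(size=(n, 3))
d_iol = rng.choice(np.arange(6.0, 30.5, 0.5), size=n)  # implanted powers (D)
post_op_se = rng.normal(scale=0.5, size=n)             # measured error R (noisy)

# Step 304: derive the output parameter EIOL = DIOL + p*R with p = 0.68.
eiol = d_iol + 0.68 * post_op_se

# Step 306: fit the model so its EIOL estimate approaches the derived EIOL.
X = np.column_stack([biometrics, d_iol])
model = LinearRegression().fit(X, eiol)
```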
[0037] Fig. 4 illustrates an example method 400 for utilizing the machine learning model trained according to the method 300 to help with selecting an IOL power to be used in future surgery for an eye of a patient. The method 400 may include receiving, at step 402, the patient attributes 204 for the eye of the patient, not including an IOL power. The method 400 may include selecting, at step 404, an initial IOL power. The initial IOL power may be selected based on the pre-operative and/or intra-operative biometric data using any approach known in the art, including any formula-based selection criteria, such as the Barrett formula, refractive vergence formula, or the like.
[0038] The method 400 may then include generating, at step 406, an IOL set. The IOL set may include a range of IOL powers, including IOL powers less than or greater than the initial IOL power. Where sufficient computation and storage capacity is available, step 404 may be omitted and the IOL set may simply be the entire set of available IOL powers. The IOL powers of the IOL set may be constrained to be commercially available IOL powers, which are typically available in increments of 0.5 diopters.
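One possible realization of step 406 is sketched below, offered only as an illustration: candidate powers are drawn from a symmetric window around the initial estimate in the commercially common 0.5 diopter increments. The window half-width of 2.0 diopters and the helper name are assumptions, not values taken from the disclosure.

import numpy as np

def build_iol_set(initial_power: float, half_range: float = 2.0,
                  step: float = 0.5) -> np.ndarray:
    """Candidate IOL powers centered on the initial formula-based
    estimate, constrained to commercially available 0.5 D increments."""
    lo = step * round((initial_power - half_range) / step)
    count = int(round(2 * half_range / step)) + 1
    return lo + step * np.arange(count)

print(build_iol_set(21.0))
# [19.  19.5 20.  20.5 21.  21.5 22.  22.5 23. ]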
[0039] The method 400 may include predicting, at step 408, for each IOL power in the IOL set, a post-operative refractive error based on the EIOL predicted by the machine learning model. The input to the machine learning model at step 408 for each IOL power in the IOL set may include the patient attributes 204 from step 402 and that IOL power. For example, a predicted post-operative refractive error may be calculated as (EIOL - DIOL)/p, where DIOL is an IOL power from the IOL set and EIOL is the value predicted by the machine learning model. The predicted post-operative refractive errors may then be displayed in conjunction with the corresponding IOL powers, at step 410, such as on the display of a computing device. Where the IOL set is very large, step 410 may include displaying only the N IOL powers from the IOL set with the N lowest predicted post-operative refractive errors, where N is an integer, such as a value between three and twenty.
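Steps 408 and 410 may be sketched as follows, reusing a trained model such as the one in the training sketch above together with the same scaling factor p = 0.68; the helper name and the choice to rank candidates by the magnitude of the predicted error are assumptions made for illustration.

import numpy as np

P = 0.68  # the same scaling factor used when deriving EIOL

def rank_iol_powers(model, biometrics, iol_set, n_best=5):
    """Predict a post-operative refractive error for every candidate
    IOL power and return the n_best smallest in magnitude (step 410)."""
    # One model input row per candidate: patient attributes + IOL power.
    rows = np.array([list(biometrics) + [d] for d in iol_set])
    eiol_pred = model.predict(rows)
    r_pred = (eiol_pred - np.asarray(iol_set)) / P  # (EIOL - DIOL)/p
    order = np.argsort(np.abs(r_pred))[:n_best]
    return [(float(iol_set[i]), float(r_pred[i])) for i in order]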
[0040] In this manner, a surgeon or other professional may readily observe which available IOL power will yield the lowest expected post-operative refractive error. Experiments conducted by the inventors have shown that simply using post-operative refractive error as the desired output when training machine learning models is ineffective due to the large amount of noise in measured post-operative refractive error. As such, by using a derived output parameter, such as the EIOL that is derived from a property of the IOL as well as the post-operative refractive error, improved results were obtained.

[0041] Fig. 5 illustrates an example interface 500 for utilizing the machine learning model described in relation to Figs. 3 and 4. The interface 500 may include some or all of a field 502 for inputting a patient identifier, a field 504 for inputting which eye (right or left) is being operated on, a field 506 for inputting an IOL type, a field 508 for selecting the machine learning model from among a plurality of available machine learning models, one or more fields 510 for inputting biometric data for the patient's eye (and possibly selecting which dimensions or other biometric data to use), and a field 512 for inputting a target post-operative refractive error. Fields may be provided for inputting other patient attributes such as age, gender, ethnicity, an eye identifier (right or left), surgeon name, clinic name, and/or location. In addition, other fields may be provided for inputting intra-operative data parameters, including intra-operative measurements of an aphakic eye. The fields of the interface 500 may be manually filled or automatically populated with data obtained from other systems or devices (e.g., imaging systems, clinic servers, health databases, electronic medical record (EMR) systems, surgical consoles, digital microscopes, etc.).
[0042] The interface 500 may include a results section 514 that presents, for each IOL power in the IOL set, the IOL power 516 and the predicted post-operative refractive error 518. The predicted refractive error 518 may be measured in diopters, such as the value (EIOL - DIOL)/p.
[0043] Referring to Figs. 6, 7A, and 7B, the machine learning model trained according to the method 300 and utilized according to the method 400 may have various forms, such as those illustrated in Figs. 6, 7A, and 7B. However, these are exemplary only, and any machine learning or artificial intelligence model known in the art may be trained to perform the tasks described herein.
[0044] Fig. 6 illustrates a method 600 for using clustering to improve accuracy of the machine learning model described in relation to Figs. 3 and 4. The method 600 may include clustering, at step 602, training data entries based on the values for one or more of the patient attributes 204, the post-operative refractive error 202, and the derived output parameter 206 (e.g., EIOL) in each data entry. Clustering may include using a k-nearest-neighbors (KNN) algorithm. Other clustering approaches may be used, such as k-means, Gaussian mixture models, centroid-based clustering, density-based clustering, distribution-based clustering, and hierarchical clustering.

[0045] The method 600 may then include receiving, at step 604, patient attributes 204 for an eye of a patient for which an IOL is to be selected for a future surgery. The patient attributes 204 may include an IOL power selected from an IOL set as described above with respect to Fig. 4. The method may include identifying, at step 606, a cluster relevant to the patient attributes 204. The patient attributes 204 may then be processed, at step 608, using a machine learning model specific to the cluster identified at step 606 to generate a predicted EIOL (or another derived output parameter). By using a machine learning model trained using only training data entries in a cluster relevant to the patient attributes 204, the accuracy of predictions of the machine learning model may be improved relative to a machine learning model for a different cluster or one trained on a larger, less-specific set of training data entries.
[0046] For example, step 606 may include processing the patient attributes 204 according to KNN, which is itself a machine learning model, to identify a cluster of training data entries (i.e., the K nearest neighbors) in the training dataset and then, at step 608, processing the patient attributes 204 according to a second machine learning model trained using that cluster of training data entries, in order to output a predicted EIOL. The second machine learning model may include a multiple linear regression (MLR) model (KNN+MLR) or a random sample consensus (RANSAC) regression model (KNN+RAN). The second machine learning model may also include any machine learning model known in the art that is trained using the cluster of training data entries, such as a DNN, CNN, multiple polynomial regression (MPR) model (2nd order, 3rd order, or higher), support vector regression model (SVM), or SVM with a radial basis function kernel (SVM-RBF).
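A minimal sketch of the KNN+MLR variant using scikit-learn primitives follows; the neighbor count k is an illustrative assumption. Swapping LinearRegression for scikit-learn's RANSACRegressor would yield the KNN+RAN variant in the same way.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import NearestNeighbors

def knn_mlr_predict(X_train, y_train, x_query, k=200):
    """KNN+MLR: fit a multiple linear regression on only the k training
    entries nearest the query, then predict the query's EIOL."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    _, idx = nn.kneighbors(np.asarray(x_query).reshape(1, -1))
    local = idx[0]  # indices of the cluster of nearest training entries
    mlr = LinearRegression().fit(X_train[local], y_train[local])
    return float(mlr.predict(np.asarray(x_query).reshape(1, -1))[0])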
[0047] Referring to Fig. 7A, the machine learning model trained at step 306 may include the illustrated ensemble machine learning model 700a. The ensemble machine learning model 700a is a composite of multiple machine learning models 702a-702e. Each machine learning model 702a-702e may be of a different type. Non-limiting examples of these types include KNN, MLR, KNN+MLR, KNN+RAN, DNN, multiple polynomial regression (MPR) (2nd order, 3rd order, or higher), support vector regression (SVM), and SVM with a radial basis function kernel (SVM-RBF).
[0048] Each machine learning model 702a-702e may be separately trained according to the method 300. The outputs of the machine learning models 702a-702e along with the inputs (patient attributes 204) are passed to a blending algorithm 704. The blending algorithm 704 is itself a machine learning model of any type, such as any of the types of machine learning models discussed herein. The blending algorithm 704 may be trained to either (a) select from among the outputs of the machine learning models 702a-702e or (b) combine the outputs of the machine learning models 702a-702e to produce a final result, e.g., a predicted EIOL.
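Stacking, as implemented by scikit-learn's StackingRegressor, is a close analogue of the blending described here and is sketched below under that assumption: the base learners stand in for the machine learning models 702a-702e, the final estimator plays the role of the blending algorithm 704, and passthrough=True mirrors the passing of the raw patient attributes 204 to the blender. The particular base models are illustrative choices, not ones prescribed by the disclosure.

from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import LinearRegression, RANSACRegressor
from sklearn.svm import SVR

ensemble = StackingRegressor(
    estimators=[
        ("mlr", LinearRegression()),
        ("ransac", RANSACRegressor(random_state=0)),
        ("svr_rbf", SVR(kernel="rbf")),
        ("forest", RandomForestRegressor(random_state=0)),
    ],
    final_estimator=LinearRegression(),  # the blending model
    passthrough=True,  # the blender also sees the raw input attributes
)
# ensemble.fit(X_train, y_train); ensemble.predict(X_new)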
[0049] Fig. 7B illustrates a gradient boosting machine learning model 700b that is likewise composed of a plurality of machine learning models 702a-702e of different types, such as any of the machine learning model types referenced herein. The first machine learning model 702a takes as an input the patient attributes 204 and produces a predicted EIOL along with a residual error 706a. Each of the other machine learning models 702b-702e takes as an input the patient attributes 204 and the residual error 706a-706d from a preceding stage. The output of the final machine learning model 702e may then be a final EIOL prediction 708.
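A classic residual-fitting formulation of the arrangement of Fig. 7B is sketched below, with one simplification noted as an assumption: the disclosure describes each stage receiving the preceding stage's residual error as an additional input, whereas this sketch fits each stage to the running residual as its target, which is the standard gradient boosting construction. The particular stage types are arbitrary illustrative choices.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

def fit_boosted_stages(X, y, stages):
    """Fit each stage to the residual error left by the stages before
    it; the final EIOL prediction is the sum of all stage outputs."""
    residual = np.asarray(y, dtype=float).copy()
    for model in stages:
        model.fit(X, residual)
        residual = residual - model.predict(X)
    return stages

def boosted_predict(stages, X):
    return np.sum([m.predict(X) for m in stages], axis=0)

# Heterogeneous stage types echo the mix of model types in Fig. 7B.
stages = [LinearRegression(), DecisionTreeRegressor(max_depth=3),
          SVR(kernel="rbf")]
# fit_boosted_stages(X_train, y_train, stages)
# eiol_final = boosted_predict(stages, X_new)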
[0050] The ensemble machine learning model 700a and gradient boosting machine learning model 700b may provide a marginal benefit relative to any of the machine learning models 702a-702e alone. Experiments conducted by the inventors have found that the ensemble machine learning model 700a or gradient boosting machine learning model 700b can increase the percentage of predictions that are correct within 0.5 diopters by up to about 2 percent.
[0051] Tables 1 and 2 list results for various individual machine learning models in the form of the percentage of predicted EIOL values that are within 0.5 diopters of the actual EIOL, as provided in a data entry. The results of Tables 1 and 2 were obtained using patient attributes 204 and post-operative refractive error 202 for patients receiving the Acrysof Monofocal IOL. The results of Table 1 were obtained using data for 17,500 patients, the data including three biometric variables (AL, K, and WTW) for each patient. The results of Table 2 were obtained using data for 5,277 patients. Entries labeled "(Ex)" additionally used ACD and LT for each patient. Another metric for characterizing the accuracy of a machine learning model is the root-mean-square (RMS) of the errors (predicted EIOL - EIOL from training data entry).

Table 1: Experimental Results for Three Variables (AL, K, WTW)

[Table 1 values were published as an image in the original document and are not recoverable from the text.]

Table 2: Experimental Results for Five Variables (AL, K, WTW, ACD, LT)

[Table 2 values were published as an image in the original document and are not recoverable from the text.]
[0052] Fig. 8 illustrates an example computing system 800 that implements, at least partly, one or more functionalities described herein in response to inputs to the interface 500. The computing system 800 may also implement the methods 300, 400, and/or 600.
[0053] As shown, computing system 800 includes a central processing unit (CPU) 802; one or more I/O device interfaces 804, which may allow for the connection of various I/O devices 814 (e.g., keyboards, displays, mouse devices, pen input, etc.) to computing system 800; a network interface 806 through which computing system 800 is connected to network 890 (which may be a local network, an intranet, the internet, or any other group of computing systems communicatively connected to each other); a memory 808; storage 810; and an interconnect 812.
[0054] CPU 802 may retrieve and execute programming instructions stored in the memory 808. Similarly, CPU 802 may retrieve and store application data residing in the memory 808. The interconnect 812 transmits programming instructions and application data among CPU 802, I/O device interface 804, network interface 806, memory 808, and storage 810. CPU 802 is included to be representative of a single CPU, multiple CPUs, a single CPU having multiple processing cores, and the like.
[0055] Memory 808 is representative of a volatile memory, such as a random access memory, and/or a nonvolatile memory, such as nonvolatile random access memory, phase change random access memory, or the like. As shown, memory 808 may store a data preparation module 816 for calculating a derived output parameter (e.g., EIOL) based on patient attribute data and a training algorithm 818 for training the machine learning model to predict the derived output parameter. The memory 808 may store a prediction module 820 that uses the machine learning model to predict the derived output parameter based on the patient attributes and an IOL power.
[0056] Storage 810 may be non-volatile memory, such as a disk drive, solid state drive, or a collection of storage devices distributed across multiple storage systems. Storage 810 may optionally store one or more machine learning models 822 trained as described above and training data 824 for training the machine learning models 822.
Additional Considerations
[0057] The preceding description is provided to enable any person skilled in the art to practice the various embodiments described herein. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments. For example, changes may be made in the function and arrangement of elements discussed without departing from the scope of the disclosure. Various examples may omit, substitute, or add various procedures or components as appropriate. Also, features described with respect to some examples may be combined in some other examples. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method that is practiced using other structure, functionality, or structure and functionality in addition to, or other than, the various aspects of the disclosure set forth herein. It should be understood that any aspect of the disclosure disclosed herein may be embodied by one or more elements of a claim.
[0058] As used herein, a phrase referring to "at least one of" a list of items refers to any combination of those items, including single members. As an example, "at least one of: a, b, or c" is intended to cover a, b, c, a-b, a-c, b-c, and a-b-c, as well as any combination with multiples of the same element (e.g., a-a, a-a-a, a-a-b, a-a-c, a-b-b, a-c-c, b-b, b-b-b, b-b-c, c-c, and c-c-c or any other ordering of a, b, and c).
[0059] As used herein, the term “determining” encompasses a wide variety of actions. For example, “determining” may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, “determining” may include resolving, selecting, choosing, establishing and the like.
[0060] The methods disclosed herein comprise one or more steps or actions for achieving the methods. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims. Further, the various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software component(s) and/or module(s), including, but not limited to a circuit, an application specific integrated circuit (ASIC), or processor. Generally, where there are operations illustrated in figures, those operations may have corresponding counterpart means-plus- function components with similar numbering.
[0061] The various illustrative logical blocks, modules and circuits described in connection with the present disclosure may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
[0062] A processing system may be implemented with a bus architecture. The bus may include any number of interconnecting buses and bridges depending on the specific application of the processing system and the overall design constraints. The bus may link together various circuits including a processor, machine- readable media, and input/output devices, among others. A user interface (e.g., keypad, display, mouse, joystick, etc.) may also be connected to the bus. The bus may also link various other circuits such as timing sources, peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further. The processor may be implemented with one or more general-purpose and/or special-purpose processors. Examples include microprocessors, microcontrollers, DSP processors, and other circuitry that can execute software. Those skilled in the art will recognize how best to implement the described functionality for the processing system depending on the particular application and the overall design constraints imposed on the overall system.
[0063] If implemented in software, the functions may be stored or transmitted over as one or more instructions or code on a computer-readable medium. Software shall be construed broadly to mean instructions, data, or any combination thereof, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Computer-readable media include both computer storage media and communication media, such as any medium that facilitates transfer of a computer program from one place to another. The processor may be responsible for managing the bus and general processing, including the execution of software modules stored on the computer-readable storage media. A computer-readable storage medium may be coupled to a processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. By way of example, the computer-readable media may include a transmission line, a carrier wave modulated by data, and/or a computer readable storage medium with instructions stored thereon separate from the wireless node, all of which may be accessed by the processor through the bus interface. Alternatively, or in addition, the computer-readable media, or any portion thereof, may be integrated into the processor, such as the case may be with cache and/or general register files. Examples of machine-readable storage media may include, by way of example, RAM (Random Access Memory), flash memory, ROM (Read Only Memory), PROM (Programmable Read-Only Memory), EPROM (Erasable Programmable Read-Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), registers, magnetic disks, optical disks, hard drives, or any other suitable storage medium, or any combination thereof. The machine-readable media may be embodied in a computer-program product.
[0064] A software module may comprise a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media. The computer-readable media may comprise a number of software modules. The software modules include instructions that, when executed by an apparatus such as a processor, cause the processing system to perform various functions. The software modules may include a transmission module and a receiving module. Each software module may reside in a single storage device or be distributed across multiple storage devices. By way of example, a software module may be loaded into RAM from a hard drive when a triggering event occurs. During execution of the software module, the processor may load some of the instructions into cache to increase access speed. One or more cache lines may then be loaded into a general register file for execution by the processor. When referring to the functionality of a software module, it will be understood that such functionality is implemented by the processor when executing instructions from that software module.
[0065] The following claims are not intended to be limited to the embodiments shown herein, but are to be accorded the full scope consistent with the language of the claims. Within a claim, reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more. No claim element is to be construed under the provisions of 35 U.S.C. §112(f) unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for.” All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims.

Claims

WHAT IS CLAIMED IS:
1. A system for guiding intra-ocular lens (IOL) selection, the system comprising:
one or more processing devices; and
one or more memory devices operably coupled to the one or more processing devices, the one or more memory devices storing executable code that, when executed by the one or more processing devices, causes the one or more processing devices to:
receive patient attribute data;
receive a set of one or more IOL properties describing one or more IOLs;
for each IOL property of the one or more IOL properties:
process, using a machine learning model, the patient attribute data and the each IOL property to obtain an output parameter; and
calculate a predicted refractive error from a combination of the each IOL property and the output parameter; and
output the predicted refractive error for each IOL of the one or more IOLs.
2. The system of claim 1, wherein the combination of the each IOL property and the output parameter includes a difference between the output parameter and the IOL property.
3. The system of claim 2, wherein the combination of the each IOL property and the output parameter includes a difference between the output parameter and the IOL property, the difference being scaled by a parameter between 0.66 and 0.7.
4. The system of claim 2, wherein the patient attribute data includes dimensions of an eye of a patient.
5. The system of claim 4, wherein the dimensions of the eye of the patient include any of a white-to-white distance (WTW), axial length (AL), average cornea power (K), lens thickness (LT), and anterior chamber depth (ACD).
6. The system of claim 5, wherein the set of one or more IOL properties for the one or more IOLs includes a plurality of IOL properties for a plurality of IOLs; and wherein the executable code, when executed by the one or more processing devices, further causes the one or more processing devices to:
calculate an initial IOL property based on the dimensions of the eye of the patient; and
select the plurality of IOL properties based on the initial IOL property.
7. The system of claim 1, wherein the machine learning model includes one or more machine learning models selected from a group comprising:
K nearest neighbor model; multiple linear regression model; random sample consensus model; multiple polynomial regression model; support vector machine - radial basis function; and deep neural network model.
PCT/IB2023/055597 2022-06-30 2023-05-31 Machine learning system and method for intraocular lens selection WO2024003630A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202217855326A 2022-06-30 2022-06-30
US17/855,326 2022-06-30

Publications (1)

Publication Number Publication Date
WO2024003630A1 true WO2024003630A1 (en) 2024-01-04

Family

ID=87036246

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2023/055597 WO2024003630A1 (en) 2022-06-30 2023-05-31 Machine learning system and method for intraocular lens selection

Country Status (1)

Country Link
WO (1) WO2024003630A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190099262A1 * 2017-09-29 2019-04-04 John Gregory LADAS Systems, apparatuses, and methods for intraocular lens selection using artificial intelligence
US20190209242A1 (en) * 2018-01-05 2019-07-11 Novartis Ag Systems and methods for intraocular lens selection
US20210000542A1 (en) * 2018-07-12 2021-01-07 Alcon Inc. Systems and methods for intraocular lens selection
US20210059756A1 (en) * 2019-08-27 2021-03-04 Visuworks Method for determining lens and apparatus using the method



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23734731

Country of ref document: EP

Kind code of ref document: A1