CN118451452A - Prognosis model for predicting fibrosis development - Google Patents

Prognosis model for predicting fibrosis development

Info

Publication number
CN118451452A
CN118451452A
Authority
CN
China
Prior art keywords
image data
output
model
data
retinal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202280083690.4A
Other languages
Chinese (zh)
Inventor
J. Hernandez Sanchez
A. Maunz
S. Yu
B. Garcia Garcia
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
F Hoffmann La Roche AG
Original Assignee
F Hoffmann La Roche AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by F Hoffmann La Roche AG filed Critical F Hoffmann La Roche AG
Priority claimed from PCT/US2022/081817 external-priority patent/WO2023115007A1/en
Publication of CN118451452A publication Critical patent/CN118451452A/en
Pending legal-status Critical Current


Abstract

The present disclosure provides methods and systems for predicting the progression of fibrosis. Optical coherence tomography (OCT) image data may be received for a retina of a subject having neovascular age-related macular degeneration (nAMD). The OCT image data is processed using a model system that includes a machine learning model to generate a prediction output. A final output is generated based on the prediction output, wherein the final output is indicative of a risk of developing fibrosis in the retina.

Description

Prognosis model for predicting fibrosis development
Inventors: J. Hernandez Sanchez, A. Maunz, S. Yu, B. Garcia Garcia
Cross-reference to related applications
The present application claims priority from U.S. provisional patent application No. 63/330,756, entitled "Prognostic Models for Predicting Fibrosis Development", filed on April 13, 2022, and U.S. provisional patent application No. 63/290,628, entitled "Prognostic Models for Predicting Fibrosis Development", filed on December 16, 2021, each of which is incorporated herein by reference in its entirety.
Technical Field
The present disclosure relates generally to predicting fibrosis progression, and more particularly, to methods and systems for automated prediction of fibrosis progression using machine learning.
Background
Age-related macular degeneration (AMD) is still the most common cause of irreversible blindness in people over 50 years of age in developed countries. Neovascular AMD (nAMD) is an advanced form of AMD. The introduction of anti-vascular endothelial growth factor (anti-VEGF) therapies has significantly improved the prognosis of nAMD. However, a significant proportion of patients suffer from irreversible vision loss despite treatment. In many cases, this vision loss is caused by irreversible changes (e.g., fibrosis development).
Fibrosis is thought to be the result of an abnormal wound healing process and may be characterized by the deposition of collagen fibers that greatly alter the structure and function of the different retinal layers. However, the pathophysiology of retinal fibrosis is complex and not fully understood, which makes the development of specific therapies and identification of reliable biomarkers challenging. The currently available methods for detecting biomarkers that predict the progression of fibrosis involve manual assessment of images by human raters, making detection inaccurate, inefficient, and slower than desired.
Disclosure of Invention
In one or more embodiments, a method of predicting the progression of fibrosis is provided. Optical Coherence Tomography (OCT) image data can be received for a retina of a subject having neovascular age-related macular degeneration (nAMD). OCT image data is processed using a model system including a machine learning model to generate a prediction output. A final output is generated based on the predicted output, wherein the final output is indicative of a risk of developing fibrosis in the retina.
In one or more embodiments, a method for predicting the progression of fibrosis is provided. Optical Coherence Tomography (OCT) image data is received for a retina of a subject having neovascular age-related macular degeneration (nAMD). The OCT image data is segmented using a segmentation model to generate segmented image data. The segmented image data is processed using a deep learning model to generate a prediction output. A final output is generated based on the predicted output indicative of a risk of developing fibrosis in the retina.
In one or more embodiments, a method for predicting the progression of fibrosis is provided. At least one of clinical data or retinal feature data is received for a retina of a subject having neovascular age-related macular degeneration (nAMD). At least one of the clinical data or retinal feature data is processed using a regression model to generate a prediction output. Based on the predicted output, a final output is generated indicative of a risk of developing fibrosis in the retina.
In some embodiments, a system is provided that includes one or more data processors and a non-transitory computer-readable storage medium containing instructions that, when executed on the one or more data processors, cause the one or more data processors to perform a portion or all of one or more methods disclosed herein.
In some embodiments, a computer program product is provided that is tangibly embodied in a non-transitory machine-readable storage medium and includes instructions configured to cause one or more data processors to perform part or all of one or more methods disclosed herein.
Drawings
The present disclosure is described with reference to the accompanying drawings:
FIG. 1 is a block diagram of a prediction system 100 in accordance with one or more embodiments.
FIG. 2 is a block diagram of one example of an implementation of the model system from FIG. 1 in accordance with one or more embodiments.
FIG. 3 is a block diagram of one example of an implementation of the model system from FIG. 1 in accordance with one or more embodiments.
FIG. 4 is a flow diagram of a process for predicting the progression of fibrosis in accordance with one or more embodiments.
Fig. 5 is a flow diagram of a process for predicting fibrosis progression using OCT image data in accordance with one or more embodiments.
FIG. 6 is a flow diagram of a process for predicting the progression of fibrosis in accordance with one or more embodiments.
FIG. 7 is a flow diagram of a process for predicting the progression of fibrosis in accordance with one or more embodiments.
Fig. 8 is an OCT image in accordance with one or more embodiments.
FIG. 9 is a segmented image in accordance with one or more embodiments.
FIG. 10 is a table comparing statistics of feature-based models using clinical data in accordance with one or more embodiments.
Fig. 11 is a table comparing statistics of feature-based models using retinal features derived from OCT image data in accordance with one or more embodiments.
FIG. 12 is a table comparing statistics of a deep learning model using OCT image data and segmented image data in accordance with one or more embodiments.
FIG. 13 is a table comparing statistics of a deep learning model using OCT image data and segmented image data in combination with clinical data in accordance with one or more embodiments.
FIG. 14 is a block diagram illustrating a computer system in accordance with one or more embodiments.
In the drawings, similar components and/or features may have the same reference label. In addition, various components of the same type may be distinguished by following the reference label with a dash and a second label that distinguishes among the similar components. If only the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label, irrespective of the second reference label.
Detailed Description
I. Summary of the invention
The embodiments described herein recognize that it may be desirable for methods and systems for predicting the progression of fibrosis in a subject with neovascular age-related macular degeneration (nAMD) to be less invasive, more efficient, and/or faster than currently available methods and systems. The progression of fibrosis may include the onset of fibrosis and may include any sustained progression of fibrosis. Since fibrosis may lead to irreversible vision loss, and since there is currently no treatment specific to fibrosis that has already developed, it may be important to predict whether and when a subject receiving nAMD treatment, or a subject who will receive nAMD treatment, will develop fibrosis.
In general, choroidal neovascularization (CNV) type has been used as a prognostic biomarker for the development of fibrosis. Traditionally, CNV type and size are detected by manually observing dye leakage in images generated via Fluorescein Angiography (FA), a method that may also be referred to as Fundus Fluorescein Angiography (FFA). FA (or FFA) imaging is invasive, and imaging using such modalities may be more cumbersome than desired. For example, interpretation of FA images to detect fibrosis currently relies on a person with the necessary expertise or training.
Optical Coherence Tomography (OCT) imaging can be used to improve diagnosis and follow-up of nAMD patients at risk of fibrosis, as OCT imaging is less invasive. In addition to being less invasive, OCT images are also easier to acquire because less technician training may be required. Further, OCT imaging may be able to capture both qualitative and quantitative information. Thus, embodiments recognize that it may be desirable to have methods and systems for automated prediction of fibrosis progression via OCT images. Various morphological features found on OCT images have been associated with an increased risk of fibrosis development, including but not limited to: subretinal hyperreflective material (SHRM), subretinal fluid (SRF), pigment epithelial detachment (PED), and retinal thickness.
Accordingly, embodiments described herein provide methods and systems for automated prediction of fibrosis progression using OCT images and machine learning. The OCT image may be, for example, a baseline OCT image. In one or more embodiments, a deep learning model is used to process OCT images or segmented images derived from OCT images (e.g., segmentation masks) to predict fibrosis. These segmented images may be generated using a trained deep learning model. These deep learning models may provide similar or improved accuracy of fibrosis prediction as compared to manual assessment of CNV type and size via FA images by a human rater. Further, using these deep learning models to predict fibrosis may be easier, faster, and more efficient than using FA images or manual grading. Still further, using a deep learning model as described herein may be able to achieve improved prediction of fibrosis in a manner that reduces the amount of computational resources required.
In one or more embodiments, feature-based modeling is utilized to process retinal feature data extracted from segmented images to predict fibrosis. These segmented images may be generated by the same trained deep learning model as the segmented images discussed above for the deep learning model approach. These feature-based models may provide similar or improved accuracy of fibrosis prediction as compared to manual assessment of CNV type and size via FA images by a human rater. Further, using these feature-based models to predict fibrosis may be easier, faster, and more efficient than using FA images or manual grading. Still further, using a feature-based model as described herein may be able to achieve improved fibrosis prediction in a manner that reduces the amount of computational resources required.
In some embodiments, clinical data may be used in addition to OCT image data, segmented image data, and/or retinal feature data described above. The clinical data may be baseline clinical data including values of various clinical variables such as, but not limited to, age, visual acuity (e.g., visual acuity measurements such as best corrected visual acuity measurements (BCVA)) or CNV type determined from FA images.
In various embodiments, the machine learning model may process OCT images, segmented images, and/or retinal feature data to detect the presence of CNVs and classify the CNVs by their type. These machine learning models can detect the type of CNV with improved accuracy compared to manual assessment of FA images via human raters. Further, using a machine learning model to detect the type of CNV may reduce the amount of time and computing resources required to detect the type of CNV.
Automated fibrosis detection using the machine learning-based methods and systems described herein can help guide prognosis and help develop new therapeutic strategies for nAMD and/or fibrosis. Further, automated fibrosis prediction may allow for better stratification and selection of subjects for clinical trials to ensure richer and/or more accurate population selection for clinical trials. Still further, automated fibrosis prediction may be able to more accurately assess treatment responses. For example, predicting fibrosis progression using machine learning models (e.g., deep learning models and feature-based models) such as those described herein can help optimize the use of available medical resources and improve efficacy, thereby improving healthcare for the overall subject (e.g., patient).
Recognizing and considering the importance and utility of the improved methods and systems that may provide the above description, the embodiments described herein provide a machine learning model for improving the accuracy, speed, efficiency, and ease of predicting the progression of fibrosis in a subject diagnosed with and/or undergoing treatment for nAMD. Further, the methods and systems described herein may be able to predict fibrosis progression in a less invasive manner while also reducing the level of expertise or specialized training required to perform the prediction.
II. Exemplary System for predicting fibrosis progression in nAMD
II.A. System overview
Referring now to the drawings, FIG. 1 is a block diagram of a prediction system 100 in accordance with one or more embodiments. The predictive system 100 may be used to predict the progression of fibrosis in the eyes of a subject diagnosed with neovascular age-related macular degeneration (nAMD). In one or more embodiments, the predictive system 100 includes a computing platform 102, a data storage device 104, and a display system 106. Computing platform 102 may take various forms. In one or more embodiments, computing platform 102 includes a single computer (or computer system) or multiple computers in communication with each other. In other examples, computing platform 102 takes the form of a cloud computing platform, a mobile computing platform (e.g., a smartphone, a tablet, etc.), or a combination thereof.
The data store 104 and the display system 106 are each in communication with the computing platform 102. In some examples, the data store 104, the display system 106, or both may be considered part of or otherwise integral with the computing platform 102. Thus, in some examples, computing platform 102, data store 104, and display system 106 may be separate components that communicate with each other, but in other examples, some combinations of these components may be integrated together.
The prediction system 100 includes a fibrosis predictor 110, which may be implemented using hardware, software, firmware, or a combination thereof. In one or more embodiments, the fibrosis predictor 110 is implemented in the computing platform 102.
The fibrosis predictor 110 receives and processes the input data 112 to generate a final output 114. The final output 114 may be, for example, a binary classification indicating whether fibrosis progression is predicted. The indication may be related to the risk of developing fibrosis. For example, the binary classification may be a positive or negative prediction for fibrosis progression, or may be a high-risk or low-risk prediction. The prediction may be made for future time points (e.g., 1 month, 2 months, 3 months, 4 months, 6 months, 8 months, 12 months, 15 months, 24 months, etc.) or unspecified periods of time following the first dose or doses of treatment. In other examples, final output 114 may be a score indicating whether fibrosis progression is predicted. For example, a score equal to or above a selected threshold (e.g., a threshold between 0.4 and 0.9) may indicate a positive prediction of fibrosis progression, while a score below the selected threshold may indicate a negative prediction. In some cases, the score may be a probability value or a likelihood value of developing fibrosis.
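As a minimal illustration of this thresholding step, the sketch below maps a risk score to a binary prediction; the function name and the 0.5 default are illustrative assumptions, not part of the disclosure:

```python
def final_output_from_score(score: float, threshold: float = 0.5) -> dict:
    """Map a model risk score to a binary fibrosis prediction.

    `threshold` is a tunable operating point (the text above suggests a
    value between 0.4 and 0.9); 0.5 is an illustrative default.
    """
    positive = score >= threshold
    return {
        "score": score,
        "prediction": "high risk" if positive else "low risk",
    }

# Example: a score of 0.73 with the default threshold yields "high risk".
print(final_output_from_score(0.73))
```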
The input data 112 may be data of a subject that has been diagnosed with nAMD. The subject may have been previously treated with a nAMD treatment (e.g., an anti-VEGF therapy such as ranibizumab, an antibody therapy such as faricimab, or some other type of treatment). In other embodiments, the subject may not have been treated.
The input data 112 may include, for example, but is not limited to, at least one of Optical Coherence Tomography (OCT) image data 116, segmented image data 118, retinal feature data 120, clinical data 122, or a combination thereof. In one or more embodiments, the input data 112 includes at least one of Optical Coherence Tomography (OCT) image data 116, segmented image data 118, or retinal feature data 120, and optionally includes clinical data 122.
OCT image data 116 may include, for example, one or more raw OCT images that have not been preprocessed, or one or more OCT images that have been preprocessed using one or more normalization or standardization procedures. The OCT image may take the form of, but is not limited to, a time domain optical coherence tomography (TD-OCT) image, a spectral domain optical coherence tomography (SD-OCT) image, a two-dimensional OCT image, a three-dimensional OCT image, an OCT angiography (OCT-A) image, or a combination thereof. While SD-OCT (also referred to as Fourier domain OCT) may be mentioned with respect to the embodiments described herein, other types of OCT images are also contemplated for use with the methods and systems described herein. Accordingly, the description of embodiments with respect to images, image types, and techniques merely provides non-limiting examples of such images, image types, and techniques.
The segmented image data 118 may include one or more segmented images that have been generated via retinal segmentation. Retinal segmentation involves the detection and identification of one or more retinal (e.g., retina-related) elements in a retinal image. The segmented image identifies one or more retinal elements on the segmented image using one or more graphical indicators. The segmented image may be a representation of an OCT image identifying one or more retinal elements, or may be an OCT image on which one or more retinal elements have been identified.
For example, one or more color indicators, shape indicators, pattern indicators, shadow indicators, lines, curves, markers, tags, labels, text features, other types of graphical indicators, or combinations thereof may be used to identify one or more portions of an image (e.g., in terms of pixels) that have been identified as a retinal element. As a specific example, a group of pixels may be identified to capture a particular retinal fluid (e.g., intraretinal fluid or subretinal fluid). The segmented image may use a color indicator to identify the set of pixels. For example, each pixel in the set of pixels may be assigned a color that is specific to a particular retinal fluid, thereby assigning each pixel to a particular retinal fluid. As another example, a segmented image may identify a pixel group by applying a patterned region or shape (continuous or discontinuous) over the pixel group.
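For illustration, this kind of per-pixel color coding could be implemented as in the sketch below; the class indices and colors are hypothetical, since the text does not fix a particular label scheme:

```python
import numpy as np

# Hypothetical class indices and colors for retinal elements.
CLASS_COLORS = {
    1: (255, 0, 0),    # intraretinal fluid (IRF)
    2: (0, 0, 255),    # subretinal fluid (SRF)
    3: (0, 255, 0),    # subretinal hyperreflective material (SHRM)
    4: (255, 255, 0),  # pigment epithelial detachment (PED)
}

def colorize_mask(mask: np.ndarray) -> np.ndarray:
    """Turn an (H, W) array of class indices into an (H, W, 3) RGB image,
    assigning each pixel the color of its retinal element."""
    rgb = np.zeros((*mask.shape, 3), dtype=np.uint8)
    for class_id, color in CLASS_COLORS.items():
        rgb[mask == class_id] = color
    return rgb
```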
The retinal element may be composed of at least one of a retinal layer element or a retinal pathology element. The detection and identification of one or more retinal layer elements may be referred to as layer element (or retinal layer element) segmentation. The detection and identification of one or more pathological elements of the retina may be referred to as pathological element (or retinal pathological element) segmentation.
The retinal layer element may be, for example, a retinal layer or a boundary associated with a retinal layer. Examples of retinal layers include, but are not limited to, the inner limiting membrane (ILM) layer, retinal nerve fiber layer, ganglion cell layer, inner plexiform layer, outer nuclear layer, external limiting membrane (ELM) layer, photoreceptor layer, retinal pigment epithelium (RPE) layer, RPE detachment, Bruch's membrane (BM) layer, choriocapillaris layer, choroidal stroma layer, ellipsoid zone (EZ), and other types of retinal layers. In some cases, the retinal layer may be composed of one or more layers. As one example, the retinal layer may be the interface between the outer plexiform layer and the Henle fiber layer (OPL-HFL). The boundary associated with the retinal layer may be, for example, an inner boundary of the retinal layer, an outer boundary of the retinal layer, a boundary associated with a pathological feature of the retinal layer (e.g., an inner boundary or an outer boundary of a retinal layer detachment), or another type of boundary. For example, the boundary may be the inner boundary of an RPE detachment (IB-RPE), the outer boundary of an RPE detachment (OB-RPE), or another type of boundary.
The retinal pathological elements may include, for example, fluids (e.g., fluid pockets), cells, solid materials, or combinations thereof that evidence retinal pathology (e.g., a disease or disorder such as AMD or diabetic macular edema). For example, the presence of certain retinal fluids may be an indication of nAMD. Examples of retinal pathological elements include, but are not limited to, intraretinal fluid (IRF), subretinal fluid (SRF), fluid associated with pigment epithelial detachment (PED), hyperreflective material (HRM), subretinal hyperreflective material (SHRM), intraretinal hyperreflective material (IHRM), hyperreflective foci (HRF), retinal fluid pockets, drusen, and fibrosis. In some cases, the retinal pathological element may be a disruption (e.g., discontinuity, delamination, loss, etc.) of a retinal layer or retinal region. For example, the disruption may be to the ellipsoid zone, ELM, RPE, or another layer or region. The disruption may represent damage to or loss of cells (e.g., photoreceptors) in the disrupted area. In some examples, the retinal pathological element may be clear IRF, turbid IRF, clear SRF, turbid SRF, some other type of clear retinal fluid, some other type of turbid retinal fluid, or a combination thereof.
In one or more embodiments, the segmented image data 118 may have been generated via a deep learning model. The deep learning model may be composed of a convolutional neural network system composed of one or more neural networks. Each or at least one of these one or more neural networks may itself be a convolutional neural network.
Retinal feature data 120 may include, for example, but is not limited to, feature data extracted from segmented image data 118. For example, the feature data may be extracted for one or more retinal elements identified in the segmented image data 118. This feature data may include values for any number of features (e.g., quantitative features) or combinations thereof. These features may include pathology-related features, layer-related volumetric features, layer-related thickness features, or a combination thereof. Examples of features include, but are not limited to, maximum retinal layer thickness, minimum retinal layer thickness, average retinal layer thickness, maximum height of a boundary associated with a retinal layer, volume of a retinal fluid pocket, length of a fluid pocket, width of a fluid pocket, number of retinal fluid pockets, and number of hyperreflective foci. Thus, at least some of these features may be volumetric features. For example, the feature data may be derived for each selected OCT image (e.g., a single OCT B-scan) and then combined to form a full-volume value. In one or more embodiments, 1 to 200 features may be included in retinal feature data 120.
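A minimal sketch of how per-B-scan values might be derived from binary masks and combined into full-volume features, as just described; the function names, pixel scale, and voxel volume are illustrative assumptions:

```python
import numpy as np

def srf_features_per_bscan(srf_mask: np.ndarray, px_height_um: float) -> dict:
    """Feature values for one B-scan from a binary SRF mask of shape (H, W)."""
    column_heights = srf_mask.sum(axis=0) * px_height_um  # SRF height per A-scan
    return {
        "max_height_um": float(column_heights.max()),
        "area_px": int(srf_mask.sum()),
    }

def srf_features_full_volume(masks: list, px_height_um: float = 3.9,
                             voxel_volume_nl: float = 1e-4) -> dict:
    """Combine per-B-scan values into full-volume features; the pixel and
    voxel scales here are illustrative, not device-specific values."""
    per_scan = [srf_features_per_bscan(m, px_height_um) for m in masks]
    return {
        "max_height_um": max(f["max_height_um"] for f in per_scan),
        "volume_nl": sum(f["area_px"] for f in per_scan) * voxel_volume_nl,
    }
```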
Clinical data 122 may include, for example, but is not limited to, age, visual acuity measurements, Choroidal Neovascularization (CNV) type, or a combination thereof. The baseline visual acuity measurement may be, for example, a best corrected visual acuity (BCVA) measurement. The CNV type may be an identification of the type assessed based on Fluorescein Angiography (FA) image data. The CNV type may be, for example, occult CNV, predominantly classic CNV, minimally classic CNV, or retinal angiomatous proliferation (RAP). In some cases, "classic CNV" may be used as a type of CNV that encompasses both predominantly classic and minimally classic CNV. In some cases, the CNV type is identified based on a numbering scheme (e.g., type 1 refers to occult CNV, type 2 refers to classic CNV, and type 3 refers to RAP). In one or more embodiments, at least a portion of the clinical data 122 may be for a baseline point in time. For example, CNV type and/or BCVA may be obtained for a baseline time point. The baseline time point may be a time after nAMD diagnosis but just prior to treatment (e.g., prior to the first dose), a time period after the first dose of treatment (e.g., 6 months, 9 months, 12 months, 15 months, etc., after the first dose), or another type of baseline time point.
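For concreteness, the baseline clinical variables described above could be carried as a simple record; the sketch below assumes an integer encoding of CNV type following the numbering scheme just mentioned:

```python
from dataclasses import dataclass

@dataclass
class BaselineClinicalData:
    """Baseline clinical variables (illustrative encoding, not the
    patent's data format)."""
    age: float
    bcva: float       # best corrected visual acuity, e.g., in letters
    cnv_type: int     # 1 = occult, 2 = classic, 3 = RAP

record = BaselineClinicalData(age=74.0, bcva=58.0, cnv_type=2)
```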
The fibrosis predictor 110 uses a model system 124 to process the input data 112, which may include any one or more of the different types of data described above, and to generate the final output 114. Model system 124 may be implemented using different types of architectures. Model system 124 may include a set of machine learning models 126. One or more of the set of machine learning models 126 may receive the input data 112 (e.g., a portion or all of the input data 112) for processing. The data included in the input data 112 may vary based on the type of architecture used for the model system 124. Examples of the different types of architectures that may be used for the model system 124 and the different types of data that may be included in the input data 112 are described in more detail below in Sections II.B and II.C.
In one or more embodiments, the final output 114 may include other types of information. For example, in some cases, the final output 114 may include clinical trial advice, treatment advice, or both. The clinical trial recommendation may be a recommendation to incorporate or exclude the subject from the clinical trial. The treatment recommendation may be a recommendation to change the type of treatment, adjust the treatment regimen (e.g., frequency of injections, dose, etc.), or both.
At least a portion of the final output 114 or a graphical representation of at least a portion of the final output 114 may be displayed on the display system 106. In some embodiments, at least a portion of the final output 114 or a graphical representation of at least a portion of the final output 114 is sent to a remote device 128 (e.g., mobile device, laptop, server, cloud, etc.).
II.B. Fibrosis predictor using a deep learning model
FIG. 2 is a block diagram of one example of an implementation of the model system 124 from FIG. 1 in accordance with one or more embodiments. The model system 124 of FIG. 2 is described with continued reference to FIG. 1. Model system 124 includes a deep learning model 200, which may be one example of an implementation of a machine learning model in machine learning model set 126. The deep learning model 200 may receive model input 202 and generate prediction output 204.
In one or more embodiments, the model input 202 is formed using at least a portion of the input data 112 described above with respect to FIG. 1. In some embodiments, the model input 202 includes OCT image data 116. In other embodiments, the model input 202 includes at least a portion of the OCT image data 116 and clinical data 122 (e.g., baseline CNV type, baseline visual acuity measurement, age, or a combination thereof).
In some embodiments, the model input 202 includes segmented image data 118. In other embodiments, the model input 202 includes at least a portion of the segmented image data 118 and clinical data 122 (e.g., baseline CNV type, baseline visual acuity measurement, age, or a combination thereof).
The deep learning model 200 may be implemented using a binary classification model. In one or more embodiments, the deep learning model 200 is implemented using a convolutional neural network system, which may be comprised of one or more neural networks. Each or at least one of these one or more neural networks may itself be a convolutional neural network. In some embodiments, the deep learning model 200 is implemented using a ResNet-50 model (a 50-layer deep convolutional neural network) or a modified version of ResNet-50.
When model input 202 includes at least a portion of clinical data 122 in addition to OCT image data 116 or segmented image data 118, deep learning model 200 may use a modified form of convolutional neural network to concatenate a vector of clinical data (clinical variables) with the OCT image data 116 or segmented image data 118, respectively. As one example, when deep learning model 200 is implemented using ResNet-50, the first portion of deep learning model 200 includes ResNet-50 without its top layer. This first portion of the deep learning model 200 is used to generate a first intermediate output based on OCT image data 116 or segmented image data 118. The second portion of the deep learning model (e.g., a replacement for the top layer of ResNet-50) may include a custom dense layer portion (e.g., one or more dense layers). A set of vectors for clinical variables (e.g., baseline CNV type, baseline visual acuity, and/or baseline age) is concatenated with the first intermediate output generated by the first portion of the deep learning model 200 to form a second intermediate output. The second intermediate output is sent to the custom dense layer portion of the deep learning model 200. In some cases, the output of ResNet-50 in the first portion of the deep learning model 200 may be passed through an average pooling layer to form the first intermediate output.
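A sketch of this architecture in Keras follows; the input size, dense-layer width, and training configuration are illustrative assumptions, and the weights here are untrained, so this shows the wiring (ResNet-50 backbone without its top, average pooling, concatenation with the clinical vector, custom dense portion) rather than the trained model:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_fibrosis_model(image_shape=(384, 384, 3), n_clinical=3):
    image_in = layers.Input(shape=image_shape, name="oct_or_segmented_image")
    clinical_in = layers.Input(shape=(n_clinical,), name="clinical_variables")

    backbone = tf.keras.applications.ResNet50(include_top=False, weights=None)
    x = backbone(image_in)
    x = layers.GlobalAveragePooling2D()(x)       # first intermediate output
    x = layers.Concatenate()([x, clinical_in])   # second intermediate output
    x = layers.Dense(128, activation="relu")(x)  # custom dense layer portion
    out = layers.Dense(1, activation="sigmoid", name="fibrosis_risk")(x)
    return Model([image_in, clinical_in], out)

model = build_fibrosis_model()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
```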
The deep learning model 200 outputs a prediction output 204 based on the model input 202. The fibrosis predictor 110 can use the prediction output 204 to form the final output 114. For example, the predictive output 204 may be a likelihood that an eye of a subject diagnosed with nAMD will develop fibrosis. In one or more embodiments, the prediction output 204 is a binary classification that indicates whether fibrosis progression is predicted. In such examples, the final output 114 may include the predicted output 204. In other embodiments, the prediction output 204 takes the form of a score (e.g., a probability distribution value or likelihood value) that indicates whether fibrosis progression is predicted. In such examples, the final output 114 may include the predicted output 204 and/or the binary classification formed based on the scores. For example, the fibrosis predictor 110 can generate the final output 114 as a binary classification or indication based on whether the score generated by the deep learning model is above a selected threshold (e.g., a threshold between 0.4 and 0.9).
In some embodiments, the model system 124 may further include a segmentation model 206. Segmentation model 206 may receive OCT image data 116 as input and may generate segmented image data, such as segmented image data 118. The segmentation model 206 is used to automatically segment the OCT image data 116. Segmentation model 206 may include, for example, but is not limited to, a deep learning model. The segmentation model 206 may include, for example, one or more neural networks. In one or more embodiments, the segmentation model 206 takes the form of a U-Net.
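For illustration, applying such a pre-trained segmentation model to an OCT volume might look like the sketch below; the saved-model path and the input/output shapes are assumptions:

```python
import numpy as np
import tensorflow as tf

# `unet` stands in for a pre-trained U-Net segmentation model
# (hypothetical file; the text does not specify a model interface).
unet = tf.keras.models.load_model("oct_unet_segmentation.h5")

def segment_volume(oct_volume: np.ndarray) -> np.ndarray:
    """Segment each B-scan of an (N, H, W) OCT volume into per-pixel
    retinal element classes via the model's softmax output."""
    probs = unet.predict(oct_volume[..., np.newaxis])  # (N, H, W, n_classes)
    return probs.argmax(axis=-1)                       # (N, H, W) class indices
```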
The deep learning model 200 can be trained using training data 208 of subjects diagnosed with and undergoing treatment for nAMD. Training data 208 may include training clinical data 210 and training image data 212. The training image data 212 may include or be generated from OCT images at a future point in time after the start of treatment. For example, OCT images may have been generated at 6 month intervals, 9 month intervals, 12 month intervals, 24 month intervals, or some other time interval after the start of treatment. The progression of fibrosis at this future point in time can be assessed by a human rater.
II.C. Fibrosis predictor using feature-based models
FIG. 3 is a block diagram of one example of an implementation of the model system 124 from FIG. 1 in accordance with one or more embodiments. The model system 124 of FIG. 3 is described with continued reference to FIGS. 1 and 2. Model system 124 includes a feature-based model 300, which may be one example of an implementation of a machine learning model in machine learning model set 126. Feature-based model 300 may receive model input 302 and generate prediction output 304.
In one or more embodiments, the model input 302 is formed using a portion of the input data 112 described above with respect to FIG. 1. For example, model input 302 includes retinal feature data 120. In other embodiments, model input 302 includes at least a portion of retinal feature data 120 and clinical data 122 (e.g., baseline CNV type, baseline visual acuity measurement, age, or a combination thereof). In other embodiments, the model input 302 includes at least a portion of the clinical data 122 including the baseline CNV type and baseline visual acuity measurements, age, or both.
The feature-based model 300 may be a regression model (or algorithm). For example, the feature-based model 300 may be a logistic regression model, a linear regression model, or some other type of regression model. The feature-based model 300 may generate a prediction output 304 in the form of a score (e.g., a probability value or a likelihood value). A score exceeding a selected threshold (e.g., 0.5, 0.6, 0.7, or some other value between 0.4 and 0.9) may be determined to indicate progression of fibrosis. A score below the selected threshold may indicate that fibrosis is predicted not to develop.
In one or more embodiments, the feature-based model 300 may be a regression model trained using one or more regularization techniques to reduce overfitting. These regularization techniques may include Ridge regularization, Lasso regularization, Elastic Net regularization, or a combination thereof. For example, the number of features used in the feature-based model may be reduced to those features that have an importance above a threshold for the prediction output 304. In some cases, this type of training may simplify the feature-based model 300 and allow for shorter run times. For example, the Lasso regularization technique may be used to reduce the number of features used in the regression model and/or identify important features (e.g., those features that are most important to the predictions generated by the regression model). The Elastic Net regularization technique depends on both the total regularization amount (λ) and the mixing amount (α) between Lasso and Ridge regularization. The cross-validation strategy may include a 5-fold or 10-fold cross-validation strategy. The parameters α and λ may be selected to minimize the cross-validation deviance.
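A minimal sketch of this kind of regularized regression, using scikit-learn's Elastic Net-penalized logistic regression with cross-validated selection of the regularization strength and mixing parameter; the parameter grids, scoring choice, and synthetic data are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

model = make_pipeline(
    StandardScaler(),
    LogisticRegressionCV(
        penalty="elasticnet",
        solver="saga",             # required for the elasticnet penalty
        l1_ratios=[0.1, 0.5, 0.9], # mixing amount between Lasso and Ridge
        Cs=10,                     # grid of 10 regularization strengths
        cv=5,                      # 5-fold cross-validation
        scoring="neg_log_loss",    # roughly corresponds to deviance
        max_iter=5000,
    ),
)

# X: (n_subjects, n_features) retinal/clinical features; y: fibrosis labels.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 20)), rng.integers(0, 2, size=100)
model.fit(X, y)
print(model.predict_proba(X[:3])[:, 1])  # predicted fibrosis risk scores
```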
In one or more embodiments, model inputs 302 include three baseline clinical variables from clinical data 122: CNV type, BCVA, and age. In one or more embodiments, the model inputs 302 include, for each of the 1 mm and 3 mm foveal regions, an SHRM grade (e.g., graded according to a centralized grading scheme), a PED grade (e.g., graded according to a centralized grading scheme), and the maximum height of the SRF. In one or more embodiments, model inputs 302 include a maximum thickness between the OPL-HFL and the RPE, a thickness of the entire neural retina from the ILM layer to the RPE layer, or both. In one or more embodiments, the model inputs 302 include: baseline CNV type, baseline age, and baseline BCVA from clinical data 122; and central retinal thickness (CRT), subfoveal choroidal thickness (SFCT), PED grade, SRF maximum height, and SHRM grade from retinal feature data 120. In other embodiments, model inputs 302 include CRT, SFCT, PED, SRF, and SHRM.
In some embodiments, the model system 124 includes a segmentation model 206, a feature extraction model 306, or both. The segmentation model 206 may be the same pre-trained model as described in FIG. 2. The segmentation model 206 may be used to generate segmented image data 118 from OCT image data 116 provided in the model input 302. The feature extraction model 306, which may be one example of an implementation of a machine learning model in the set of machine learning models 126, may be used to generate retinal feature data 120 based on the segmented image data 118 included in the model input 302 or the segmented image data 118 generated by the segmentation model 206.
In one or more embodiments, the CNV type may be a type of feature included in the retinal feature data 120. For example, the CNV type may be determined by feature extraction model 306. In other embodiments, model system 124 includes CNV classifier 308. CNV classifier 308 may be one example of an implementation of a machine learning model in machine learning model set 126. For example, the CNV classifier 308 may include a machine learning model (e.g., a deep learning model including one or more neural networks) that is capable of detecting CNV type using OCT image data 116 in place of FA images. This CNV type may be referred to as a model-generated CNV type or an OCT-based CNV type. In some cases, the CNV type is sent directly from CNV classifier 308 to feature-based model 300 for processing.
The feature-based model 300 outputs a prediction output 304 based on a model input 302. The fibrosis predictor 110 can use the prediction output 304 to form the final output 114. For example, the prediction output 304 may be a likelihood that an eye of a subject diagnosed with nAMD will develop fibrosis. In one or more embodiments, the prediction output 304 is a binary classification that indicates whether fibrosis progression is predicted. In such examples, final output 114 may include the prediction output 304. In other embodiments, the prediction output 304 takes the form of a score (e.g., a probability distribution value or likelihood value) that indicates whether fibrosis progression is predicted. In such examples, the final output 114 may include the prediction output 304 and/or a binary classification formed based on the score. For example, the fibrosis predictor 110 can generate the final output 114 as a binary classification or indication based on whether the score generated by the feature-based model is above a selected threshold (e.g., a threshold between 0.4 and 0.9).
The feature-based model 300 can be trained using training data 208 of subjects diagnosed with and undergoing treatment for nAMD. Training data 208 may include the same training data as described for the deep learning model with respect to FIG. 2.
III. Exemplary methods for predicting fibrosis progression
FIG. 4 is a flow diagram of a process 400 for predicting the progression of fibrosis in accordance with one or more embodiments. In one or more embodiments, the process 400 may be implemented using the prediction system 100 described in fig. 1 and/or the fibrosis predictor 110 described in fig. 1-3. Process 400 includes various steps and may be described with continued reference to fig. 1-3. One or more steps not explicitly shown in fig. 4 may be included before, after, between, or as part of the steps of process 400. In some embodiments, process 400 may begin at step 402.
Step 402 includes receiving Optical Coherence Tomography (OCT) image data for a retina of a subject having neovascular age-related macular degeneration (nAMD). The OCT image data may be, for example, OCT image data 116 described with respect to fig. 1-3.
Step 404 includes processing the OCT image data using a model system that includes a machine learning model to generate a prediction output. The model system may be, for example, the model system 124 described with respect to fig. 1-3. The machine learning model may include, for example, the deep learning model 200 in fig. 2 or the feature-based model 300 in fig. 3. In some cases, the model system includes a segmentation model (e.g., segmentation model 206 in fig. 2-3). In some cases, the model system includes a feature extraction model (e.g., feature extraction model 306 in fig. 3). In some cases, the model system includes a CNV classifier (e.g., CNV classifier 308 in fig. 3).
The prediction output generated in step 404 may be, for example, prediction output 204 in fig. 2 or prediction output 304 in fig. 3. The predicted output may be a likelihood that the retina of a subject diagnosed with nAMD will develop fibrosis. In one or more embodiments, the predictive output is a binary classification indicating whether fibrosis progression is predicted. For example, a binary classification may indicate whether the risk of fibrosis developing is low or high. The prediction output may take the form of a score (e.g., a probability distribution value or a likelihood value) that indicates whether fibrosis progression is predicted.
The processing in step 404 may be performed in a variety of ways. In one or more embodiments, the machine learning model includes a deep learning model (e.g., at least one neural network, such as a convolutional neural network). The deep learning model can process the OCT image data and generate a prediction output. The deep learning model may be, for example, a binary classification model. The OCT image data may be raw OCT image data generated by an OCT imaging device, or may be a pre-processed version of the raw OCT image data (e.g., pre-processed via any number of normalization or standardization procedures).
In other embodiments, step 404 includes segmenting the OCT image data via a segmentation model (e.g., segmentation model 206 in FIGS. 2-3) to form segmented image data. The OCT image data may be raw OCT image data generated by an OCT imaging device, or may be a pre-processed version of the raw OCT image data (e.g., pre-processed via any number of normalization or standardization procedures). The segmented image data may then be processed by the deep learning model to generate the prediction output.
In other embodiments, the machine learning model in step 404 includes a feature-based model (e.g., feature-based model 300 in FIG. 3), and the model system may further include a feature extraction model (e.g., feature extraction model 306 in FIG. 3), CNV classifier 308, or both. The feature extraction model may receive segmented image data from the segmentation model and may use the segmented image data to extract retinal feature data (e.g., retinal feature data 120 in FIGS. 1 and 3). The retinal feature data may include at least one of a first feature value associated with at least one retinal layer element or a second feature value associated with at least one retinal pathological element. The retinal feature data may include, for example, 1 to 200 retinal features (or values of retinal features).
The machine learning model in step 404 may also be used to process clinical data (e.g., clinical data 122 in FIGS. 1-3) in addition to the OCT image data, segmented image data, or retinal feature data. The clinical data may include baseline clinical data. For example, the clinical data may include a baseline Choroidal Neovascularization (CNV) type, a baseline visual acuity measurement, or a baseline age. The baseline visual acuity measurement may be a baseline BCVA or some other type of visual acuity measurement.
When the machine learning model includes a deep learning model for processing OCT image data or segmented image data, the deep learning model may include, for example, a convolutional neural network (CNN) system, which may include ResNet-50 or a modified form of ResNet-50. In one or more embodiments, a first portion of the deep learning model (e.g., ResNet-50 without one or more top layers) is used to process the OCT image data or segmented image data to generate a first intermediate output. The second portion of the deep learning model (e.g., a replacement for one or more top layers of ResNet-50) may include a custom dense layer portion (e.g., one or more dense layers). A set of vectors for one or more clinical variables included in the clinical data may be concatenated with the first intermediate output to form a second intermediate output. The second intermediate output may be processed using the second portion of the deep learning model (the custom dense layer portion) to generate the prediction output.
Step 406 includes generating a final output indicative of a risk of developing fibrosis in the retina based on the predicted output. The final output, which may be the final output 114 in fig. 1-3, for example, may include a predicted output and/or a binary classification formed based on the predicted output. In some cases, the final output, which may be the final output 114 in fig. 1-3, may be a report that includes other information besides predicted output and/or binary classification. For example, the other information may include clinical trial advice to include or exclude subjects from a clinical trial based on a predicted output or binary classification. The information may include treatment recommendations that alter treatment type or adjust treatment regimen or both for the subject based on the prediction output or binary classification. In some embodiments, the final output includes at least a portion of the input used to generate the predicted output.
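Tying the steps of process 400 together, an end-to-end flow might look like the sketch below; the component callables are stand-ins for the segmentation model, feature extraction model, and machine learning model, and `final_output_from_score` is the hypothetical helper sketched earlier in this document:

```python
def predict_fibrosis_progression(oct_volume, clinical_vector,
                                 segmenter, feature_extractor, predictor,
                                 threshold=0.5):
    """Illustrative end-to-end flow for process 400 (all component names
    are assumptions, not the patented method)."""
    masks = segmenter(oct_volume)                 # segmentation model
    features = feature_extractor(masks)           # retinal feature data
    score = predictor(features, clinical_vector)  # prediction output
    return final_output_from_score(score, threshold)  # final output
```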
Fig. 5 is a flow diagram of a process 500 for predicting fibrosis progression using OCT image data in accordance with one or more embodiments. In one or more embodiments, the process 500 may be implemented using the prediction system 100 described in fig. 1 and/or the fibrosis predictor 110 described in fig. 1-2. Process 500 includes various steps and may be described with continued reference to fig. 1-2. One or more steps not explicitly shown in fig. 5 may be included before, after, between, or as part of the steps of process 500. In some embodiments, process 500 may begin at step 502. The process 500 in fig. 5 may be a more detailed version of the process 400 in fig. 4, particularly directed to generating a final output based on OCT image data.
Step 502 includes receiving Optical Coherence Tomography (OCT) image data for a retina of a subject having neovascular age-related macular degeneration (nAMD). The OCT image data may be, for example, OCT image data 116 described with respect to FIGS. 1-2. The OCT image data may be raw OCT image data generated by an OCT imaging device, or may be a pre-processed version of the raw OCT image data (e.g., pre-processed via any number of normalization or standardization procedures).
Step 504 includes processing the OCT image data using a deep learning model of the model system to generate a prediction output. For example, the deep learning model may be deep learning model 200 in fig. 2. In one or more embodiments, the deep learning model includes a binary classification model. The deep learning model may include a deep convolutional neural network.
The prediction output generated in step 504 may be the prediction output 204 in fig. 2. The predicted output may be a likelihood that the retina of a subject diagnosed with nAMD will develop fibrosis. In one or more embodiments, the predictive output is a binary classification indicating whether fibrosis progression is predicted. For example, the binary classification may indicate: low or high risk of fibrosis progression, positive or negative prediction of fibrosis progression, or other types of binary classification. The prediction output may take the form of a score (e.g., a probability distribution value or a likelihood value) that indicates whether fibrosis progression is predicted.
Step 506 includes generating a final output indicative of a risk of developing fibrosis in the retina based on the predicted output. The final output may be, for example, the final output 114 described with respect to fig. 1-2. The final output may be similar to the final output described with respect to step 406 in fig. 4.
In some embodiments, step 502 includes receiving clinical data (e.g., clinical data 122 in fig. 1 and 2) for processing. The clinical data may include at least one of a baseline CNV type, a baseline visual acuity measurement, or a baseline age. In these embodiments, when clinical data is received in step 502, step 504 may include processing both OCT image data and clinical data using a model system to generate a predictive output.
Fig. 6 is a flow diagram of a process 600 for predicting the progression of fibrosis in accordance with one or more embodiments. In one or more embodiments, the process 600 may be implemented using the prediction system 100 described in fig. 1 and/or the fibrosis predictor 110 described in fig. 1-2. Process 600 includes various steps and may be described with continued reference to fig. 1-2. One or more steps not explicitly shown in fig. 6 may be included before, after, between, or as part of the steps of process 600. In some embodiments, process 600 may begin at step 602. The process 600 in fig. 6 may be a more detailed version of the process 400 in fig. 4, particularly directed to generating a final output based on segmented image data.
Step 602 may optionally include receiving Optical Coherence Tomography (OCT) image data for a retina of a subject having neovascular age-related macular degeneration (nAMD). The OCT image data may be, for example, OCT image data 116 described with respect to FIGS. 1-2. The OCT image data may be raw OCT image data generated by an OCT imaging device, or may be a pre-processed version of the raw OCT image data (e.g., pre-processed via any number of normalization or standardization procedures).
Step 604 may optionally include segmenting the OCT image data using a segmentation model to generate segmented image data. The segmentation model may be, for example, the segmentation model 206 in FIG. 2. In one or more embodiments, the segmentation model includes a U-Net based architecture that is pre-trained on training OCT image data that includes OCT images annotated by human raters (e.g., certified raters). The segmentation model may be trained to automatically segment one or more retinal pathology elements (e.g., SHRM, SRF, PED, IRF, etc.), one or more retinal layer elements (e.g., ILM, OPL-HFL, RPE, BM, etc.), or both.
Step 606 may include receiving segmented image data at a deep learning model. For example, the deep learning model may be deep learning model 200 in fig. 2.
Step 608 may include processing the segmented image data using a deep learning model to generate a prediction output (e.g., prediction output 204 in fig. 2). The predicted output may be a likelihood that the retina of a subject diagnosed with nAMD will develop fibrosis. In one or more embodiments, the predictive output is a binary classification indicating whether fibrosis progression is predicted. For example, the binary classification may indicate: low or high risk of fibrosis progression, positive or negative prediction of fibrosis progression, or other types of binary classification. The prediction output may take the form of a score (e.g., a probability distribution value or a likelihood value) that indicates whether fibrosis progression is predicted.
Step 610 may include generating a final output indicative of a risk of developing fibrosis in the retina based on the predicted output. The final output may be, for example, the final output 114 described with respect to fig. 1-2. The final output may be similar to the final output described with respect to step 406 in fig. 4.
In some embodiments, step 602 includes receiving clinical data (e.g., clinical data 122 in fig. 1 and 2) for processing. The clinical data may include at least one of a baseline CNV type, a baseline visual acuity measurement, or a baseline age. In these embodiments, when clinical data is received in step 602, step 608 may include processing both the segmented image data and the clinical data using a deep learning model to generate a prediction output.
FIG. 7 is a flow diagram of a process 700 for predicting the progression of fibrosis in accordance with one or more embodiments. In one or more embodiments, process 700 may be implemented using prediction system 100 depicted in FIG. 1 and/or fibrosis predictor 110 depicted in FIGS. 1 and 3. Process 700 includes various steps and may be described with continued reference to FIGS. 1 and 3. One or more steps not explicitly shown in FIG. 7 may be included before, after, between, or as part of the steps of process 700. In some embodiments, process 700 may begin at step 702. The process 700 in FIG. 7 may be a more detailed version of the process 400 in FIG. 4, particularly directed to generating a final output based on retinal feature data.
Step 702 may optionally include receiving Optical Coherence Tomography (OCT) image data for a retina of a subject having neovascular age-related macular degeneration (nAMD). The OCT image data may be, for example, OCT image data 116 described with respect to FIGS. 1 and 3. The OCT image data may be raw OCT image data generated by an OCT imaging device, or may be a pre-processed version of the raw OCT image data (e.g., pre-processed via any number of normalization or standardization procedures).
Step 704 may optionally include segmenting the OCT image data using a segmentation model to generate segmented image data (e.g., segmented image data 118 in FIGS. 1 and 3). The segmentation model may be, for example, the segmentation model 206 in FIG. 3. In one or more embodiments, the segmentation model includes a U-Net based architecture that is pre-trained on training OCT image data that includes OCT images annotated by human raters (e.g., certified raters). The segmentation model may be trained to automatically segment one or more retinal pathology elements (e.g., SHRM, SRF, PED, IRF, etc.), one or more retinal layer elements (e.g., ILM, OPL-HFL, RPE, BM, etc.), or both.
Step 706 optionally includes extracting retinal feature data from the segmented image data via a feature extraction model. The feature extraction model may be, for example, feature extraction model 306 in fig. 3. The feature extraction model may receive segmented image data from the segmentation model and may use the segmented image data to extract retinal feature data (e.g., retinal feature data 120 in fig. 1 and 3) from the segmented image data. The retinal feature data may include at least one of a first feature value associated with at least one retinal layer element or a second feature value associated with at least one retinal pathological element. The retinal feature data may include, for example, 1 to 200 retinal features (or values of retinal features).
Step 708 may optionally include identifying a Choroidal Neovascularization (CNV) type using a CNV classifier (e.g., CNV classifier 308). The CNV classifier may be implemented using, for example, but not limited to, a deep learning model that uses OCT image data to detect and identify the CNV type. The CNV type may be a model-generated CNV type, which may differ from a baseline CNV type included in the clinical data (e.g., where the baseline CNV type is determined by a human rater based on FA image data).
Step 710 includes receiving at least one of retinal feature data, clinical data, or a CNV type for processing. The CNV type in step 710 may be the model-generated CNV type identified in step 708. The retinal feature data may be the retinal feature data generated in step 706. The clinical data may be, for example, clinical data 122 in FIGS. 1 and 3. The clinical data may include at least one of a baseline CNV type, a baseline visual acuity measurement, or a baseline age.
Step 712 includes processing at least one of the clinical data, the retinal feature data, or the CNV type using a feature-based model to generate a prediction output. The feature-based model may be, for example, feature-based model 300 in fig. 3. The feature-based model may include, for example, a regression model. The CNV type in step 712 may be the model-generated CNV type identified in step 708.
The predicted output may be a likelihood that the retina of a subject diagnosed with nAMD will develop fibrosis. In one or more embodiments, the predictive output is a binary classification indicating whether fibrosis progression is predicted. For example, the binary classification may indicate: low or high risk of fibrosis progression, positive or negative prediction of fibrosis progression, or other types of binary classification. The prediction output may take the form of a score (e.g., a probability distribution value or a likelihood value) that indicates whether fibrosis progression is predicted.
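A trivial sketch of mapping such a score to a binary classification follows; the default cutoff and labels are placeholders (a validated cutoff, e.g., one selected via the Youden index as discussed in section V.C below, may be used instead).

```python
def to_binary(score: float, cutoff: float = 0.5) -> str:
    """Map a prediction score (e.g., predicted probability of fibrosis
    development) to a binary classification; 0.5 is a placeholder
    rather than a validated operating point."""
    return "high risk" if score >= cutoff else "low risk"
```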
Step 714 includes generating a final output indicative of a risk of developing fibrosis in the retina based on the predicted output. The final output may be, for example, the final output 114 described with respect to fig. 1 and 3. The final output may be similar to the final output described with respect to step 406 in fig. 4.
IV. Exemplary Images
Fig. 8 is an OCT image in accordance with one or more embodiments. OCT image 800 is one example of an implementation of an OCT image that may be included in the OCT image data 116 described in sections II.A and II.B above. OCT image 800 may be a single OCT B-scan. In one or more embodiments, OCT image 800 can be processed as part of model input 202 for deep learning model 200 in fig. 2.
FIG. 9 is a segmented image in accordance with one or more embodiments. Segmented image 900 is one example of an implementation of a segmented image that may be included in the segmented image data 118 described in sections II.A and II.B above. Segmented image 900 may be a representation of an OCT image (e.g., OCT image 800 in fig. 8) with a plurality of masks 902 superimposed on the representation. In other examples, segmented image 900 may be the OCT image itself (e.g., OCT image 800 in fig. 8) with the plurality of masks 902 superimposed on it.
Here, the plurality of masks 902 identify various retinal elements. These retinal elements may include, but are not limited to, intraretinal fluid (IRF), subretinal fluid (SRF), subretinal highly reflective material (SHRM), pigment epithelial detachment (PED), the interface between the Inner Limiting Membrane (ILM) layer and the External Limiting Membrane (ELM) layer, the interface between the ILM layer and the Retinal Pigment Epithelium (RPE) layer, and the interface between the RPE layer and Bruch's Membrane (BM).
V. Exemplary Training and Validation of Machine Learning Models
V.A. Exemplary Data
Various machine learning models were trained and their performance evaluated. Training used data obtained from clinical trials and/or training data generated based on data obtained from clinical trials. In particular, 935 eyes were selected from the 1097 untreated eyes of nAMD subjects enrolled in the randomized, multicenter, phase 3 HARBOR trial. These nAMD subjects received 0.5 mg or 2.0 mg of ranibizumab monthly or as needed over 12 months. In the HARBOR trial, CNV type was classified from FA images as occult CNV (e.g., having occult CNV lesions), predominantly classic CNV, or minimally classic CNV. In the HARBOR trial, the presence of fibrosis was assessed on day 0 and at months 3, 6, 12, and 24.
The 935 selected eyes were those with a definitive fibrosis assessment at month 12 and available baseline OCT image data. The OCT image data included a baseline OCT volume scan for each eye.
To train the deep learning models, five equally spaced B-scans were selected from each of the 935 OCT volume scans, covering 1.44 mm of the central macula. Specifically, of the 128 B-scans in each volume, scans 49, 56, 63, 70, and 77 were selected. The first deep learning model was trained using the raw OCT B-scans. The second deep learning model was trained using segmented images generated based on the raw OCT B-scans. Random horizontal and vertical flipping, scaling, rotation, and shearing were used to augment the data, giving a total of 30,000 samples.
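The selection and augmentation just described might look like the following sketch; only the flips are implemented here, and omitting rotation, scaling, and shearing is a simplification of this sketch, not of the experiment.

```python
import numpy as np

SELECTED = [49, 56, 63, 70, 77]   # the five central B-scans noted above

def augmented_samples(volume: np.ndarray, rng: np.random.Generator):
    """Yield randomly flipped copies of the selected B-scans from one
    128-slice OCT volume of shape (128, H, W). Rotation, scaling, and
    shearing are omitted to keep the sketch dependency-free."""
    for idx in SELECTED:
        img = volume[idx]
        if rng.random() < 0.5:
            img = img[:, ::-1]   # random horizontal flip
        if rng.random() < 0.5:
            img = img[::-1, :]   # random vertical flip
        yield img
```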
The OCT volume scans were segmented using a pre-trained segmentation model (e.g., one example of an implementation of the segmentation model 206 in figs. 2-3). The segmentation model was pre-trained based on annotations made by certified raters. The segmentation model was trained to automatically segment 4 retinal pathology elements (SHRM, SRF, PED, and IRF) and 5 retinal layer elements (ILM, the OPL-HFL interface, the inner and outer boundaries of the RPE, and BM). For each OCT volume scan, each element was segmented at three topographic locations (e.g., circles of 1 mm, 3 mm, and 6 mm in diameter).
Based on the segmented image data, a feature extraction model (e.g., one example of an implementation of feature extraction model 306 in fig. 3) was used to extract retinal feature data. The feature extraction model automatically extracted 105 quantitative retinal features. In particular, these retinal features included 36 volume-related features (e.g., 4 retinal pathology elements for each of 3 readout variants for each of the 3 topographic locations), 15 layer-related volume features (e.g., 5 pairs of layers for each of the 3 topographic locations), and 54 layer-related thickness features (e.g., 6 pairs of layers for each of 3 readout variants for each of the 3 topographic locations). All features were derived for each individual B-scan of an OCT volume scan and then combined to form full-volume measurements.
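How per-B-scan features could be combined into full-volume measurements is sketched below; the specific combination rules (summing area-like features as a volume proxy, taking maxima of the rest) are assumptions made for illustration.

```python
import numpy as np

def combine_bscan_features(per_bscan: list) -> dict:
    """Combine per-B-scan feature dictionaries into full-volume
    measurements. Combination rules are assumptions: area-like
    features are summed, all others take the maximum over B-scans."""
    combined = {}
    for key in per_bscan[0]:
        values = np.array([features[key] for features in per_bscan])
        combined[key] = values.sum() if key.endswith("_area") else values.max()
    return combined
```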
V.B. Training of Machine Learning Models
The presence of fibrosis at month 12 was defined as the outcome for training and validating the models. Folds were predefined to conduct five-fold cross-validation at the subject level, ensuring that the outcome variable was stratified across folds. This was repeated ten times, resulting in 10 repetitions of 5 folds each, for a total of 50 training/testing splits. Each model was always trained on the training set and then used to predict the test set. All 50 splits were validated for the feature-based model (e.g., an example of an implementation of feature-based model 300 in fig. 3), while only the five splits of the first repetition were validated for the deep learning model (e.g., an example of an implementation of deep learning model 200 in fig. 2) in order to limit computational effort.
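The split scheme described above can be reproduced with scikit-learn's RepeatedStratifiedKFold, as in the following sketch; the stand-in data and classifier are illustrative, assuming one row per eye.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(935, 105))    # stand-in feature matrix, one row per eye
y = rng.integers(0, 2, size=935)   # stand-in month-12 fibrosis outcome

# 10 repeats of stratified 5-fold CV gives the 50 train/test splits
# described above; stratification balances the outcome across folds.
rskf = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
for train_idx, test_idx in rskf.split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    test_scores = clf.predict_proba(X[test_idx])[:, 1]   # predicted risk
```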
For the feature-based approach, feature-based models (e.g., logistic regression models) were fitted to various feature configurations using Lasso regularization. Combinations of selected OCT-derived quantitative retinal features with three baseline clinical variables (CNV type, BCVA, and age) were used. When OCT-derived quantitative retinal features were used, the regularization strength was set to a constant, high value.
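A hedged sketch of such a Lasso-regularized logistic regression over concatenated OCT-derived and clinical features follows; the penalty strength C, the feature dimensions, and the scaling step are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X_oct = rng.normal(size=(935, 105))    # stand-in OCT-derived features
X_clin = rng.normal(size=(935, 5))     # stand-in encoded CNV type, BCVA, age
y = rng.integers(0, 2, size=935)
X = np.hstack([X_oct, X_clin])

# L1 (Lasso) penalty; the small C reflects the constant, high
# regularization strength mentioned above (exact value assumed).
lasso_logit = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.05),
)
lasso_logit.fit(X, y)
```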
For the deep learning models (e.g., convolutional neural networks), a ResNet-50 architecture pre-trained on ImageNet was used. The architecture was either adapted by replacing the top layer with a custom dense portion, allowing a vector of clinical variables to be concatenated with the OCT image features, or used as-is when clinical data were not used. Twenty epochs of transfer learning were applied with the underlying ResNet-50 layers frozen, followed by 40 or 120 epochs of fine-tuning the entire network on either the segmented image data or the raw OCT image data, each with and without clinical data.
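The two-phase training described above might be expressed in Keras roughly as follows; the input size, the width of the custom dense portion, the learning rates, and the clinical-vector dimension are all illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import Model, layers

# Sketch of the two-input architecture described above.
base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(224, 224, 3))
base.trainable = False                      # phase 1: transfer learning

clin_in = layers.Input(shape=(5,), name="clinical")   # e.g., CNV type, BCVA, age
x = layers.Concatenate()([base.output, clin_in])      # join image + clinical
x = layers.Dense(64, activation="relu")(x)            # custom dense portion
out = layers.Dense(1, activation="sigmoid")(x)        # fibrosis risk score
model = Model([base.input, clin_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy")
# model.fit(..., epochs=20)                 # 20 epochs with base frozen

base.trainable = True                       # phase 2: fine-tune everything
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="binary_crossentropy")
# model.fit(..., epochs=40)                 # 40 (or 120) further epochs
```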
V.C. Model Performance
For baseline comparison, feature-based models were constructed using clinical data only (e.g., one using baseline CNV type only, one using Baseline Visual Acuity (BVA) and age only, and one using baseline CNV type, BVA, and age). Performance was evaluated using the area under the receiver operating characteristic curve (AUC), comparing predicted against observed event occurrence. In addition, the Youden index was applied to the ROC curve to select cutoff points, at which positive and negative predictive values of the model predictions were reported. Specificity and sensitivity were also assessed.
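The AUC and Youden-index cutoff selection can be sketched with scikit-learn as follows; returning sensitivity and specificity at the selected cutoff mirrors the reporting described above.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def auc_and_youden(y_true, y_score):
    """Return AUC plus the Youden-index cutoff (maximizing
    sensitivity + specificity - 1) and the sensitivity and
    specificity implied at that cutoff."""
    auc = roc_auc_score(y_true, y_score)
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    best = int(np.argmax(tpr - fpr))          # Youden's J per threshold
    return auc, thresholds[best], tpr[best], 1.0 - fpr[best]
```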
FIG. 10 is a table 1000 comparing statistics of feature-based models using clinical data in accordance with one or more embodiments. As shown in table 1000, based on average AUC, the feature-based model using baseline CNV type alone and the feature-based model using baseline CNV type together with BVA and age performed best. However, the specificity of the feature-based model using baseline CNV type alone was lower than that of the feature-based model using baseline CNV type together with BVA and age.
Fig. 11 is a table 1100 comparing statistics of feature-based models using retinal features derived from OCT image data in accordance with one or more embodiments. As shown in table 1100, based on average AUC, the first feature-based model, which used OCT-derived retinal features to predict fibrosis progression, and the second feature-based model, which used OCT-derived retinal features together with BVA and age, performed similarly to the feature-based model using baseline CNV type (shown in fig. 10). Although not shown in table 1100, adding baseline CNV type to the first or second feature-based model increased the average AUC to 0.809 and 0.821, respectively. These results indicate that feature-based models using OCT-derived retinal features can accurately and reliably predict fibrosis progression.
FIG. 12 is a table 1200 comparing statistics of deep learning models using OCT image data and segmented image data in accordance with one or more embodiments. As shown in table 1200, the average AUC of the deep learning model using segmented images to predict fibrosis progression was slightly higher than the average AUC of the deep learning model using OCT image data. Further, both deep learning models performed similarly, in terms of average AUC, to the feature-based model using baseline CNV type (shown in fig. 10).
FIG. 13 is a table 1300 comparing statistics of deep learning models using OCT image data and segmented image data in combination with clinical data in accordance with one or more embodiments. As shown in table 1300, adding clinical data (e.g., BVA, age, and baseline CNV type) increased the average AUC of the deep learning model using segmented image data by a larger margin than it increased the average AUC of the deep learning model using OCT image data.
VI. Computer-Implemented System
FIG. 14 is a block diagram illustrating a computer system in accordance with various embodiments. Computer system 1400 may be one example of an implementation of computing platform 102 in fig. 1. In various embodiments of the present teachings, computer system 1400 may include a bus 1402 or other communication mechanism for communicating information, and a processor 1404 coupled with bus 1402 for processing information. In various embodiments, computer system 1400 may also include a memory, which may be Random Access Memory (RAM) 1406 or other dynamic storage device, coupled to bus 1402 for storing information and instructions to be executed by processor 1404. The memory may also be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 1404. In various embodiments, computer system 1400 may further include a Read Only Memory (ROM) 1408 or other static storage device coupled to bus 1402 for storing static information and instructions for processor 1404. A storage device 1410, such as a magnetic disk or optical disk, may be provided and coupled to bus 1402 for storing information and instructions.
In various embodiments, computer system 1400 may be coupled via bus 1402 to a display 1412, such as a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD), for displaying information to a computer user. An input device 1414, including alphanumeric and other keys, may be coupled to bus 1402 for communicating information and command selections to processor 1404. Another type of user input device is cursor control 1416, such as a mouse, a trackball, or cursor direction keys, for communicating direction information and command selections to processor 1404 and for controlling cursor movement on display 1412. Cursor control 1416 typically has two degrees of freedom in two axes, a first axis (i.e., x) and a second axis (i.e., y), that allow the device to specify positions in a plane. However, it should be understood that input devices allowing 3-dimensional (x, y, and z) cursor movement are also contemplated herein.
Consistent with certain implementations of the present teachings, the results may be provided by computer system 1400 in response to processor 1404 executing one or more sequences of one or more instructions contained in RAM 1406. Such instructions may be read into RAM 1406 from another computer-readable medium or computer-readable storage medium, such as storage device 1410. Execution of the sequences of instructions contained in RAM 1406 can cause processor 1404 to perform the processes described herein. Alternatively, hardwired circuitry may be used in place of or in combination with software instructions to implement the present teachings. Thus, implementations of the present teachings are not limited to any specific combination of hardware circuitry and software.
The term "computer-readable medium" (e.g., data storage device, etc.) or "computer-readable storage medium" as used herein refers to any medium that participates in providing instructions to processor 1404 for execution. Such a medium may take many forms, including, but not limited to, non-volatile media, volatile media, and transmission media. Examples of non-volatile media may include, but are not limited to, optical disks, solid state disks, and magnetic disks (such as storage device 1410). Examples of volatile media may include, but are not limited to, dynamic memory such as RAM 1406. Examples of transmission media may include, but are not limited to, coaxial cables, copper wire, and fiber optics, including the wires that comprise bus 1402.
Common forms of computer-readable media include, for example: a floppy disk, a flexible disk, a hard disk, magnetic tape, or any other magnetic medium; a CD-ROM or any other optical medium; punch cards, paper tape, or any other physical medium with a pattern of holes; RAM, PROM, EPROM, FLASH-EPROM, or any other memory chip or cartridge; or any other tangible medium from which a computer can read.
In addition to computer-readable media, instructions or data may also be provided as signals on a transmission medium included in a communication device or system to provide one or more sequences of instructions to the processor 1404 of the computer system 1400 for execution. For example, the communication device may include a transceiver with signals indicative of instructions and data. The instructions and data are configured to cause one or more processors to implement the functions outlined in the disclosure herein. Representative examples of data communication transmission connections may include, but are not limited to, telephone modem connections, Wide Area Networks (WANs), Local Area Networks (LANs), infrared data connections, NFC connections, and the like.
It should be appreciated that the methods described herein (including the flowchart, figures, and accompanying disclosure) can be implemented using the computer system 1400 as a stand-alone device or on a distributed network of shared computer processing resources, such as a cloud computing network.
The methods described herein may be implemented in a variety of ways, depending on the application. For example, the methods may be implemented in hardware, firmware, software, or any combination thereof. For a hardware implementation, the processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), processors, controllers, micro-controllers, microprocessors, electronic devices, other electronic units designed to perform the functions described herein, or a combination thereof.
In various embodiments, the methods of the present teachings may be implemented as firmware and/or software programs and as applications written in conventional programming languages such as C, C++, Python, and the like. If implemented as firmware and/or software, the embodiments described herein may be implemented on a non-transitory computer-readable medium having stored therein a program for causing a computer to perform the methods described above. It is to be appreciated that the various engines described herein may be provided on a computer system, such as computer system 1400, wherein processor 1404 performs the analyses and determinations provided by these engines in accordance with instructions provided by any one or a combination of the memory components RAM 1406, ROM 1408, or storage device 1410, as well as user input provided via input device 1414.
VII. Exemplary Definitions and Contexts
The present disclosure is not limited to the exemplary embodiments and applications described herein nor to the manner in which the exemplary embodiments and applications operate or are described herein. Furthermore, the figures may show simplified or partial views, and the dimensions of elements in the figures may be exaggerated or not to scale.
Unless defined otherwise, scientific and technical terms used in connection with the present teachings described herein shall have the meanings commonly understood by one of ordinary skill in the art. Furthermore, unless the context requires otherwise, singular terms shall include the plural and plural terms shall include the singular. Generally, nomenclature and techniques employed in connection with chemistry, biochemistry, molecular biology, pharmacology, and toxicology are described herein, which are those well known and commonly employed in the art.
In addition, when the terms "on," "attached," "connected," "coupled," or the like are used herein, one element (e.g., component, material, layer, substrate, etc.) may be "on," "attached to," "connected to," or "coupled to" another element, whether the one element is directly on, attached to, connected to, or coupled to the other element or there are one or more intervening elements between the one element and the other element. In addition, where a list of elements (e.g., elements a, b, c) is referred to, such reference is intended to include any one of the listed elements by itself, any combination of fewer than all of the listed elements, and/or a combination of all of the listed elements. The division of the specification into sections is merely for ease of review and does not limit any combination of the elements discussed.
The term "subject" may refer to a subject in a clinical trial, a person undergoing treatment, a person undergoing anti-cancer treatment, a person undergoing remission or recovery monitoring, a person undergoing prophylactic health analysis (e.g., due to its medical history), or any other person or patient of interest. In various instances, "subject" and "patient" may be used interchangeably herein.
As used herein, "substantially" means sufficient to achieve the intended purpose. Thus, the term "substantially" allows minor, insignificant changes to absolute or ideal conditions, dimensions, measurements, results, etc., as would be expected by one of ordinary skill in the art, without significantly affecting overall performance. When used with respect to a numerical value or a parameter or characteristic that may be expressed as a numerical value, substantially means within ten percent.
As used herein, the term "about" as used with respect to a numerical value or a parameter or feature that may be expressed as a numerical value means within ten percent of the numerical value. For example, "about 50" means a value in the range of 45 to 55, inclusive.
The term "ones" means more than one.
The term "plurality" as used herein may be 2,3, 4,5, 6, 7, 8, 9, 10 or more.
As used herein, the term "set" refers to one or more. For example, a group of items includes one or more items.
As used herein, the phrase "at least one of," when used with a list of items, means that different combinations of one or more of the listed items can be used, and that only one item in the list may be needed. An item may be a particular object, thing, step, operation, process, or category. In other words, "at least one of" refers to any combination of items or number of items that may be used from the list, but not all of the items in the list are required. For example, and without limitation, "at least one of item A, item B, or item C" means item A; item A and item B; item B; item A, item B, and item C; item B and item C; or item A and item C. In some cases, "at least one of item A, item B, or item C" means, but is not limited to, two of item A, one of item B, and ten of item C; four of item B and seven of item C; or some other suitable combination.
As used herein, a "model" may include one or more algorithms, one or more mathematical techniques, one or more machine learning algorithms, or a combination thereof.
As used herein, "machine learning" may include the practice of using algorithms to parse data, learn from the data, and then make determinations or predictions of something in the world. Machine learning can use algorithms that can learn from data without relying on rule-based programming. Deep learning may be a form of machine learning.
As used herein, an "artificial neural network" or "neural network" (NN) may refer to a mathematical algorithm or computational model that emulates a set of interconnected artificial neurons, which process information based on a connectionist approach to computation. A neural network (which may also be referred to as an artificial neural network) may use one or more layers of nonlinear units to predict an output for a received input. Some neural networks may include one or more hidden layers in addition to the output layer. The output of each hidden layer may be used as input to the next layer in the network, i.e., the next hidden layer or the output layer. Each layer of the network generates an output from a received input based on the current values of a respective set of parameters. In various embodiments, a reference to a "neural network" may be a reference to one or more neural networks.
A neural network can process information in two ways: it is in a training mode when it is being trained, and in an inference (or prediction) mode when it puts the learned knowledge into practice. A neural network may learn through a feedback process (e.g., backpropagation) that allows the network to adjust the weight factors of (i.e., modify the behavior of) the various nodes in the intermediate hidden layers so that its output matches the output in the training data. In other words, a neural network learns by being fed training data (learning examples) and eventually learns how to produce the correct output even when presented with a new range or set of inputs. A neural network may include at least one of, for example, but not limited to, a feedforward neural network (FNN), a recurrent neural network (RNN), a modular neural network (MNN), a convolutional neural network (CNN), a residual neural network (ResNet), a neural ordinary differential equation network (neural-ODE), a U-Net, a fully convolutional network (FCN), a stacked FCN using multi-channel learning, a squeeze-and-excitation embedded neural network, a MobileNet, or another type of neural network.
As used herein, "deep learning" may refer to the use of multiple layers of artificial neural networks to automatically learn a representation from input data (such as images, video, text, etc.) without human provided knowledge to provide highly accurate predictions in tasks such as object detection/recognition, speech recognition, language translation, etc.
VIII. Description of Exemplary Embodiments
Example 1: a method, comprising: receiving Optical Coherence Tomography (OCT) image data for a retina of a subject having neovascular age-related macular degeneration (nAMD); processing the OCT image data using a model system including a machine learning model to generate a prediction output; and generating a final output indicative of a risk of developing fibrosis in the retina based on the predicted output.
Example 2: the method of embodiment 1 wherein the machine learning model comprises a deep learning model, and wherein the processing comprises: segmenting OCT image data via a segmentation model comprising at least one neural network to form segmented image data; and processing the segmented image data using a deep learning model of the model system to generate a prediction output.
Example 3: the method of embodiment 2 wherein the machine learning model comprises a regression model, and wherein the processing further comprises: extracting retinal feature data from the segmented image data via a feature extraction model, wherein the retinal feature data includes at least one of a first feature value associated with at least one retinal layer element or a second feature value associated with at least one retinal pathological element; and processing the OCT image data using the regression model to generate a prediction output.
Example 4: the method of any one of embodiments 1 through 3 wherein the machine learning model comprises at least one convolutional neural network.
Example 5: the method of any of embodiments 1 through 4, wherein the machine learning model comprises a deep learning model, and wherein the processing comprises: the OCT image data and clinical data are processed using a deep learning model to generate a predictive output, wherein the clinical data includes at least one of a baseline Choroidal Neovascularization (CNV) type, a baseline visual acuity measurement, or a baseline age.
Example 6: the method of embodiment 5 wherein the deep learning model comprises a Convolutional Neural Network (CNN) system, wherein a first portion of the CNN system comprises a convolutional neural network and a second portion of the CNN system comprises a custom dense layer portion, and wherein the processing of the OCT image data and the clinical data comprises: processing OCT image data using a first portion of the CNN system to generate a first intermediate output; connecting a set of vectors for clinical data to the first intermediate output to form a second intermediate output; and processing the second intermediate output using the custom dense layer portion to generate a predicted output.
Example 7: the method of any one of embodiments 1-6, wherein the final output comprises at least one of: a binary classification indicating whether fibrosis progression is predicted; clinical trial advice to include or exclude subjects from a clinical trial based on a predicted output or binary classification; or alter a treatment recommendation for at least one of the subject of the treatment type or adjust the treatment regimen based on the prediction output or the binary classification.
Example 8: a method, comprising: receiving Optical Coherence Tomography (OCT) image data for a retina of a subject having neovascular age-related macular degeneration (nAMD); segmenting the OCT image data using a segmentation model to generate segmented image data; processing the segmented image data using a deep learning model to generate a prediction output; and generating a final output indicative of a risk of developing fibrosis in the retina based on the predicted output.
Example 9: the method of embodiment 8 wherein at least one of the segmentation model or the deep learning model comprises at least one convolutional neural network.
Example 10: the method of embodiment 8 or embodiment 9, wherein the processing comprises: the segmented image data and clinical data are processed using a deep learning model to generate a predictive output, wherein the clinical data includes at least one of a baseline Choroidal Neovascularization (CNV) type, a baseline visual acuity measurement, or a baseline age.
Example 11: the method of embodiment 10 wherein the deep learning model comprises a Convolutional Neural Network (CNN) system, wherein a first portion of the CNN system comprises the convolutional neural network and a second portion of the CNN system comprises the custom dense layer portion, and wherein the processing of the segmented image data and the clinical data comprises: processing the segmented image data using a first portion of the CNN system to generate a first intermediate output; connecting a set of vectors for clinical data to the first intermediate output to form a second intermediate output; and processing the second intermediate output using the custom dense layer portion to generate a predicted output.
Example 12: the method of any one of embodiments 8-11, wherein the final output comprises at least one of: a binary classification indicating whether fibrosis progression is predicted; clinical trial advice to include or exclude subjects from a clinical trial based on a predicted output or binary classification; or alter a treatment recommendation for at least one of the subject of the treatment type or adjust the treatment regimen based on the prediction output or the binary classification.
Example 13: a method, comprising: receiving at least one of clinical data or retinal feature data for a retina of a subject having neovascular age-related macular degeneration (nAMD); processing at least one of the clinical data or the retinal feature data using a regression model to generate a predicted output; and generating a final output indicative of a risk of developing fibrosis in the retina based on the predicted output.
Example 14: the method of embodiment 13, further comprising: retinal feature data is extracted from the segmented image data via a feature extraction model.
Example 15: the method of embodiment 14, further comprising: the OCT image data is segmented via a segmentation model comprising at least one neural network to form segmented image data.
Example 16: the method according to any one of embodiments 13-15, wherein the clinical data comprises at least one of a baseline Choroidal Neovascularization (CNV) type, a baseline visual acuity measurement, or a baseline age, and wherein the retinal feature data comprises at least one of a first feature value associated with at least one retinal layer element or a second feature value associated with at least one retinal pathological element.
Example 17: the method of any one of embodiments 13-16, wherein the regression model is trained using at least one of Ridge regularization, lasso regularization, or ELASTIC NET regularization.
Example 18: the method of any one of embodiments 13-17, wherein the predictive output comprises a score indicative of a probability that fibrosis is likely to develop.
Example 19: the method of any one of embodiments 13-18, wherein the final output comprises at least one of: a binary classification indicating whether fibrosis progression is predicted; clinical trial advice to include or exclude subjects from a clinical trial based on a predicted output or binary classification; or alter a treatment recommendation for at least one of the subject of the treatment type or adjust the treatment regimen based on the prediction output or the binary classification.
Example 20: the method of embodiment 3 or any of embodiments 13-19, wherein the retinal feature data comprises at least one of: the grade of subretinal highly reflective material (SRHM), the grade of Pigment Epithelial Detachment (PED), the maximum height of subretinal fluid (SRF), the maximum thickness between the interface of the outer stratum reticulare (OPL) and Henle Fiber Layer (HFL) and the Retinal Pigment Epithelium (RPE) layer, or the thickness between the Inner Limiting Membrane (ILM) layer and the RPE layer.
IX. Other Notes
Headings and subheadings between chapters and sub-chapters of this document are only used to improve readability and do not imply that features cannot be combined across chapters and sub-chapters. Thus, the sections and subsections do not describe separate embodiments. Any one or more of the embodiments described herein in any section or with respect to any figure may be combined or otherwise integrated with any one or more of the other embodiments described herein.
Some embodiments of the present disclosure include a system including one or more data processors. In some embodiments, the system includes a non-transitory computer-readable storage medium containing instructions that, when executed on the one or more data processors, cause the one or more data processors to perform part or all of one or more methods disclosed herein and/or part or all of one or more processes disclosed herein. Some embodiments of the present disclosure include a computer program product tangibly embodied in a non-transitory machine-readable storage medium, comprising instructions configured to cause one or more data processors to perform part or all of one or more methods disclosed herein and/or part or all of one or more processes disclosed herein.
The terms and expressions which have been employed are used as terms of description and not of limitation, and there is no intention in the use of such terms and expressions of excluding any equivalents of the features shown and described or portions thereof, but it is recognized that various modifications are possible within the scope of the invention claimed. Thus, although the claimed invention has been specifically disclosed by embodiments and optional features, it will be appreciated that modifications and variations of the concepts disclosed herein may be resorted to by those skilled in the art, and that such modifications and variations are considered to be within the scope of this invention as defined by the appended claims.
The following description merely provides preferred exemplary embodiments and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the ensuing description of the preferred exemplary embodiments will provide those skilled in the art with an enabling description for implementing various embodiments. It should be understood that various changes can be made in the function and arrangement of elements (elements in a block diagram or schematic, elements in a flow diagram, etc.) without departing from the spirit and scope as set forth in the appended claims.
In the following description, specific details are given to provide a thorough understanding of the embodiments. It may be evident, however, that the embodiments may be practiced without these specific details. For example, circuits, systems, networks, processes, and other components may be shown in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known circuits, processes, algorithms, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments.

Claims (20)

1. A method, comprising:
Receiving Optical Coherence Tomography (OCT) image data for a retina of a subject having neovascular age-related macular degeneration (nAMD);
Processing the OCT image data using a model system comprising a machine learning model to generate a prediction output; and
Generating a final output indicative of a risk of developing fibrosis in the retina based on the prediction output.
2. The method of claim 1, wherein the machine learning model comprises a deep learning model, and wherein the processing comprises:
segmenting the OCT image data via a segmentation model comprising at least one neural network to form segmented image data; and
processing the segmented image data using the deep learning model of the model system to generate the prediction output.
3. The method of claim 2, wherein the machine learning model comprises a regression model, and wherein the processing further comprises:
extracting retinal feature data from the segmented image data via a feature extraction model, wherein the retinal feature data includes at least one of a first feature value associated with at least one retinal layer element or a second feature value associated with at least one retinal pathological element; and
Processing the OCT image data using the regression model to generate the prediction output.
4. A method according to any one of claims 1 to 3, wherein the machine learning model comprises at least one convolutional neural network.
5. The method of any of claims 1-4, wherein the machine learning model comprises a deep learning model, and wherein the processing comprises:
processing the OCT image data and clinical data using the deep learning model to generate the prediction output, wherein the clinical data includes at least one of a baseline Choroidal Neovascularization (CNV) type, a baseline visual acuity measurement, or a baseline age.
6. The method of claim 5, wherein the deep learning model comprises a Convolutional Neural Network (CNN) system, wherein a first portion of the CNN system comprises a convolutional neural network and a second portion of the CNN system comprises a custom dense layer portion, and wherein the processing of the OCT image data and the clinical data comprises:
Processing the OCT image data using the first portion of the CNN system to generate a first intermediate output;
Concatenating a set of vectors for the clinical data with the first intermediate output to form a second intermediate output; and
Processing the second intermediate output using the custom dense layer portion to generate the prediction output.
7. The method of any one of claims 1 to 6, wherein the final output comprises at least one of:
A binary classification indicating whether fibrosis progression is predicted;
Clinical trial advice to include or exclude the subject from a clinical trial based on the prediction output or the binary classification; or
A treatment recommendation to alter at least one of a treatment type or a treatment regimen for the subject based on the prediction output or the binary classification.
8. A method, comprising:
Receiving Optical Coherence Tomography (OCT) image data for a retina of a subject having neovascular age-related macular degeneration (nAMD);
Segmenting the OCT image data using a segmentation model to generate segmented image data;
processing the segmented image data using a deep learning model to generate a prediction output; and
Generating a final output indicative of a risk of developing fibrosis in the retina based on the prediction output.
9. The method of claim 8, wherein at least one of the segmentation model or the deep learning model comprises at least one convolutional neural network.
10. The method of claim 8 or claim 9, wherein the processing comprises:
Processing the segmented image data and clinical data using the deep learning model to generate the predictive output, wherein the clinical data includes at least one of a baseline Choroidal Neovascularization (CNV) type, a baseline visual acuity measurement, or a baseline age.
11. The method of claim 10, wherein the deep learning model comprises a Convolutional Neural Network (CNN) system, wherein a first portion of the CNN system comprises a convolutional neural network and a second portion of the CNN system comprises a custom dense layer portion, and wherein the processing of the segmented image data and the clinical data comprises:
processing the segmented image data using the first portion of the CNN system to generate a first intermediate output;
concatenating a set of vectors for the clinical data with the first intermediate output to form a second intermediate output; and
processing the second intermediate output using the custom dense layer portion to generate the prediction output.
12. The method of any of claims 8-11, wherein the final output comprises at least one of:
A binary classification indicating whether fibrosis progression is predicted;
Clinical trial advice to include or exclude the subject from a clinical trial based on the prediction output or the binary classification; or
A treatment recommendation to alter at least one of a treatment type or a treatment regimen for the subject based on the prediction output or the binary classification.
13. A method, comprising:
receiving at least one of clinical data or retinal feature data for a retina of a subject having neovascular age-related macular degeneration (nAMD);
processing the at least one of the clinical data or the retinal feature data using a regression model to generate a prediction output; and
generating a final output indicative of a risk of developing fibrosis in the retina based on the prediction output.
14. The method as recited in claim 13, further comprising:
extracting the retinal feature data from segmented image data via a feature extraction model.
15. The method as recited in claim 14, further comprising:
segmenting OCT image data via a segmentation model comprising at least one neural network to form the segmented image data.
16. The method of any one of claims 13-15, wherein the clinical data comprises at least one of a baseline Choroidal Neovascularization (CNV) type, a baseline visual acuity measurement, or a baseline age, and wherein the retinal feature data comprises at least one of a first feature value associated with at least one retinal layer element or a second feature value associated with at least one retinal pathological element.
17. The method of any one of claims 13-16, wherein the regression model is trained using at least one of Ridge regularization, Lasso regularization, or Elastic Net regularization.
18. The method of any one of claims 13 to 17, wherein the prediction output comprises a score indicative of a probability that fibrosis is likely to develop.
19. The method of any of claims 13-18, wherein the final output comprises at least one of:
A binary classification indicating whether fibrosis progression is predicted;
Clinical trial advice to include or exclude the subject from a clinical trial based on the prediction output or the binary classification; or
A treatment recommendation to alter at least one of a treatment type or a treatment regimen for the subject based on the prediction output or the binary classification.
20. The method of claim 3 or any one of claims 13-19, wherein the retinal feature data comprises at least one of: a grade of subretinal highly reflective material (SHRM), a grade of Pigment Epithelial Detachment (PED), a maximum height of subretinal fluid (SRF), a maximum thickness between the interface of the Outer Plexiform Layer (OPL) and Henle Fiber Layer (HFL) and the Retinal Pigment Epithelium (RPE) layer, or a thickness between the Inner Limiting Membrane (ILM) layer and the RPE layer.
CN202280083690.4A 2021-12-16 2022-12-16 Prognosis model for predicting fibrosis development Pending CN118451452A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US63/290,628 2021-12-16
US202263330756P 2022-04-13 2022-04-13
US63/330,756 2022-04-13
PCT/US2022/081817 WO2023115007A1 (en) 2021-12-16 2022-12-16 Prognostic models for predicting fibrosis development

Publications (1)

Publication Number Publication Date
CN118451452A (en) 2024-08-06

Family

ID=92333790

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280083690.4A Pending CN118451452A (en) 2021-12-16 2022-12-16 Prognosis model for predicting fibrosis development

Country Status (1)

Country Link
CN (1) CN118451452A (en)

Legal Events

Date Code Title Description
PB01 Publication