WO2023081729A1 - Methods and systems for predicting biomarkers using optical coherence tomography - Google Patents

Methods and systems for predicting biomarkers using optical coherence tomography

Info

Publication number
WO2023081729A1
Authority
WO
WIPO (PCT)
Prior art keywords
amd
oct
biomarker
prediction
biomarkers
Prior art date
Application number
PCT/US2022/079182
Other languages
English (en)
Inventor
Eran Halperin
Oren AVRAM
Berkin DURMUS
Srinivas Sadda
Jeffrey CHIANG
Original Assignee
The Regents Of The University Of California
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by The Regents Of The University Of California filed Critical The Regents Of The University Of California
Publication of WO2023081729A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/01Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/40ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10101Optical tomography; Optical coherence tomography [OCT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic

Definitions

  • the current disclosure is directed to deep learning methods and systems capable of detecting and classifying biomarkers for ocular diseases; and more particularly to methods and systems for detecting biomarkers within ophthalmic imaging volumes using such deep learning methods and systems.
  • Age-related macular degeneration may be a leading cause of vision loss in the older population. While late-stage disease such as geographic atrophy (GA), in which permanent vision loss has occurred, may be irreversible, anti-vascular endothelial growth factor (anti-VEGF) treatments have been successful in managing the progression of exudative (wet) manifestations of macular degeneration, and vitamin supplements have been associated with reduced rates of neovascularization and atrophy. Early disease management and timely intervention have generally been associated with better outcomes in terms of loss of visual acuity.
  • GA geographic atrophy
  • anti-VEGF anti-vascular endothelial growth factor
  • OCT Optical Coherence Tomography
  • Methods and systems in accordance with various embodiments of the invention utilize deep learning to capture three-dimensional (3D) information from two-dimensional (2D) images, and to detect irregularities in 3D images.
  • 3D images that can be analyzed include (but are not limited to): optical coherence tomography, and magnetic resonance imaging.
  • Several embodiments can detect biomarkers within optical coherence tomography (OCT) volumes using deep learning methods and systems.
  • Many embodiments can detect retinal disease related biomarkers from OCT images.
  • Several embodiments provide diagnosis and treatment of retinal diseases including (but not limited to) age-related macular degeneration (AMD), AMD subtypes, AMD progression, and/or AMD deterioration based on the biomarker detection.
  • AMD age-related macular degeneration
  • One embodiment of the invention includes a method to predict retinal disease biomarkers comprising:
  • the prediction is a transformation of the feature vector; where the final output is at least one biomarker selected from the group consisting of: incomplete retinal pigment epithelial and outer retinal atrophy (iRORA), complete retinal pigment epithelial and outer retinal atrophy (cRORA), and any combinations thereof.
  • iRORA incomplete retinal pigment epithelial and outer retinal atrophy
  • cRORA complete retinal pigment epithelial and outer retinal atrophy
  • the prediction takes place at least 3 months before the biomarker becomes pronounced.
  • the at least one biomarker is an indicator of a retinal disease selected from the group consisting of: age-related macular degeneration (AMD), an AMD subtype, AMD progression, AMD deterioration, and any combinations thereof.
  • an AMD subtype is selected from the group consisting of: early or intermediate AMD (earlyIntAMD), wet AMD, geographic atrophy, and any combinations thereof.
  • the input data further comprising an electronic health record.
  • the electronic health record comprises a data selected from the group consisting of: age, sex, smoking status, race, ethnicity, cardiovascular comorbidity, and any combinations thereof.
  • the two-dimensional image is a fovea scan.
  • the pre-trained feature extractor is ResNet18.
  • the plurality of slices is stacked linearly to form the two-dimensional image.
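As an illustration of the slice-stacking step described above, a minimal sketch follows; the 19-slice volume and the pixel dimensions are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def stack_slices_2d(volume):
    """Stack the B-scans of a 3D OCT volume of shape
    (n_slices, height, width) vertically into one 2D image,
    so a 2D pre-trained backbone (e.g., ResNet18) can consume it."""
    n_slices, height, width = volume.shape
    return volume.reshape(n_slices * height, width)

# hypothetical macular OCT volume: 19 B-scans of 496 x 512 pixels
vol = np.random.rand(19, 496, 512)
img = stack_slices_2d(vol)
assert img.shape == (19 * 496, 512)
```

Because the array is stored in C order, `reshape` places the slices one below another without copying, which keeps this step cheap even for large volumes.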
  • An additional embodiment includes a method to predict retinal disease biomarkers comprising:
  • generating an output of biomarker prediction, wherein the prediction is a classification of the at least one three-dimensional OCT image; wherein the final output is at least one biomarker selected from the group consisting of: subretinal drusenoid deposits (SDD), high central drusen volume (DV), intraretinal hyperreflective foci (HRF), hyporeflective drusen cores (hDC), and any combinations thereof.
  • SDD subretinal drusenoid deposits
  • DV high central drusen volume
  • HRF intraretinal hyperreflective foci
  • hDC hyporeflective drusen cores
  • the prediction takes place at least 3 months before the biomarker becomes pronounced.
  • the at least one biomarker is an indicator of a retinal disease selected from the group consisting of: age-related macular degeneration (AMD), an AMD subtype, AMD progression, AMD deterioration, and any combinations thereof.
  • an AMD subtype is selected from the group consisting of: early or intermediate AMD (earlyIntAMD), wet AMD, geographic atrophy, and any combinations thereof.
  • the two-dimensional image is a fovea scan.
  • the pre-trained feature extractor is ResNet18.
  • the plurality of slices is stacked linearly to form the two-dimensional image.
  • FIGs. 1A - 1B provide atrophy examples of iRORA and cRORA lesions in accordance with prior art.
  • FIGs. 2A - 2D provide receiver operating characteristic and precision recall curves summarizing performance identifying iRORA and cRORA within the training set in accordance with an embodiment of the invention.
  • FIG. 3 provides feature importance maps from the training set in accordance with an embodiment of the invention.
  • FIGs. 4A - 4B provide iRORA and cRORA classification performance at the B-scan level on the independent test set of 60 B-scans in accordance with an embodiment of the invention.
  • FIG. 5 provides sensitivity and precision of iRORA and cRORA prediction in accordance with an embodiment of the invention.
  • FIG. 6 provides positive outputs from CAM validation set in accordance with an embodiment of the invention.
  • FIG. 7 provides negative outputs from CAM validation set in accordance with an embodiment of the invention.
  • FIGs. 8A and 8B provide volume level prediction performance on the external test set representative of the general-population in accordance with an embodiment of the invention.
  • FIG. 9 provides error analysis on AMISH validation in accordance with an embodiment of the invention.
  • FIG. 10 provides heat maps on external validation set for successful examples in accordance with an embodiment of the invention.
  • FIGs. 11A - 11B provide prediction of wet AMD conversion in accordance with an embodiment of the invention.
  • FIGs. 12A - 12B provide Wet AMD imputation in accordance with an embodiment of the invention.
  • FIG. 13 provides a model architecture of SLIViT in accordance with an embodiment of the invention.
  • FIG. 14 provides the receiver operating characteristic and precision-recall area under the curve scores of four biomarkers from SLIViT in accordance with an embodiment of the invention.
  • FIG. 15 provides the receiver operating characteristic and precision-recall area under the curve scores of four biomarkers from three different sites from SLIViT in accordance with an embodiment of the invention.
  • FIG. 16 provides the precision and recall scores of three biomarkers from SLIViT and three clinicians in accordance with an embodiment of the invention.
  • Deep learning methods and systems capable of capturing three-dimensional (3D) information from two-dimensional (2D) images, and methods and systems for detecting irregularities in 3D images using such deep learning methods and systems, are provided.
  • 3D images that can be analyzed include (but are not limited to): optical coherence tomography, and magnetic resonance imaging.
  • Many embodiments can detect retinal disease related biomarkers from OCT images.
  • AMD subtypes include (but are not limited to) early or intermediate AMD (earlyIntAMD), wet AMD, and geographic atrophy (GA).
  • AMD progression in accordance with an embodiment includes that detection of biomarkers can occur a period of time (for example, 3 months or 6 months) before the biomarkers become more pronounced.
  • Some embodiments can detect the presence and/or absence of clinically useful biomarkers in OCT images using deep neural networks.
  • OCT is the primary method for diagnosing retinal diseases including (but not limited to) AMD and determining treatment.
  • OCT scans can produce three-dimensional representations of the retina, which are manually reviewed by a clinician.
  • clinicians are generally looking for the presence or absence of certain biomarkers that are associated with the disease, which they use to inform clinical decisions.
  • Many embodiments provide methods and systems to identify which of retinal disease related biomarkers are present in OCT scans.
  • AMD-related biomarkers include (but are not limited to): intraretinal hyperreflective feature over drusen, intraretinal hyperreflective feature over non drusen, intraretinal cystoid spaces, drusenoid pigment epithelial detachment (PED), subretinal tissue, subretinal drusenoid deposits (SDD), incomplete retinal pigment epithelial and outer retinal atrophy (iRORA), complete retinal pigment epithelial and outer retinal atrophy (cRORA), high central drusen volume (DV), intraretinal hyperreflective foci (HRF), and hyporeflective drusen cores (hDC).
  • embodiments are able to reach accurate prediction results using a limited amount of data.
  • a number of embodiments optimize clinical workflow by directly presenting diagnostic results of the biomarkers and obviating manual image review by clinicians.
  • the automated processes in accordance with several embodiments may not be prone to fatigue and/or human biases, thus reducing the risk of mistakes due to human error.
  • Such embodiments improve diagnostic efficiency in reviewing the images while still leaving the clinical decisions to ophthalmologists.
  • Many embodiments incorporate a deep learning neural network to analyze OCT images and detect the presence and/or absence of clinically relevant biomarkers.
  • Several embodiments employ transfer learning to overcome the lack of available training data for canonical deep learning approaches.
  • a number of embodiments employ transfer learning processes in order to apply machine learning in a setting where there is a relatively small amount of sample data.
  • transfer learning leverages an external dataset including (but not limited to) foveal scans.
  • Several embodiments implement transfer learning from a 2D pre-trained model to process 3D OCT scans. Transfer learning from a 2D pre-trained model instead of a 3D pre-trained model may pose a challenge in that the spatial information between slices (inter-scan information) is not considered.
  • a vision transformer architecture with positional embedding can be used to overcome such challenges.
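One way to read the positional-embedding idea above: after a 2D backbone turns each slice into a feature token, a per-slice embedding is added so a transformer can recover the inter-slice (depth) ordering. A minimal numpy sketch, with illustrative dimensions and a stand-in for what would be a learned embedding:

```python
import numpy as np

def add_positional_embedding(slice_tokens, pos_embed):
    """Add a per-slice positional embedding to the feature tokens
    extracted from each 2D slice, restoring the inter-slice (depth)
    ordering that a purely 2D backbone discards."""
    n = slice_tokens.shape[0]
    return slice_tokens + pos_embed[:n]

tokens = np.zeros((19, 64))                       # 19 slice tokens, 64-dim features
pos = np.linspace(0, 1, 19 * 64).reshape(19, 64)  # stand-in for a learned embedding
out = add_positional_embedding(tokens, pos)
assert out.shape == (19, 64)
assert not np.allclose(out[0], out[1])            # slices are now distinguishable
```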
  • Many embodiments provide identification and characterization of changes in the outer retina and their relation to the progression of AMD to atrophy. Many embodiments implement machine learning methods to identify cRORA and iRORA lesions reliably and near human-level performance. Several embodiments provide that identification of cRORA and iRORA lesions using machine learning processes can support machine-assisted OCT reading systems, which may help enforce consistency across human readers. Certain embodiments provide that such methods have the potential of an automated approach for rapid and consistent large-scale validation of newly defined clinical biomarkers. In many embodiments, the machine learning processes include transfer learning to identify cRORA and iRORA lesions. A number of embodiments can annotate biomarker labels including (but not limited to) SDD, HRF, hDC, and DV with accuracy higher than the annotation performed by clinicians.
  • Several embodiments use a deep learning architecture to identify AMD-related biomarkers derived from OCT images.
  • Several embodiments combine OCT biomarker identification processes with EHR data including (but not limited to) demographics and comorbidities, to assess prediction performance relative to known factors.
  • EHR data-based algorithms in accordance with some embodiments outperform existing solutions in clinically relevant metrics and can provide actionable information which has the potential to improve patient care.
  • Several embodiments provide frameworks for automated large-scale processing of OCT volumes, making it possible to analyze vast archives without human supervision.
  • FIGs. 1A and 1B illustrate iRORA and cRORA lesions as examples of atrophy.
  • cRORA may lead to a high degree of inter-rater reliability when determining its presence or absence.
  • heterogeneity in the presentation of iRORA has resulted in lower levels of reliability in terms of identification, making its identification and diagnosis susceptible to differences in clinician bias and training.
  • Since iRORA can be considered a precursor to cRORA and, eventually, GA, developing a reliable method for identifying iRORA as an intervention point and as an early indicator of advanced disease may be ideal.
  • Many embodiments implement machine learning methods to identify cRORA and iRORA lesions reliably and near human-level performance.
  • identification of cRORA and iRORA lesions using machine learning processes can support machine-assisted OCT reading systems, which may help enforce consistency across human readers.
  • Some embodiments provide that the intermediate stages iRORA and cRORA can be identified using automated methods, and rigorously evaluate their performance in two external datasets. Certain embodiments provide that such methods have the potential of an automated approach for rapid and consistent large-scale validation of newly defined clinical biomarkers.
  • the machine learning processes include transfer learning to identify cRORA and iRORA lesions.
  • the models in accordance with some embodiments are developed using a relatively small collection of training data. Several embodiments provide that such approaches generalize well to independently collected data reflective of two different populations.
  • Several embodiments quantify model performance using AUROC and AUPRC. While B-scan level performance is monitored for consistency with the training process, whole-volume level performance provides a more realistic indication of how the model might perform in production. Given the distinctive appearance and well-defined criteria for cRORA, performance at both the B-scan level and volume level is strong across all datasets.
  • the B-scan level AUPRC for iRORA identification can be significantly higher in the external validation set (0.82 [0.69, 0.92]) versus the cross-validated performance within the training set (0.32 [0.29, 0.36]), while AUROC is slightly lower. This may suggest that while the model might be slightly less sensitive for identifying iRORA in independently acquired images, it is still able to perform with a high degree of positive predictive value.
  • a common limitation of machine learning models may be their ability to generalize beyond the intended cohort.
  • the machine models in accordance with some embodiments are developed using OCT volumes acquired from patients already referred to an ophthalmology clinic. Certain embodiments provide that AUPRC reduces from 0.96 to 0.83 and from 0.85 to 0.61 for cRORA and iRORA, respectively, when evaluated on a sample representative of the general population in terms of AMD prevalence. However, for patients already exhibiting signs of macular degeneration, the model may be precise (external validation AUPRC: 0.84 and 0.82 for cRORA and iRORA, respectively).
  • biomarker identification with machine learning processes can be selectively applied to patients who exhibit signs of macular degeneration and are thus at risk for atrophy in the ophthalmology clinic.
  • such processes can be used for rapidly identifying potential patients for clinical trials, which would save patient recruitment costs.
  • Some embodiments provide that the processes might be used to rapidly annotate and identify cohorts for future study and integrate into the current clinical research protocol.
  • Model performance can be quantified in terms of area under the receiver operating characteristic curve (AUROC) and area under the precision-recall curve (AUPRC) in accordance with some embodiments. While the AUROC metric may be commonly used in binary classification tasks, in the presence of imbalanced data the AUPRC metric summarizes the trade-off between positive predictive value and sensitivity, making it an important indicator of feasibility and practicality.
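The AUROC metric mentioned above can be computed without any curve plotting, via its rank-statistic interpretation: the probability that a randomly chosen positive scores higher than a randomly chosen negative. A small self-contained sketch:

```python
import numpy as np

def auroc(y_true, y_score):
    """AUROC via the Mann-Whitney U formulation: the fraction of
    (positive, negative) pairs where the positive scores higher,
    counting ties as half a win."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# 3 of the 4 (positive, negative) pairs are ranked correctly
assert auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]) == 0.75
```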
  • the models can be evaluated within the training set using cross-validation, in which a new model is trained each iteration and evaluated on a left-out validation set, and the final models are then externally validated on two separate test sets independently collected across different sites.
  • Some embodiments provide internal validation. Within the training set (71 patients, 188 volumes, 10,266 B-scans; details on the training set data can be found below), AUROC and AUPRC are averaged across eight cross-validation folds to summarize performance. Cross-validation can be stratified at the patient level, such that the data from any given patient could appear only in the training or the testing set, and not both. AUROC and AUPRC metrics can be computed for both iRORA and cRORA classification at the single B-scan level, capturing the ability of the model to accurately detect the lesion in any given slice, and at the volume (whole OCT) level, capturing the ability of the model to detect the lesion in any given eye.
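Patient-level stratification as described above can be sketched as follows; the patient IDs and fold count are illustrative:

```python
import numpy as np

def patient_folds(patient_ids, n_folds=8, seed=0):
    """Assign whole patients (not individual B-scans) to folds so
    that no patient's data appears in both training and validation."""
    rng = np.random.default_rng(seed)
    unique = np.array(sorted(set(patient_ids)))
    rng.shuffle(unique)
    fold_of = {p: i % n_folds for i, p in enumerate(unique)}
    return np.array([fold_of[p] for p in patient_ids])

# hypothetical B-scan-to-patient mapping
ids = ["p1", "p1", "p2", "p3", "p3", "p3"]
folds = patient_folds(ids, n_folds=2)
# every scan from the same patient lands in the same fold
assert folds[0] == folds[1]
assert folds[3] == folds[4] == folds[5]
```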
  • the volume prediction can be obtained by taking the maximum output of the model over all slices contained within the OCT volume.
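The max-aggregation described above reduces to a single operation; the per-slice probabilities below are made up for illustration:

```python
import numpy as np

def volume_score(slice_probs):
    """Aggregate per-B-scan lesion probabilities into one volume-level
    score by taking the maximum: a lesion detected confidently in any
    single slice flags the whole eye."""
    return float(np.max(slice_probs))

# hypothetical per-slice model outputs for one OCT volume
assert volume_score([0.02, 0.10, 0.91, 0.05]) == 0.91
```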
  • the model is able to identify cRORA with mean AUROC 0.978 (95% CI: ) and mean AUPRC 0.803 (95% CI: ), and is able to identify iRORA with mean AUROC 0.911 (95% CI: ) and AUPRC 0.461 (95% CI: ).
  • the CNN model is able to identify cRORA with mean AUROC 0.981 (95% CI: ) and mean AUPRC 0.972 (95% CI: ), and is able to identify iRORA with mean AUROC 0.900 (95% CI: ) and AUPRC 0.861 (95% CI: ). This pattern of results shows that, when considering model outputs across the set of B-scans, the model is able to accurately and precisely identify both iRORA and cRORA lesions, despite having relative difficulty on any single given B-scan.
  • Model performance using the training data in accordance with an embodiment is illustrated in FIGs. 2A - 2D.
  • Curves labeled 201 represent performance at the B-scan level.
  • Curves labeled 202 represent performance at the volume level (aggregating over B-scans). Numbers in the figure legends represent area under the respective curve with a 95% confidence interval computed using a bootstrapping procedure.
  • Feature importance maps from the training set in accordance with an embodiment are illustrated in FIG. 3.
  • Left column: B-scans exhibiting an iRORA lesion.
  • Middle column: B-scans exhibiting a cRORA lesion.
  • Right column: B-scans exhibiting healthy maculae. Heat maps for B-scans with lesions appear properly localized, and heat maps for B-scans of healthy maculae appear diffuse and result in a correct prediction.
  • a “final” model can be trained by randomly splitting the training dataset into 80/20 training/validation sets with the same training parameters.
  • the resulting model can be evaluated on two external datasets: one a set of B-scans from a cohort of patients already exhibiting symptoms of age-related macular degeneration (though not necessarily signs of atrophy), and one representative of a general elderly population, randomly sampled from the Amish population, in order to assess the performance of the algorithm applied to a proxy of the general population versus a targeted cohort.
  • Certain embodiments provide clinical comparisons. Recent work has focused on characterizing and refining iRORA and cRORA lesions. In many embodiments, ROC and PRC curves can be generated against the majority judgments of the annotators. On the set of 60 randomly sampled B-scans, the model performed with AUROC (iRORA: 0.71, 95% CI (0.57, 0.84); cRORA: 0.82, 95% CI (0.69, 0.91)) and AUPRC (iRORA: 0.83, 95% CI (0.69, 0.92); cRORA: 0.84, 95% CI (0.71, 0.94)).
  • FIG. 4A illustrates the Receiver Operating Characteristic curves.
  • FIG. 4B illustrates the Precision-Recall curves. Numbers indicate area under the respective curve and the 95% confidence interval computed using a bootstrapping procedure.
  • FIG. 5 illustrates sensitivity and precision of iRORA and cRORA prediction in accordance with an embodiment of the invention.
  • Positive outputs from the CAM validation set in accordance with an embodiment are illustrated in FIG. 6.
  • the model correctly assigns higher probabilities and feature representations to the iRORA and cRORA lesions depicted.
  • Negative outputs from CAM validation set in accordance with an embodiment are illustrated in FIG. 7.
  • the model represents abnormalities in the macula as shown by its heat map representation, but correctly assigns low probabilities to these examples.
  • The external validation set comprises OCT volumes totaling 108,289 B-scans.
  • Model performance can be evaluated at the whole-OCT level (26 iRORA and 33 cRORA cases), which is substantially different from the lesion prevalence in the training set.
  • the maximum model output value can be taken for each volume, and the volume is considered positive if a lesion is present in any of the component B-scans.
  • AUROC performance may be nearly perfect.
  • model performance may be reduced for this external validation set relative to cross-validation results for both iRORA and cRORA classification in terms of AUPRC (iRORA: 0.61, 95% CI (0.45, 0.82); cRORA: 0.83, 95% CI (0.68, 0.95)).
  • a decrease in performance was anticipated, as these patients were randomly sampled from the general elderly population where patients may not exhibit any signs of macular degeneration or ocular disease, as compared to the training set which was sampled from a clinical cohort.
  • the relative stability of model performance with respect to iRORA and cRORA classification despite the injection of completely healthy patients not previously encountered highlights its potential for generalizing across cohorts.
  • FIG. 8A illustrates Receiver operating characteristic curves.
  • FIG. 8B illustrates Precision recall curves. Numbers indicate area and 95% confidence intervals under the respective curves.
  • Error analysis on the AMISH validation set in accordance with an embodiment is illustrated in FIG. 9.
  • the slices that the model missed may have image artifacts or may be of poor quality.
  • Heat maps on the external validation set for successful examples in accordance with an embodiment are illustrated in FIG. 10.
  • Several embodiments combine OCT biomarker identification processes with electronic health record (EHR) data including (but not limited to) demographics and comorbidities, to assess prediction performance relative to known factors.
  • the OCT and EHR data-based algorithms in accordance with some embodiments outperform existing solutions in clinically relevant metrics and can provide actionable information which has the potential to improve patient care.
  • Several embodiments provide frameworks for automated large-scale processing of OCT volumes, making it possible to analyze vast archives without human supervision.
  • Several embodiments utilize a machine learning process referred to as SLIVER-net.
  • the SLIVER-net processes leverage transfer learning to reduce the amount of required training data and provide a method for retrospectively labeling existing OCT volumes with clinically relevant biomarker scores, where the risk score may provide an estimate of the likelihood of the presence of the biomarker in the patient.
  • SLIVER-net offers a middle-ground between a purely data driven approach and a clinically driven approach based on biomarkers. This approach may allow for efficient, low-cost, large-scale studies and analyses of AMD progression while anchoring inferences and conclusions to clinically-relevant biomarkers.
  • Several embodiments apply SLIVER-net retrospectively to ophthalmic imaging data to evaluate the clinical utility of machine-based annotations.
  • SLIVER-net may be used to automatically annotate the available OCT volumes with a risk score for anatomical biomarkers not currently captured within the EHR, such as high central drusen volume, hyporeflective drusen core, hyperreflective foci, and subretinal drusenoid deposits.
  • EHR electronic health record
  • AMD-related demographic, behavioral, and comorbid risk factors can be extracted from the EHR as additional predictors.
  • Some embodiments implement out-of-sample prediction frameworks to evaluate the predictive utility of the machine-read OCT biomarkers relative to EHR- derived features and risk factors.
  • Some embodiments construct several candidate feature sets consisting of machine-read OCT and EHR-derived features, and compare prediction performance for models trained using the different feature sets.
  • SLIVER-net was used to automatically annotate OCT volumes for the following machine-read OCT features: hyperreflective foci (HRF), hyporeflective drusen core (hDC), subretinal drusenoid deposits (SDD), reticular pseudodrusen (RPD), and high central drusen volume (hcDV).
  • Several embodiments extract clinically relevant features and outcomes from the EHR for the corresponding encounters during which OCT volumes are acquired.
  • the following clinical risk factors are extracted: age, sex, smoking status, race, ethnicity, and cardiovascular comorbidities such as hypertension, hyperlipidemia, diabetes, and obesity.
  • ICD-10 codes relating to AMD (H35.3XXX) are also extracted and used to define the current AMD status of a given eye as well as future outcomes.
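The ICD-10 grouping described above can be sketched as a small helper. The H35.31/H35.32 prefixes follow the code families named later in this document; the function name and the "no_amd"/"amd_unspecified" labels are illustrative assumptions.

```python
import re

def amd_status(icd10_code):
    """Classify an ICD-10 code string into an AMD category (sketch)."""
    code = icd10_code.upper().replace(" ", "")
    if re.match(r"^H35\.31", code):
        return "dry_amd"            # H35.31XX family
    if re.match(r"^H35\.32", code):
        return "wet_amd"            # H35.32XX family
    if re.match(r"^H35\.3", code):
        return "amd_unspecified"    # other H35.3x codes
    return "no_amd"
```

A per-eye AMD status can then be derived by applying this helper to the codes recorded at the relevant encounter.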
  • Model performance in accordance with many embodiments can be quantified in terms of area under the receiver operating characteristic curve (AUROC) and area under the precision-recall curve (AUPRC). While the AUROC metric is standard for binary classification tasks, in the presence of imbalanced data the AUPRC metric summarizes the trade-off between positive predictive value and sensitivity, making it an important indicator of feasibility and practicality.
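Both metrics can be computed with scikit-learn; the data below are synthetic stand-ins, and `average_precision_score` is used as a common summary of the precision-recall curve.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)             # illustrative binary labels
y_score = y_true * 0.5 + rng.random(200) * 0.7    # noisy scores correlated with labels

auroc = roc_auc_score(y_true, y_score)
auprc = average_precision_score(y_true, y_score)  # AUPRC summary
```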
  • the models are evaluated within the training set using cross-validation, in which a new model is trained each iteration and evaluated on a left-out validation set.
  • Many embodiments provide prediction of future conversion to Wet AMD. Some embodiments apply predictive models including (but not limited to) logistic regression analyses, neural networks, and decision-tree based approaches, to predict future conversion to wet AMD based on extracted features. EHR and machine-read OCT features can be used as input features to predict a future diagnosis of Wet AMD within a few months such as 3 months, 6 months, etc. up to two years. For this analysis, all OCT scans of patients already presenting with advanced AMD (wet AMD or GA) are excluded.
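A minimal sketch of this prediction setup, using scikit-learn logistic regression on synthetic stand-in features (the feature groupings and dimensions are illustrative, not the study's actual feature matrix):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 400
ehr = rng.random((n, 4))         # stand-ins for EHR risk factors
oct_scores = rng.random((n, 5))  # stand-ins for machine-read biomarker risk scores
X = np.hstack([ehr, oct_scores])
# synthetic outcome loosely driven by the biomarker scores
y = (oct_scores.sum(axis=1) + rng.normal(0, 0.5, n) > 2.8).astype(int)

model = LogisticRegression(max_iter=1000)
aucs = cross_val_score(model, X, y, cv=8, scoring="roc_auc")  # 8-fold CV AUROC
```

Swapping the synthetic arrays for real EHR and OCT feature matrices leaves the rest of the pipeline unchanged.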
  • the models are evaluated for their ability to predict the two-year outcome (i.e., whether the eye converts to Wet AMD within two years).
  • AUROC may not be significantly higher after inclusion of these machine-read features, presumably due to the high imbalance of class prevalence.
  • the machine-read OCT features are also by themselves highly predictive of Wet AMD conversion, yielding an AUROC of 0.80 (0.72, 0.9) and an AUPRC of 0.44 (0.25, 0.67).
  • FIG. 11A illustrates prediction outcome at two years.
  • FIG. 11B illustrates AUROC as a function of prediction time frame, and AUPRC as a function of prediction time frame. 95% confidence intervals are computed using bootstrapping.
  • Recent work has applied deep learning to raw OCT volumes to predict 6-month wet AMD conversion in the fellow eye when a patient already had Wet AMD (see, e.g., Yim et al., Nature Medicine, 2020, 26, 6, 892-99; the disclosure of which is incorporated herein by reference). Certain embodiments apply the same inclusion and exclusion criteria by limiting a post-hoc analysis of model performance to the fellow eye of patients who already had wet AMD.
  • Yim et al. did not report the positive predictive value (PPV) of their model, but they included PPV metrics for three retinal specialists and three optometrists. Some embodiments perform on par with these clinicians: every clinician's performance is within or below our confidence intervals.
  • Table 1 Performance metrics of the combined prediction model for a timeframe of 26 and 104 weeks. Results using a threshold selected for high sensitivity (greater than about 80%), a threshold for high specificity (greater than about 90%), and one for a balanced case are presented.
  • FIG. 12A illustrates Receiver Operating Characteristic (ROC) curves.
  • FIG. 12B illustrates Precision-Recall (PR) curves. Each curve represents the performance for a different algorithm. Numbers in the figure legends represent the area under the respective curve and a 95% confidence interval computed using a bootstrapping procedure.
  • Machine-driven annotations in accordance with some embodiments are able to accurately predict the onset of Wet AMD within two years of the OCT acquisition.
  • adding the machine-read biomarkers can improve the ability of logistic regression models trained to predict future conversion to Wet AMD.
  • Using a cross-validation approach, several embodiments observe this trend across 15,000 OCT volumes collected from nearly 4,200 patients. In certain embodiments, the study is performed on 4,182 patients.
  • a number of embodiments provide that the dataset includes clinically more relevant annotations. Some embodiments provide that prediction based on these biomarkers outperforms previously reported performance by deep learning approaches.
  • OCT provides non-invasive visualization of the retinal structures and can resolve sub-clinical features at high resolution that may not be visible under standard clinical ophthalmoscopy.
  • Early detection of AMD can help slow its progression and prevent severe vision impairment.
  • the annotation of early biomarkers in OCT scans is laborious and time-consuming. Given the high inter-clinician annotation variability, automated medical image annotation may make the annotation process more efficient and accurate.
  • CNNs Convolutional Neural Networks
  • a large annotated dataset may be needed to train deep vision models.
  • getting a large training dataset can be a hurdle, especially for medical imaging in which the annotation is a laborious process that is largely done by professional clinicians.
  • junior clinicians may often have a hard time differentiating a normal state from early disease-related subtle changes, especially when examining volumetric OCTs due to their natural complexity.
  • the large-dataset requirement can be relaxed by a transfer learning approach, rather than starting training from randomly initialized weights.
  • the weights of a CNN that was trained for one task of a source domain can be utilized as the training initialization values and are then fine-tuned for the task of interest of a target domain.
  • the supply of pre-trained models for 3D OCTs is very limited, and therefore, a straightforward transfer learning approach may not be applied.
  • AMD subtle biomarkers include (but are not limited to) sub-retinal drusenoid deposits (SDD), high central drusen volume (DV), intraretinal hyperreflective foci (HRF), and hyporeflective drusen cores (hDC). These phenotypes can be risk factors for the progression of AMD that may precede vision impairment.
  • SLIVER-net models can use a 2D OCT pre-trained model to extract a feature map and then aggregate the features using 1D CNNs. This aggregation allows SLIVER-net models to consider the spatial information that may be lost when the slices are tiled and reduces the effect of arbitrary scanning noise.
  • SLIViT vision transformers
  • SLIViT in accordance with several embodiments can use a pre-trained ResNet18 architecture to extract feature maps, and then use a vision transformer architecture (with positional embedding) to comprehensively aggregate the inter-slice information.
  • FIG. 13 illustrates a SLIViT model structure in accordance with an embodiment of the invention.
  • a 3D OCT volume image can be used as an input 1301.
  • N slices of the 3D volume can be tiled into a "long image" 1302.
  • The tiled image can be fed into a 2D pre-trained (width-invariant) backbone 1303.
  • the pre-trained model can be (but is not limited to) ResNet18.
  • A per-slice feature map can be extracted 1304. Each feature can then be flattened and fed into a multi-head attention transformer encoder 1305. The transformer encoder's output can go through several fully-connected (FC) layers which, in turn, output the probability that the biomarker in question is present 1306.
  • FIG. 14 illustrates the receiver operating characteristic (ROC) and precision-recall (PR) area under the curve (AUC) scores in accordance with an embodiment of the invention.
  • the ROC-AUC and PR-AUC are shown in the left and right panels, respectively.
  • a 95% confidence interval is computed by 100 bootstrap repetitions.
  • the dashed lines represent the expected AUC of a random classifier for the corresponding biomarker and score.
  • the positive label prevalence of SDD, HRF, hDC, and DV are 0.528, 0.435, 0.313, and 0.47, respectively.
  • SLIViT shows better performance than SLIVER-net on all four biomarkers examined.
  • FIG. 15 illustrates the ROC-AUC and PR-AUC scores of four biomarkers using three different test sets in accordance with an embodiment of the invention.
  • SLIViT and SLIVER-net are trained using the same dataset.
  • the ‘ground truth’ can be obtained by an expert clinician annotation.
  • This dataset contains about 2,638 OCT scans taken from about 1,332 different patients and was collected from three different independent medical sites. Each site is used as an independent test set, shown as site A, site B, and site C in FIG. 15.
  • FIG. 15 summarizes the performance on four biomarkers: SDD, HRF, hDC, and DV. Across biomarkers, the average (bootstrapped) PR-AUC improvements of SLIViT over SLIVER-net for the three sites are 0.4, 0.293, and 0.118, respectively.
  • FIG. 16 illustrates the precision and recall scores in a sub-sample taken from site C in accordance with an embodiment.
  • FIG. 16 shows the precision and recall scores of SLIViT and three junior clinicians for annotating three biomarkers in the sub-sample from site C.
  • SLIViT outperforms the three clinicians in detecting SDD and HRF. This shows the potential for successfully automating early AMD biomarker detection if adopted by medical centers.
  • Biomarker data for iRORA and cRORA prediction were used to train the model: 101 OCT volumes (6,138 B-scans) collected from 37 patients. All patients were in the clinic for advanced macular degeneration and displayed signs of atrophy. Each B-scan is read by a single expert and annotated for the presence of iRORA or cRORA. An additional control set of 34 patients (87 OCT volumes, 4,128 B-scans) with intermediate age-related macular degeneration but no signs of atrophy is also used to train the model. These datasets are used to develop and train the model for iRORA and cRORA identification.
  • OCT volumes 108,289 B-scans
  • the same annotator labels each volume with the presence or absence of iRORA and cRORA, resulting in a single annotation per OCT volume.
  • the other dataset is collected for the purposes of validating atrophy criteria (CAM).
  • a single B-scan is selected a priori from 60 different patients.
  • the two datasets are used to assess the performance and generalizability of the model. All datasets are summarized in Table 2 below.
  • Data for large-scale AMD deterioration prediction includes data consisted of OCT volumes as well as EHR data.
  • OCT volumes that could not be linked to the EHR are not used for this analysis.
  • the study cohort consists of 14,615 OCT volumes of 4,182 patients. 1,486 OCT volumes are taken of eyes with current Wet AMD present (462 patients), and 4,246 volumes are taken from 1,568 patients who have not developed Wet AMD within 2 years during the study period. The average age of patients in the study cohort is 66.45 (SD 16.81), and 53.8% are female. 65% of patients reported having never been smokers, and 2.8% were then-current smokers.
  • Convolutional neural networks are utilized in iRORA and cRORA identification.
  • The ResNet18 implementation provided by PyTorch is adapted for iRORA and cRORA identification.
  • The 1,000 output units, which correspond to ImageNet categories, are replaced.
  • a layer with 32 hidden units is added, connected to a final layer with two output units, corresponding to iRORA and cRORA respectively.
  • a sigmoidal activation function can be assigned to these output units.
  • the input to the model can be a single B-scan, which is resampled from its original resolution to 256px x 256px for compatibility with pretrained models, to improve computational efficiency, and to reduce the parameter space.
  • Model development and within-sample evaluation are performed using eightfold cross-validation on the training dataset.
  • the data are split at the patient level to ensure no signal leakage in case a patient has multiple scans or visits.
  • the data are partitioned into a training and validation set.
  • a new model is initialized and trained using the training set, and performance on the validation set is recorded.
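The patient-level splitting described above can be implemented with scikit-learn's `GroupKFold`, which guarantees no patient contributes scans to both the training and validation folds. The patient IDs below are synthetic.

```python
import numpy as np
from sklearn.model_selection import GroupKFold

scans = np.arange(20)                  # 20 toy scans
patients = np.repeat(np.arange(5), 4)  # 4 scans per patient

splits = list(GroupKFold(n_splits=5).split(scans, groups=patients))
for train_idx, val_idx in splits:
    # no patient appears in both the training and validation fold
    assert set(patients[train_idx]).isdisjoint(patients[val_idx])
```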
  • model hyperparameters are determined, a final model is trained.
  • the data from the 71 patients are again randomly split at the patient level into a roughly 80% training set and 20% validation set for model parameter fitting.
  • Resnet models are initialized by pre-training on publicly available OCT B-scans.
  • a randomly initialized Resnet18 is trained to predict the four categories available in the public dataset. Then, the four output units are discarded and the remaining Resnet18 backbone is used as the initialization for all models trained.
  • each B-scan is subjected randomly to the following changes during each training epoch: (1) translation by up to 16 pixels in any direction, (2) random horizontal flipping, (3) random rotation by up to 8 degrees either clockwise or anticlockwise.
  • Model parameters are optimized using stochastic gradient descent with the Adam optimizer as implemented in the fastai library.
  • the learning rate is set at 2e-4 as determined by the lr_find() procedure also implemented by fastai.
  • Model training is carried out for 50 epochs (i.e., passes through the training set) or until the validation loss stops decreasing for 10 consecutive epochs (early stopping).
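The schedule above (Adam at 2e-4, up to 50 epochs, early stopping with patience 10) can be sketched in plain PyTorch rather than fastai. The single-batch loop, the loss function, and the toy linear model are illustrative assumptions.

```python
import torch
import torch.nn as nn

def train(model, train_batch, val_batch, max_epochs=50, patience=10, lr=2e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    best_val, stale = float("inf"), 0
    for _ in range(max_epochs):
        model.train()
        x, y = train_batch
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
        model.eval()
        with torch.no_grad():
            vx, vy = val_batch
            val_loss = loss_fn(model(vx), vy).item()
        if val_loss < best_val:
            best_val, stale = val_loss, 0
        else:
            stale += 1
            if stale >= patience:    # early stopping
                break
    return best_val

net = nn.Linear(8, 1)  # toy stand-in for the ResNet18 model
tb = (torch.randn(32, 8), torch.randint(0, 2, (32, 1)).float())
vb = (torch.randn(16, 8), torch.randint(0, 2, (16, 1)).float())
best = train(net, tb, vb)
```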
  • SLIVER-net has shown that deep learning models are able to successfully identify the presence or absence of AMD-related biomarkers within OCT volumes.
  • SLIVER-net is used to generate a score for each OCT volume for the following OCT features: Hyper-reflective Foci (HRF), High Central Drusen Volume, Subretinal Drusenoid Deposits (SDD), Reticular Pseudodrusen (RPD), and Hyporeflective Drusen Core (hDC).
  • Electronic health record data is available for these patients.
  • diagnosis codes and demographic information for patients can be extracted.
  • Diagnosis codes assigned during the concurrent and future visit are used to define outcomes for the imputation and prediction analyses, respectively, and diagnosis codes assigned prior to the visits are aggregated to define cardiovascular comorbidities as defined by the Chronic Condition Warehouse.
  • AMD-subtypes are defined using the following ICD-10 codes: Dry AMD (H35.31XX), Wet AMD (H35.32XX). Dry AMD is further separated into Early Dry AMD (H35.31X1 ) and Intermediate Dry AMD (H35.31X2) for the Ordinal Regression analysis.
  • the demographic factors extracted are Age, Sex, Race, Ethnicity.
  • EHR and machine-read OCT features are used to predict progression of disease, specifically progression to exudative (wet) AMD.
  • the study cohort can be defined as follows. OCT volumes corresponding to eyes that do not already have Wet AMD can be identified. If volumes are acquired over multiple visits, a single OCT volume can be randomly sampled. Then, ICD-10 codes corresponding to that eye up to two years following the volume acquisition can be collected. If a Wet AMD diagnosis is found for the respective eye, the eye may be considered to have converted to Wet AMD and the time to first conversion is also recorded. If the eye does not convert, a random visit can be chosen for the purposes of recording the time to diagnosis.
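The conversion-labeling logic above can be sketched with pandas on a toy visit table; the column names and the 730-day encoding of the two-year window are illustrative assumptions.

```python
import pandas as pd

# toy visit history: one row per (eye, visit)
visits = pd.DataFrame({
    "eye_id": [1, 1, 2, 2],
    "days_after_baseline": [0, 400, 0, 900],
    "icd10": ["H35.3110", "H35.3210", "H35.3110", "H35.3110"],
})

def converted_within(df, days=730):
    """True if a wet-AMD code (H35.32xx) appears within the follow-up window."""
    future = df[(df.days_after_baseline > 0) & (df.days_after_baseline <= days)]
    return bool(future.icd10.str.startswith("H35.32").any())

labels = visits.groupby("eye_id").apply(converted_within)
# eye 1 converts at day 400; eye 2's wet-AMD-free visit at day 900 is out of window
```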
  • the current AMD status (healthy or dry AMD) and time to next visit can be used as additional inputs for the models.
  • Four different combinations of feature groups can be compared.
  • the current state model uses only the current AMD status and time to next exam (defined above);
  • the risk factors model uses the EHR risk factors as well as the time to the next exam;
  • the biomarkers model uses only the machine-read OCT biomarkers, and the combined model incorporates all of the features available.
  • a threshold for a balance of sensitivity and specificity can be found by maximizing the true positive rate while minimizing the false positive rate, i.e., finding a point on the ROC curve close to the top-left corner.
  • Performance metrics can be acquired in the following manner: in one round of validation, the data set is split into training and validation sets.
  • the logistic regression model can be trained on the training set, after which it is used to generate predictions on that same training set. Based on the performance metrics of these predictions, 3 operating thresholds (balanced, high sensitivity, high specificity) are determined. The trained model then generates predictions between 0 and 1 for the validation set, which are binarized according to the thresholds. From the binarized predictions, the rest of the performance metrics can be calculated. 8 rounds of such validation can be performed (i.e., 8-fold cross-validation), and the 3 operating thresholds are calculated as the means of the thresholds determined during cross-validation.
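The three operating-threshold choices can be sketched from an ROC curve; the sensitivity and specificity targets follow the ~80%/~90% values stated earlier, and the tie-breaking rules are assumptions.

```python
import numpy as np
from sklearn.metrics import roc_curve

def operating_thresholds(y_true, y_score, sens_target=0.8, spec_target=0.9):
    fpr, tpr, thr = roc_curve(y_true, y_score)
    # balanced: point on the ROC curve closest to the top-left corner
    balanced = thr[np.argmin(fpr ** 2 + (1 - tpr) ** 2)]
    # high sensitivity: loosest threshold reaching the sensitivity target
    hi_sens = thr[np.argmax(tpr >= sens_target)]
    # high specificity: strictest usable threshold keeping FPR low enough
    hi_spec = thr[np.where(fpr <= 1 - spec_target)[0][-1]]
    return balanced, hi_sens, hi_spec

# toy scores with partial class overlap
y_true = np.array([0] * 50 + [1] * 50)
y_score = np.concatenate([np.linspace(0.0, 0.6, 50), np.linspace(0.4, 1.0, 50)])
bal, hs, hp = operating_thresholds(y_true, y_score)
```

Validation-set predictions are then binarized against each threshold before computing the remaining metrics.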
  • the logistic regression framework can be applied to diagnose the current Wet AMD status of each OCT volume.
  • two feature sets are compared.
  • (1) EHR-derived risk factors: Age, Smoking Status, Race/Ethnicity, Sex, and chronic comorbidities;
  • (2) EHR-derived risk factors plus machine-read OCT AMD risk factors: High Central Drusen Volume, hDC, SDD, RPD, and HRF. All analyses are performed using Python, particularly the Scikit-learn and Statsmodels packages.

Abstract

Deep learning methods and systems for detecting biomarkers within optical coherence tomography (OCT) volumes are described. Embodiments predict the presence or absence of clinically useful biomarkers in OCT images using deep neural networks.
PCT/US2022/079182 2021-11-02 2022-11-02 Methods and systems for biomarker prediction using optical coherence tomography WO2023081729A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163263437P 2021-11-02 2021-11-02
US63/263,437 2021-11-02

Publications (1)

Publication Number Publication Date
WO2023081729A1 true WO2023081729A1 (fr) 2023-05-11

Family

ID=86242180

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/079182 WO2023081729A1 (fr) 2021-11-02 2022-11-02 Methods and systems for biomarker prediction using optical coherence tomography

Country Status (1)

Country Link
WO (1) WO2023081729A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070258630A1 (en) * 2006-05-03 2007-11-08 Tobin Kenneth W Method and system for the diagnosis of disease using retinal image content and an archive of diagnosed human patient data
US9737205B2 (en) * 2013-07-31 2017-08-22 The Board Of Trustees Of The Leland Stanford Junior University Method and system for evaluating progression of age-related macular degeneration
US20180068083A1 (en) * 2014-12-08 2018-03-08 20/20 Gene Systems, Inc. Methods and machine learning systems for predicting the likelihood or risk of having cancer
WO2020056454A1 (fr) * 2018-09-18 2020-03-26 MacuJect Pty Ltd Procédé et système d'analyse d'images d'une rétine
WO2021151077A1 (fr) * 2020-01-24 2021-07-29 The Regents Of The University Of California Prédiction de biomarqueurs à l'aide d'une tomographie par cohérence optique



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22891025

Country of ref document: EP

Kind code of ref document: A1