WO2022261513A1 - Methods and systems for detection and prediction of chronic kidney disease and type 2 diabetes using deep learning models - Google Patents

Methods and systems for detection and prediction of chronic kidney disease and type 2 diabetes using deep learning models

Info

Publication number
WO2022261513A1
Authority
WO
WIPO (PCT)
Prior art keywords
patient
ckd
fundus images
cohort
t2dm
Prior art date
Application number
PCT/US2022/033125
Other languages
English (en)
Inventor
Kang Zhang
Yuanxu GAO
Original Assignee
Kang Zhang
Priority date
Filing date
Publication date
Application filed by Kang Zhang filed Critical Kang Zhang
Publication of WO2022261513A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/12 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions, for looking at the eye fundus, e.g. ophthalmoscopes
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/14 Arrangements specially adapted for eye photography

Definitions

  • CKD chronic kidney disease
  • CKD and diabetes pose major health care challenges.
  • CKD is a highly prevalent disease and affects approximately 8–16% of the world population [1, 2].
  • CKD is a serious public health problem, as its adverse outcomes are not only limited to end-stage renal failure requiring dialysis or transplantation but also include vascular complications of impaired kidney function [3].
  • Type 2 diabetes mellitus (T2DM) is another major common chronic disease globally, with an estimated prevalence of 9.3% (463 million affected individuals) in 2019. According to the International Diabetes Federation, its prevalence has been increasing steadily in recent years and will reach an estimated 700 million by 2045 [4]. According to the US Centers for Disease Control and Prevention, diabetes is one of the leading causes of mortality globally. It is also a leading risk factor for many other common medical problems, including cardiovascular disease, kidney failure and blindness [5,6,7]. In many of these conditions, early diagnosis and treatment are crucial in reducing the associated comorbidities and mortality.
  • CNNs convolutional neural networks
  • advances in CNNs and transfer learning have facilitated efficient and accurate image-based diagnosis well beyond human capabilities [16,18].
  • CNNs convolutional neural networks
  • retinal images, which can be acquired rapidly and non-invasively, may have the potential to provide ‘point of care’ biomarkers for systemic disease.
  • while CKD detection using a retinal image-based AI system has been reported [21], prediction of CKD onset from a normal baseline and diagnosis of early CKD using AI have not yet been described and would have an important impact on disease prevention and favorable outcomes.
  • NAPI Neural Network API
  • Apple iPhone X smartphones now deploy a range of machine-learning algorithms including object tracking [24], face detection and recognition [25].
  • smartphone ownership is widespread worldwide. For instance, in Nigeria, the physician-to-patient ratio is 1:2,660, while smartphone ownership is 1:3.5.
  • An AI-based smartphone diagnostic system is thus an attractive way to broaden health care access by encouraging patients to self-monitor and allowing doctors to diagnose and follow-up with patients remotely.
  • the present disclosure provides a method comprising using at least one computer processor to: receive one or more retinal fundus images of a patient; apply a machine-learning classifier having been trained using a dataset of retinal fundus images of a patient cohort that have been classified regarding a chronic kidney disease (CKD) status, to classify the received one or more fundus images of the patient to thereby diagnose whether the patient has CKD.
  • the diagnosis result can then be provided to a user in a user readable form.
  • CKD chronic kidney disease
  • the present disclosure provides a method comprising using at least one computer processor to: receive one or more retinal fundus images of a patient; apply a machine-learning classifier having been trained using a dataset of retinal fundus images of a patient cohort that have been classified regarding a Type 2 diabetes mellitus (T2DM) status, to classify the received one or more fundus images of the patient to thereby diagnose whether the patient has T2DM.
  • T2DM Type 2 diabetes mellitus
  • the diagnosis result can then be provided to a user in a user readable form.
  • the present disclosure provides a method comprising using at least one computer processor to: receive one or more retinal fundus images of a patient; apply a machine-learning classifier having been trained using a dataset of retinal fundus images of a longitudinal patient cohort regarding CKD development of the patients in the cohort over a period of time, to predict a likelihood of CKD incidence or progression for the patient in the future.
  • the prediction result can then be provided to a user in a user readable form.
  • the present disclosure provides a method comprising using at least one computer processor to: receive one or more retinal fundus images of a patient; apply a machine-learning classifier having been trained using a dataset of retinal images of a longitudinal patient cohort regarding T2DM development of the patients in the cohort over a period of time, to predict a likelihood of T2DM incidence or progression for the patient in the future.
  • the prediction result can then be provided to a user in a user readable form.
  • the methods herein can further comprise: prior to applying to the machine-learning classifier, enhancing the fundus images by at least one or both of Contrast Limited Adaptive Histogram Equalization (CLAHE) and color normalization techniques.
  • CLAHE Contrast Limited Adaptive Histogram Equalization
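As a concrete illustration of the enhancement step above, the sketch below applies CLAHE and a simple per-channel color normalization with OpenCV. The parameter values (clip limit, tile grid size, target channel statistics) are illustrative assumptions and are not specified in the disclosure.

```python
import cv2
import numpy as np

def enhance_fundus(image_bgr: np.ndarray,
                   use_clahe: bool = True,
                   use_color_norm: bool = True) -> np.ndarray:
    """Enhance a fundus image with CLAHE and/or simple color normalization.

    Parameter values (clip limit, tile size, target mean/std) are illustrative
    assumptions, not values specified in the disclosure.
    """
    img = image_bgr.copy()

    if use_clahe:
        # Apply CLAHE to the luminance channel only, to limit color shifts.
        lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
        l, a, b = cv2.split(lab)
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        lab = cv2.merge((clahe.apply(l), a, b))
        img = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

    if use_color_norm:
        # Per-channel normalization to a fixed target mean/std.
        img = img.astype(np.float32)
        target_mean, target_std = 128.0, 64.0
        for c in range(3):
            mean, std = img[..., c].mean(), img[..., c].std() + 1e-6
            img[..., c] = (img[..., c] - mean) / std * target_std + target_mean
        img = np.clip(img, 0, 255).astype(np.uint8)

    return img
```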
  • the machine-learning classifier further utilizes clinical metadata of the patient, and wherein the machine-learning classifier has been trained using a Cox proportional hazards (CPH) model based on the dataset of retinal fundus images of each of the patients in the cohort in combination with clinical metadata of each of the patients in the cohort.
  • the clinical metadata of a patient can include one or more of the following: age, sex, blood pressure, height, weight, BMI, and hypertension status.
  • the methods further comprise classifying the patient as belonging to one of a plurality of categories having different risks for developing CKD in the future, such as low-risk, medium-risk, and high-risk.
  • the machine-learning classifier comprises a deep learning model.
  • the deep learning model can comprise convolutional neural networks (CNN).
  • CNN convolutional neural networks
  • the diagnosis and/or prediction results of the various embodiments herein can be presented in a user readable form, such as a report or other presentation (e.g., a physical copy, an electronic copy such as a Word, PDF, or other suitable format).
  • the present disclosure provides a system or device including at least one processor, a memory, and non-transitory computer readable storage media encoded with a program including instructions executable by the at least one processor that cause the at least one processor to perform the methods described herein.
  • the present disclosure provides a non-transitory computer readable medium encoded with a program including instructions executable by at least one processor that cause the at least one processor to perform the methods described herein.
  • Figure 1 depicts an overall AI system for the detection and incidence prediction of CKD/T2DM using retinal fundus images according to embodiments of the present disclosure.
  • Figure 2 shows performance of certain embodiments of the AI system in the identification of CKD and early CKD.
  • Figure 3 shows the performance of certain embodiments of the AI system of the present disclosure in assessing eGFR from retinal fundus images.
  • Figure 4 shows Kaplan–Meier plots for the prediction of CKD and advanced+ CKD development according to embodiments of the present disclosure.
  • Figure 5 shows performance of certain embodiments of the AI system of the present disclosure in the identification and incidence prediction of T2DM.
  • Figure 6 shows gradient visualizations of predictions of CKD staging and T2DM using an integrated gradient algorithm of the present disclosure.
  • Figure 7 shows a flowchart of an embodiment of the AI system of the present disclosure with an ensemble of model instances.
  • Figure 8 is a flow diagram describing the datasets used for an embodiment of the disclosed AI system/method for CKD/T2DM detection and incidence prediction.
  • Figure 9 shows a model performance of an embodiment of the AI system/method in assessing GFR/CKD staging using retinal fundus images.
  • Figure 10 shows prediction of fasting blood glucose using retinal fundus images according to embodiments of disclosed subject matter.
  • Figure 11 shows a comparison of the performance of an embodiment of the AI system/method of the present disclosure at detecting T2DM.
  • Figure 12 shows performance of an embodiment of the AI system/method of the present disclosure on an external multi-ethnicity validation cohort.
  • Figure 13 shows a Kaplan Meier plot predicting the incidence of CKD/T2DM using a clinical metadata-only model according to embodiments of disclosed subject matter.
  • Figure 14 shows cumulative hazard functions of three stratified risk subgroups (high-, medium-, and low-risk) using a combined progression prediction model according to embodiments of disclosed subject matter.
  • Figure 15 shows the prediction of the development of CKD and T2DM using time-dependent ROC curves.
  • Figure 16 shows performance of embodiments of disclosed subject matter on identifying documented CKD/T2DM in test sets.
  • Figure 17 shows schematically a hand-held smartphone camera attachment.
  • Figure 18 schematically illustrates a computer system or platform that is programmed or otherwise configured to implement methods provided herein.
  • the present disclosure provides a method comprising using at least one computer processor to: receive one or more retinal fundus images of a patient; apply a machine-learning classifier having been trained using a dataset of retinal fundus images of a patient cohort that have been classified regarding a chronic kidney disease (CKD) status, to classify the received one or more fundus images of the patient to thereby diagnose whether the patient has CKD.
  • CKD chronic kidney disease
  • the present disclosure provides a method comprising using at least one computer processor to: receive one or more retinal fundus images of a patient; apply a machine-learning classifier having been trained using a dataset of retinal fundus images of a patient cohort that have been classified regarding a Type 2 diabetes mellitus (T2DM) status, to classify the received one or more fundus images of the patient to thereby diagnose whether the patient has T2DM.
  • T2DM Type 2 diabetes mellitus
  • the methods herein can further comprise: prior to applying to the machine-learning classifier, enhancing the fundus images by at least one or both of Contrast Limited Adaptive Histogram Equalization (CLAHE) and color normalization techniques.
  • the machine-learning classifier further utilizes clinical metadata of the patient, and wherein the machine-learning classifier has been trained using a Cox proportional hazards (CPH) model based on the dataset of retinal fundus images of each of the patients in the cohort in combination with clinical metadata of each of the patients in the cohort.
  • the clinical metadata of a patient can include one or more of the following: age, sex, blood pressure, height, weight, BMI, and hypertension status.
  • the methods further comprise classifying the patient as belonging to one of a plurality of categories having different risks for developing CKD in the future, such as low-risk, medium-risk, and high-risk.
  • the machine-learning classifier comprises a deep learning model.
  • the deep learning model can comprise convolutional neural networks (CNN).
  • CNN convolutional neural networks
  • the present disclosure provides a system or device including at least one processor, a memory, and non-transitory computer readable storage media encoded with a program including instructions executable by the at least one processor that cause the at least one processor to perform the methods described herein.
  • the present disclosure provides a non-transitory electronic storage and computer readable medium encoded with a program including instructions executable by at least one processor that cause the at least one processor to perform the methods described herein.
  • the systems, devices, media, methods and applications described herein include a digital processing device.
  • the digital processing device is part of a point-of-care device integrating the diagnostic / prediction software described herein.
  • the medical diagnostic device comprises imaging equipment such as imaging hardware (e.g. a camera) for capturing medical data (e.g. medical images).
  • the equipment may include optic lenses and/or sensors to acquire images at hundreds or thousands of times magnification.
  • the medical imaging device comprises a digital processing device configured to perform the methods described herein.
  • the digital processing device includes one or more processors or hardware central processing units (CPU) that carry out the device's functions.
  • the digital processing device further comprises an operating system configured to perform executable instructions.
  • the digital processing device is optionally connected to a computer network.
  • the digital processing device is optionally connected to the Internet such that it accesses the World Wide Web.
  • the digital processing device is optionally connected to a cloud computing infrastructure. In other embodiments, the digital processing device is optionally connected to an intranet. In other embodiments, the digital processing device is optionally connected to a data storage device.
  • suitable digital processing devices include, by way of non-limiting examples, server computers, desktop computers, laptop computers, notebook computers, sub-notebook computers, netbook computers, set-top computers, handheld computers, Internet appliances, mobile smartphones, tablet computers, personal digital assistants, video game consoles, and vehicles. Those of skill in the art will recognize that many smartphones are suitable for use in the system described herein.
  • the system, media, methods and applications described herein include one or more non-transitory computer readable storage media encoded with a program including instructions executable by the operating system of an optionally networked digital processing device.
  • a computer readable storage medium is a tangible component of a digital processing device.
  • a computer readable storage medium is optionally removable from a digital processing device.
  • a computer readable storage medium includes, by way of non-limiting examples, CD-ROMs, DVDs, flash memory devices, solid state memory, magnetic disk drives, magnetic tape drives, optical disk drives, cloud computing systems and services, and the like.
  • the program and instructions are permanently, substantially permanently, semi-permanently, or non-transitorily encoded on the media.
  • the system, media, methods and applications described herein include at least one computer program, or use of the same.
  • a computer program includes a sequence of instructions, executable in the digital processing device's CPU, written to perform a specified task.
  • Computer readable instructions may be implemented as program modules, such as functions, objects, Application Programming Interfaces (APIs), data structures, and the like, that perform particular tasks or implement particular abstract data types.
  • APIs Application Programming Interfaces
  • a computer program may be written in various versions of various languages.
  • the functionality of the computer readable instructions may be combined or distributed as desired in various environments.
  • a computer program comprises one sequence of instructions. In some embodiments, a computer program comprises a plurality of sequences of instructions. In some embodiments, a computer program is provided from one location. In other embodiments, a computer program is provided from a plurality of locations. In various embodiments, a computer program includes one or more software modules. In various embodiments, a computer program includes, in part or in whole, one or more web applications, one or more mobile applications, one or more standalone applications, one or more web browser plug-ins, extensions, add-ins, or add-ons, or combinations thereof. In some embodiments, a computer program includes a web application.
  • a web application in various embodiments, utilizes one or more software frameworks and one or more database systems.
  • the systems, devices, media, methods and applications described herein include software, server, and/or database modules, or use of the same.
  • software modules are created by techniques known to those of skill in the art using machines, software, and languages known to the art.
  • the software modules disclosed herein are implemented in a multitude of ways.
  • a software module comprises a file, a section of code, a programming object, a programming structure, or combinations thereof.
  • a software module comprises a plurality of files, a plurality of sections of code, a plurality of programming objects, a plurality of programming structures, or combinations thereof.
  • the one or more software modules comprise, by way of non-limiting examples, a web application, a mobile application, and a standalone application.
  • software modules are in one computer program or application. In other embodiments, software modules are in more than one computer program or application.
  • software modules are hosted on one machine. In other embodiments, software modules are hosted on more than one machine. In further embodiments, software modules are hosted on cloud computing platforms. In some embodiments, software modules are hosted on one or more machines in one location.
  • Model performance in assessing eGFR from retinal fundus images. a–c, Correlation analysis of the predicted GFR versus actual eGFR generated using the regression model.
  • d–f Bland– Altman plot for the agreement between the predicted GFR and eGFR.
  • the x axis represents the mean of predicted GFR and eGFR, and the y axis represents the difference between the two measurements.
  • the performance of the AI system on the internal test set (d), external test set 1 (e) and external test set 2, a prospective point-of-care pilot study (f).
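The Bland–Altman construction used in this caption (mean of the two measurements on the x axis, their difference on the y axis) can be reproduced generically; the sketch below is a standard NumPy/Matplotlib implementation, not code from the disclosure.

```python
import numpy as np
import matplotlib.pyplot as plt

def bland_altman(predicted: np.ndarray, actual: np.ndarray, ax=None):
    """Bland-Altman agreement plot: mean of the two measurements on x,
    their difference on y, with bias and 95% limits of agreement."""
    ax = ax or plt.gca()
    mean = (predicted + actual) / 2.0
    diff = predicted - actual
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)          # 95% limits of agreement
    ax.scatter(mean, diff, s=8, alpha=0.4)
    for y in (bias, bias - loa, bias + loa):
        ax.axhline(y, linestyle="--")
    ax.set_xlabel("Mean of predicted GFR and eGFR")
    ax.set_ylabel("Predicted GFR - eGFR")
    return bias, (bias - loa, bias + loa)
```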
  • the y axis is the survival probability, measuring the probability of not progressing to a disease outcome.
  • the x axis is the time in months. Survival curves in different colours represent the high-risk, medium-risk and low-risk subgroups stratified by the upper and lower quartiles in the tuning dataset. Shaded areas are 95% CIs.
  • a,b The incidence of CKD in the internal longitudinal test set (a) and the external longitudinal test set (b).
  • c,d The incidence of advanced+ CKD (corresponding to stage 3 or more severe) in the internal longitudinal test set (c) and the external longitudinal test set (d).
  • P value is computed using a one-sided log-rank test between all groups.
  • Figure 5. Performance of the AI system in the identification and incidence prediction of T2DM.
  • d–f Correlation analysis of predicted blood glucose versus actual blood glucose on: the internal test set (d), external test set 1 (e) and external test set 2: point-of-care study (f).
  • FIG. 7 The flowchart of the AI platform with an ensemble of model instances.
  • CLAHE contrast-limited adaptive histogram equalization
  • FIG. 8 Flow diagram describing the datasets used for our AI system for CKD/T2DM detection and incidence prediction. Patient inclusion and exclusion criteria were also considered.
  • Figure 9. Model performance in assessing GFR/CKD staging using retinal fundus images. a-c, Bland-Altman plot for predicted and actual eGFR after calibrating the model output. AI performance on a, the internal test set, b, the external test set 1 and c, the external test set 2: “point-of-care” study.
  • ROC curves showing performance of binary classification models in the internal test set.
  • a and b Progression to CKD on a, internal longitudinal test set, b, external longitudinal test set.
  • FIG. 18 schematically illustrates a computer system or platform that is programmed or otherwise configured to implement methods provided herein.
  • the system comprises a computer system 2101 that is programmed or otherwise configured to carry out executable instructions such as for carrying out image analysis.
  • the computer system includes at least one CPU or processor 2105.
  • the computer system includes at least one memory or memory location 2110 and/or at least one electronic storage unit 2115.
  • the computer system comprises a communication interface 2120 (e.g. network adaptor).
  • the computer system 2101 can be operatively coupled to a computer network (“network") 2130 with the aid of the communication interface 2120.
  • an end user device 2135 is used for uploading image data such as retinal fundus images, general browsing of the database 2145, or performance of other tasks.
  • the database 2145 is one or more databases separate from the computer system 2101.
  • Example 1. Methods: Dataset characteristics. To develop an AI system for the detection of CKD and T2DM, fundus image data were collected from the CC-FII, which included the following participants: COACS in Tangshan City, Hebei province; the Peking University First affiliated Hospital and the Peking University Second affiliated Hospital, both in Beijing; West China Hospital in Chengdu, Sichuan province; Renji Hospital in Chongqing; Kunshan Hospital of Jiangsu University, Kunshan, Jiangsu province; and Zhong Shan Ophthalmic Center of Sun Yat-sen University and Ncapturing Hospital, both in Guangzhou, Guangdong province.
  • the COACS is a community-based, prospective study to investigate how suboptimal health status contributes to the incidence of non-communicable chronic diseases in Chinese adults [32].
  • This COACS study has two phases: a cross-sectional survey followed by a longitudinal study. The participants were recruited from Tangshan city, which is a large, modern industrial city and adjoins two mega cities: Beijing and Tianjin. In phase I, all participants underwent clinical laboratory measurements.
  • for CKD and T2DM detection, we used retinal fundus images from retrospective datasets.
  • the first one, a cross-sectional cohort (CC-FII-C) from CC-FII, included a total of 86,312 fundus images from 43,156 participants as the developmental dataset of our AI models for systemic disease detection.
  • External test set 1 included participants undergoing an annual health check at the Zhong Shan Ophthalmic Center of Sun Yat-sen University and Ntriggering Hospital, both in Guangzhou, Guangdong province. Study participants also subsequently underwent ophthalmological examinations with fundus imaging. Only retinal fundus images from the 8,059 participants of external test set 1 were included in this study.
  • external test set 2 is a prospective point-of-care setting, for the evaluation of the generalizability of the AI system with smartphone-based devices.
  • the first longitudinal dataset, CC-FII-L, is a subset of CC-FII that consisted of 10,269 participants undergoing routine annual health checks with a follow-up period of six years. All the participants from the CC-FII-L were randomly divided into a developmental set and an internal longitudinal test set (3,376 individuals) in an 8:2 ratio. Another longitudinal dataset, an external longitudinal test set, was used for external validation of the AI system for incidence prediction of the systemic diseases. This is a population-based study of Chinese participants from Beijing, China, which recruited patients from hospitals or health centres for annual health checks, including DR screening at Peking University’s First affiliated Hospital and Third affiliated Hospital.
  • Phase I graders consisted of individuals trained by ophthalmologists and evaluated to perform with at least 95% accuracy, as determined by a quiz consisting of 1,000 fundus images of various retinal diseases.
  • Phase II graders consisted of ophthalmologists who individually reviewed every image classified by phase I graders. To check consistency among phase II graders, 20% of images were randomly selected and reviewed by three senior retinal specialists. The second tier of five ophthalmologists independently read and verified the true labels for each image. To account for disagreement, the evaluation test set was also checked by expert consensus.
  • for kidney disease clinical classification, based on the Kidney Disease: Improving Global Outcomes (KDIGO) Clinical Practice Guideline, the staging and actual risk of adverse outcomes of kidney disease are stratified by renal glomerular filtration function, defined as the GFR categories [33], which equals the total amount of fluid filtered through the functioning nephrons per unit of time [34].
  • eGFR The Estimated Glomerular Filtration Rate (eGFR) in this study was based on the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) Equation. This equation has been extensively validated in Chinese and Asian populations [35,36].
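The disclosure cites the CKD-EPI equation without reproducing it. For orientation, a minimal sketch of the widely used 2009 creatinine-based CKD-EPI formula is given below; the coefficients come from the published equation, and whether this exact variant was used here is an assumption.

```python
def ckd_epi_egfr(scr_mg_dl: float, age_years: float, female: bool, black: bool = False) -> float:
    """eGFR (ml/min/1.73 m^2) from the 2009 CKD-EPI creatinine equation.

    The coefficients below are the published 2009 creatinine-based form and
    are included for illustration only.
    """
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age_years)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# Example: a 55-year-old woman with serum creatinine 1.1 mg/dl
# print(round(ckd_epi_egfr(1.1, 55, female=True), 1))
```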
  • CKD was diagnosed as an eGFR of more than 60 ml min⁻¹ per 1.73 m² with albuminuria or less than 60 ml min⁻¹ per 1.73 m², confirmed in at least two visits separated by three months. Healthy controls were defined as eGFR above 60 ml min⁻¹ per 1.73 m² without albuminuria, checked by a negative urine dip-stick test.
  • stage 1: eGFR ≥90 ml min⁻¹ per 1.73 m² with albuminuria
  • stage 2: eGFR 60–89 ml min⁻¹ per 1.73 m² with albuminuria
  • stage 3: eGFR 30–59 ml min⁻¹ per 1.73 m²
  • stage 4+: eGFR <30 ml min⁻¹ per 1.73 m²
  • stage 1 and 2 we defined potential or mild kidney impairment, called early CKD, as stages 1 and 2.
  • Stage 3 is denoted as advanced CKD.
  • CKD management aims to prevent end-stage kidney failure, which is associated with the need for renal dialysis or transplantation and with increased mortality. Detection of and early intervention in severe+ CKD, defined by an eGFR below 30 ml min⁻¹ per 1.73 m² (corresponding to stage 4 or above), are also crucial. T2DM was diagnosed by a fasting blood glucose ≥7.0 mmol l⁻¹ measured at least two times, an HbA1c value of 6.5% or more, and/or a history of drug treatment for diabetes. For the detection of T2DM patients, there exist two subsets of fundus images: a DR group and an NDR group.
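The staging rules listed above translate directly into a small helper; the sketch below encodes them for a single visit and does not model the requirement of confirmation over at least two visits separated by three months.

```python
def ckd_stage(egfr: float, albuminuria: bool) -> str:
    """CKD staging as described above (eGFR in ml/min/1.73 m^2).

    Stages 1-2 require albuminuria; stages 3+ are assigned on eGFR alone.
    """
    if egfr >= 90:
        return "stage 1 (early CKD)" if albuminuria else "no CKD"
    if egfr >= 60:
        return "stage 2 (early CKD)" if albuminuria else "no CKD"
    if egfr >= 30:
        return "stage 3 (advanced CKD)"
    return "stage 4+ (severe+ CKD)"
```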
  • the image feature vector derived from the CNN model was concatenated with the clinical feature of the same patient.
  • a multilayer perceptron (MLP) took these features as input for classification.
  • MLP multilayer perceptron
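One plausible realization of the feature-fusion step described above (CNN image features concatenated with clinical features and fed to an MLP) is sketched below in PyTorch. The hidden-layer width and the use of ImageNet-pretrained weights are assumptions for illustration, not values taken from the disclosure.

```python
import torch
import torch.nn as nn
import torchvision

class FundusMetadataClassifier(nn.Module):
    """ResNet-50 image features concatenated with clinical metadata,
    followed by a small MLP classifier. Layer sizes are illustrative."""

    def __init__(self, n_metadata: int, n_classes: int = 2):
        super().__init__()
        backbone = torchvision.models.resnet50(weights="IMAGENET1K_V1")
        feat_dim = backbone.fc.in_features          # 2048 for ResNet-50
        backbone.fc = nn.Identity()                 # expose pooled features
        self.backbone = backbone
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + n_metadata, 256),
            nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, image: torch.Tensor, metadata: torch.Tensor) -> torch.Tensor:
        features = self.backbone(image)             # (B, 2048)
        fused = torch.cat([features, metadata], dim=1)
        return self.mlp(fused)                      # class logits
```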
  • ResNet-50, which is a convolutional neural network that is 50 layers deep [39], was used as the backbone.
  • ResNet-50 is a five-stage network with one convolution and four identity blocks, which utilizes skip connections to overcome the degradation problem of deep-learning models.
  • for eGFR and fasting blood glucose, a fully connected layer with one scalar output was used as the final layer in the ResNet-50 model.
  • for binary classification tasks, an additional softmax layer following a fully connected layer was attached to the model.
  • Retraining consisted of loading the convolutional layers with pretrained weights, newly initializing additional layers for our regression and binary classification tasks and training models on the corresponding development sets.
  • the MLP was jointly trained with the CNN.
  • the MSE loss was used as an objective function for the regression tasks of prediction of continuous values and the binary cross entropy loss was used for binary classification tasks.
  • Training of models by back-propagation of errors was performed in batches of 32 images resized to 512 × 512 pixels for 50 epochs with a learning rate of 10⁻³. Training was performed using the Adam optimizer with a weight decay of 10⁻⁶.
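Putting the stated hyperparameters together (ResNet-50 backbone, scalar regression head, MSE loss, Adam with learning rate 10⁻³ and weight decay 10⁻⁶, batches of 32 images at 512 × 512, 50 epochs), a minimal PyTorch training sketch could look as follows; the data pipeline and device handling are assumptions added for completeness.

```python
import torch
import torch.nn as nn
import torchvision
from torch.utils.data import DataLoader

# Regression head: single scalar output (eGFR or fasting blood glucose).
# For the binary classification tasks, a two-way head with a cross-entropy
# loss would replace the scalar head and MSE loss.
model = torchvision.models.resnet50(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-6)

def train(model, loader: DataLoader, epochs: int = 50, device: str = "cuda"):
    model.to(device).train()
    for _ in range(epochs):
        for images, targets in loader:      # images resized to 512x512, batch size 32
            images, targets = images.to(device), targets.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images).squeeze(1), targets)
            loss.backward()
            optimizer.step()
```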
  • Model ensemble To improve the overall performance of the AI, we applied a model ensemble. For each task, we trained four model instances with different processed fundus images as input. Each input image was pre-processed into three variations by applying CLAHE only, normalization only, and both CLAHE and normalization.
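A minimal sketch of the ensembling idea, assuming each trained instance exposes a per-image inference call (here a hypothetical `predict` method) and that per-image scores are averaged into a single patient-level score:

```python
import numpy as np

def ensemble_patient_score(model_instances, images_per_variant) -> float:
    """Average per-image predictions over model instances and over a patient's images.

    `model_instances` are the trained instances (one per preprocessing variant) and
    `images_per_variant[k]` holds the patient's images preprocessed for instance k.
    Both names, and the per-image `predict` call, are hypothetical placeholders.
    """
    scores = [
        float(model.predict(image))            # hypothetical per-image inference
        for model, images in zip(model_instances, images_per_variant)
        for image in images
    ]
    return float(np.mean(scores))
```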
  • for T2DM incidence prediction, time-to-event data were used: the development of T2DM was defined as documented T2DM incidence (the end point) within the yearly clinical follow-up.
  • the CPH models were fitted on the training and tuning sets using variables based on the metadata and the fundus image-based risk score.
  • the metadata-based model comprised age, sex, blood pressure, height, weight, BMI, hypertension and T2DM.
  • the image-based risk score is the predicted z-score (standard score) of the first visit generated from the CKD/T2DM detection model and is used to predict progression risks of patients in combination with metadata.
  • the patients are triaged into three groups: low, medium, and high risk according to the upper and lower quartiles of predicted risk scores in the tuning set, respectively.
  • the risk scores were treated as categorical variables according to quartiles during the incidence analysis on validation sets (Table 2).
  • Kaplan–Meier curves were constructed for the risk groups, and the significance of differences between group curves was computed using the log-rank test.
  • Time-dependent ROC curves [40] were used to quantify model performance on validation sets at the time of interest. ROC curves were constructed at a landmark time from the predicted risk scores generated by the model for the relevant patients. Univariable and multivariable CPH models were fitted.
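A hedged sketch of this survival workflow using the lifelines library is shown below, assuming a DataFrame `df` with illustrative column names for follow-up time in months, event indicator, clinical metadata (already numerically encoded) and the image-based risk score; the quartile-based three-way stratification and log-rank comparison follow the description above.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import multivariate_logrank_test

# Illustrative covariate names; the real feature set follows the disclosure.
covariates = ["age", "sex", "sbp", "bmi", "hypertension", "image_risk_score"]

cph = CoxPHFitter()
cph.fit(df[covariates + ["months", "event"]], duration_col="months", event_col="event")

# Stratify patients into low/medium/high risk by the quartiles of the
# predicted partial hazard (quartiles taken from the tuning set in practice).
risk = cph.predict_partial_hazard(df[covariates])
q1, q3 = risk.quantile([0.25, 0.75])
df["group"] = pd.cut(risk, [-float("inf"), q1, q3, float("inf")],
                     labels=["low", "medium", "high"])

# Log-rank test across the three risk groups.
print(multivariate_logrank_test(df["months"], df["group"], df["event"]).p_value)
```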
  • Ii is the integrated gradients for pixel i
  • x is the input image
  • x′ is the baseline image
  • xi and x′i are the values of pixel i in x and x′, respectively
  • f is the model to be visualized.
  • the saliency maps generated by integrated gradient indicate the effect of each pixel on the model predictions.
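For reference, the variables defined above correspond to the standard integrated gradients formulation, which under the usual definition reads:

```latex
I_i(x) = (x_i - x'_i)\int_{0}^{1}\frac{\partial f\bigl(x' + \alpha\,(x - x')\bigr)}{\partial x_i}\,d\alpha
```

In practice the path integral is approximated by a finite sum over images interpolated between the baseline x′ and the input x.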
  • the 95% CIs of AUCs were estimated with the non-parametric bootstrap method (1,000 random resamplings with replacement). Sensitivity and specificity were determined by the selected thresholds on the validation set. Predictions of the continuous values of eGFR and blood glucose level were evaluated with regression models. CKD and T2DM detection were evaluated with binary classification models. For each patient, we made a prediction with the AI system at the image level and then averaged the image-level outputs at the patient level for a final prediction for each patient. The ICC was used to assess the agreement between AI-predicted values for the left and right eyes, where CKD stage was measured by predicted eGFRs and diabetes was measured by the log-likelihood ratios of the predicted probability of T2DM presence.
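A generic sketch of the bootstrap AUC confidence interval described above (1,000 resamplings with replacement), using scikit-learn's roc_auc_score; this is a standard recipe, not code from the disclosure.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, y_score, n_boot: int = 1000, seed: int = 0):
    """Non-parametric bootstrap 95% CI for the AUC
    (1,000 random resamplings with replacement, as described above)."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(np.unique(y_true[idx])) < 2:   # need both classes in the resample
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aucs, [2.5, 97.5])
    return roc_auc_score(y_true, y_score), (lo, hi)
```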
  • Results: definitions of CKD and T2DM. Based on international guidelines and previous studies [1,26], diagnosing an individual with CKD relies primarily on eGFR, an index of kidney function, and on renal damage markers (for example, urinary albumin).
  • the presence of CKD is defined by an eGFR ≥60 ml min⁻¹ per 1.73 m² with albuminuria or an eGFR <60 ml min⁻¹ per 1.73 m², confirmed in at least two visits separated by 3 months; healthy controls are defined by an eGFR ≥60 ml min⁻¹ per 1.73 m² without albuminuria.
  • CKD can be divided into five stages depending on severity [27,28].
  • early CKD stages 1 and 2
  • advanced CKD stage 3
  • severe+ CKD stages 4 and 5
  • for T2DM, we utilized images and corresponding clinical data, including laboratory values, from patients who had already been diagnosed with T2DM based on a fasting blood glucose level ≥7.0 mmol l⁻¹ confirmed in at least two visits, a hemoglobin A1c (HbA1c) value ≥6.5%, and/or a history of drug treatment for diabetes.
  • HbA1c hemoglobin A1c
  • the T2DM cohort consisted mainly of individuals with no diabetic retinopathy (NDR) and a small proportion of individuals with diabetic retinopathy (DR) as defined by Early Treatment Diabetic Retinopathy Study (ETDRS) standards [29].
  • NDR no diabetic retinopathy
  • DR diabetic retinopathy
  • ETDRS Early Treatment Diabetic Retinopathy Study
  • CC-FII-C cross-sectional dataset
  • All participants in CC-FII-C were split randomly into mutually exclusive sets for training, tuning and internal testing of the AI algorithm at a 7:1:2 ratio.
  • CC-FII-C cross-sectional dataset
  • the first external cohort consisted of 8,059 non-selected individuals who underwent an annual health check in Guangdong province (Table 1 and Methods).
  • the first dataset (CC-FII-L) for model development is a longitudinal cohort from CC-FII that contained 10,269 individuals from Tangshan City, Hebei province, China, who underwent routine annual health checks during a six-year follow-up period.
  • the CC-FII-L dataset was randomly split into a developmental dataset and a longitudinal validation set (internal longitudinal test set) at a ratio of 8:2.
  • Early CKD is defined by an eGFR >60 ml min⁻¹ per 1.73 m² with albuminuria (a sign of kidney damage), whereas healthy controls have an eGFR >60 ml min⁻¹ per 1.73 m² without albuminuria.
  • AI-based prediction of early CKD from healthy controls is thus a useful means for early disease detection and prevention.
  • the AI system's performance in predicting early CKD followed a similar trend across the three models (Fig. 2d,e).
  • when evaluated using the internal test set from CC-FII-C, the clinical metadata model achieved an AUC of 0.805 (95% CI: 0.772–0.846), the fundus image model achieved 0.839 (95% CI: 0.805–0.868) and the combined model achieved 0.864 (95% CI: 0.837–0.894) (Fig. 2d).
  • the AUCs for predicting the presence of early CKD were 0.800 (95% CI: 0.780–0.824) for the clinical metadata model, 0.829 (95% CI: 0.811–0.849) for the fundus image model and 0.848 (95% CI: 0.828–0.869) for the combined model (Fig.2e).
  • the smartphone camera-captured images led to comparably good CKD detection performance based on fundus images alone.
  • the AI delivered AUCs of 0.817 (95% CI: 0.785–0.842) for the metadata-only model, 0.870 (95% CI: 0.847–0.893) for the fundus-only model and 0.897 (95% CI: 0.855–0.902) for the combined model in external test set 2, the point-of-care cohort (Fig.2c).
  • when tested on the second external test cohort using smartphone-captured images, the AI model achieved non-inferior performance with an ICC of 0.53 (95% CI: 0.50–0.55) (Fig. 3f).
  • Bland–Altman plots showed a negative proportional bias (slope of linear fit); that is, the AI-based model underestimated the eGFR level to a greater extent at high levels of eGFR than at low levels.
  • the models were trained by minimizing the mean-square error (MSE) loss, resulting in a smaller output variance than the variance of the ground-truthed data. This proportional bias (slope of linear fit) was reduced after the model outputs were calibrated to have the same variance as that of the ground-truthed measurements (Fig.9a–c).
  • MSE mean-square error
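One plausible realization of the variance calibration described above is a mean-preserving linear rescaling of the model outputs; the disclosure does not specify the exact procedure, so the sketch below is an assumption.

```python
import numpy as np

def calibrate_variance(pred: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Rescale predictions so their variance matches that of the reference
    (ground-truth) measurements, preserving the prediction mean.

    One plausible realization of the calibration described above; the exact
    procedure used in the disclosure is not specified.
    """
    pred = np.asarray(pred, dtype=float)
    reference = np.asarray(reference, dtype=float)
    scale = reference.std(ddof=1) / (pred.std(ddof=1) + 1e-12)
    return pred.mean() + (pred - pred.mean()) * scale
```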
  • the regression model showed comparable performance to the classification model in the detection of severe+ CKD with an AUC of 0.825 (95% CI: 0.776–0.867) (Fig.9d).
  • the AUC for severe+ CKD identification was 0.842 (95% CI: 0.803–0.892) for the direct classification model and 0.837 (95% CI: 0.788–0.877) for the indirect regression model (Fig.9e).
  • the metadata-based model achieved a C-index of 0.756 (95% CI: 0.699–0.810) on the internal test set and a C-index of 0.651 (95% CI: 0.569– 0.730) on the external test set.
  • the model performance improved to a C-index of 0.845 (95% CI: 0.789–0.910) on the internal test set and of 0.719 (95% CI: 0.627– 0.807) on the external test set.
  • the combined progression prediction model had a statistically significant improvement compared with clinical metadata only-based prediction, as shown in Supplementary Table 5 (permutation test).
  • the operating point of an AI system could be set differently to balance the positive predictive value (PPV), the true positive rate (TPR), the negative predictive value (NPV) and the false-positive rate (FPR).
  • Performance metrics for CKD and T2DM detection in the cohorts were determined by the operating points selected from the tuning dataset (shown in Supplementary Tables 8 and 9).
  • a very-high decision threshold with a relatively higher PPV.
  • 33. Kidney Disease: Improving Global Outcomes (KDIGO) CKD Work Group. KDIGO 2012 clinical practice guideline for the evaluation and management of chronic kidney disease. Kidney Int. Suppl. 3, 1–150 (2013).
  • 34. Levey, A. S., Becker, C. & Inker, L. A. Glomerular filtration rate and albuminuria for detection and staging of acute and chronic kidney disease in adults: a systematic review. JAMA 313, 837–846 (2015).
  • 35. Bikbov, B. et al. Global, regional, and national burden of chronic kidney disease, 1990–2017: a systematic analysis for the Global Burden of Disease Study 2017. Lancet 395, 709–733 (2020).
  • 36.
  • a 22-diopter double-convex aspheric condensing lens (Volk Optics, Ohio, USA) was mounted inside a custom-designed smartphone fundoscope attachment (Supplemental Fig. 12).
  • the plastic components of the fundoscope were computer-designed on SolidWorks with collision simulation, converted to Standard Triangle Language format, and 3D printed (fused filament fabrication) in polylactic acid to a resolution of 100 microns.
  • the lens was stably anchored to the fundoscope cone using a rubber ring and printed lens locking system. Two perpendicular polarizing filters were also incorporated within the optical system to minimize the reflection of the smartphone flashlight from the cornea.
  • the same AI system for detecting CKD or T2DM that was used to analyze images from professional fundus cameras was applied to the handheld fundoscopy-derived images.
  • the performance of the model was evaluated using ROC curves.
  • Imaging protocol using the smartphone attachment. Standard operating procedure for fundus image capture using the smartphone fundoscope attachment:
    1. The pupil is dilated with a drop of 1% tropicamide.
    2. Select ‘New Patient’ from the main program display and enter patient information.
    3. First, hold the iPhone X (Apple Inc, Cupertino, CA, USA) with the fundoscope attachment in the right hand approximately 10 cm from the patient’s eye to obtain a red reflex at the center of the display.
    4.
  • T2DM Type 2 Diabetes Mellitus
  • CKD chronic kidney disease
  • eGFR estimated glomerular filtration rate
  • DR diabetic retinopathy
  • NDR diabetes mellitus with no DR.
  • Supplementary Table 2 AI Performance for detection of CKD or T2DM using logistic regression models on internal and external test sets.
  • Supplementary Table 3 Univariate and multivariate survival analyses of CKD/T2DM conducted using Cox proportional hazards methods (likelihood ratio test).
  • Supplementary Table 4 Incidence rates of advanced+ CKD (per 1,000 person-years) on the internal longitudinal test set and the external longitudinal test set according to the three risk strata of the AI models.
  • Supplementary Table 5 Performance of progression prediction model to CKD or advanced+ CKD event based on the metadata-only model, and the combined model (including fundus images and metadata) on the internal and external test sets.
  • Concordance index (C-index) for right-censored data and 95% confidence intervals (CI) measure the model performance by comparing the progression information (disease labels and progression days) with predicted risk scores. A larger C-index correlates with better progression prediction performance.
  • Supplementary Table 6 Performance of progression prediction model to T2DM event based on the metadata-only model, and the combined model (including fundus images and metadata) on the internal and external test sets.
  • C-index Concordance index for right-censored data and 95% confidence intervals (CI) measure the model performance by comparing the progression information (disease labels and progression days) with predicted risk scores. A larger C-index correlates with better progression prediction performance.
  • Supplementary Table 7 Numbers at risk for the Kaplan–Meier plots illustrating the incidence of CKD/T2DM stratified by three risk subgroups (high-, medium-, and low-risk). T2DM, Type 2 diabetes mellitus; CKD, chronic kidney disease. T is the follow-up time (months).
  • Supplementary Table 8 Performance of the AI system for CKD detection from normal controls using retinal fundus images.
  • Each row represents metrics based on the corresponding operation point set to perform with high NPV and PPV for CKD screening.
  • CI confidence interval
  • PPV positive predictive value
  • NPV negative predictive value.
  • Supplementary Table 9 Performance of the AI system for T2DM detection using retinal fundus images.
  • Each row represents metrics based on the corresponding operation point set to perform with high NPV and PPV for T2DM screening.
  • CI, confidence interval; PPV, positive predictive value; NPV, negative predictive value.
  • Supplementary Table 10 Basic characteristics of patients in the Multi-ethnicity validation cohort for systemic diseases detection. Shown are the number of retinal fundus images used for identifying systemic conditions.
  • T2DM Type 2 diabetes mellitus
  • CKD chronic kidney disease
  • eGFR estimated glomerular filtration rate
  • DR diabetic retinopathy
  • NDR diabetes mellitus with no DR
  • BMI Body mass index

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Ophthalmology & Optometry (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Investigating Or Analysing Biological Materials (AREA)

Abstract

Disclosed is a computer-implemented method for identifying chronic kidney disease (CKD) and type 2 diabetes mellitus (T2DM) based on deep learning models trained on retinal fundus images. Also disclosed is a computer-implemented method based on deep learning models for predicting the risk of onset and/or progression of these diseases in the future. Further disclosed are systems, devices, apparatus, and non-volatile storage media configured to perform or implement the methods.
PCT/US2022/033125 2021-06-10 2022-06-10 Methods and systems for detection and prediction of chronic kidney disease and type 2 diabetes using deep learning models WO2022261513A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163209341P 2021-06-10 2021-06-10
US63/209,341 2021-06-10

Publications (1)

Publication Number Publication Date
WO2022261513A1 true WO2022261513A1 (fr) 2022-12-15

Family

ID=84425376

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/033125 WO2022261513A1 (fr) 2021-06-10 2022-06-10 Methods and systems for detection and prediction of chronic kidney disease and type 2 diabetes using deep learning models

Country Status (1)

Country Link
WO (1) WO2022261513A1 (fr)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120077690A1 (en) * 2010-09-24 2012-03-29 University Of Pittsburgh - Of The Commonwealth System Of Higher Education Biomarkers of renal injury
US20150259742A1 (en) * 2012-11-09 2015-09-17 The Regents Of The University Of California Methods for predicting age and identifying agents that induce or inhibit premature aging
US20150110368A1 (en) * 2013-10-22 2015-04-23 Eyenuk, Inc. Systems and methods for processing retinal images for screening of diseases or abnormalities
US20180235467A1 (en) * 2015-08-20 2018-08-23 Ohio University Devices and Methods for Classifying Diabetic and Macular Degeneration
US20210042916A1 (en) * 2018-02-07 2021-02-11 Ai Technologies Inc. Deep learning-based diagnosis and referral of diseases and disorders

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
KANG EUGENE YU-CHUAN, HSIEH YI-TING, LI CHIEN-HUNG, HUANG YI-JIN, KUO CHANG-FU, KANG JE-HO, CHEN KUAN-JEN, LAI CHI-CHUN, WU WEI-CH: "Deep Learning–Based Detection of Early Renal Function Impairment Using Retinal Fundus Images: Model Development and Validation", JMIR MEDICAL INFORMATICS, vol. 8, no. 11, 26 November 2020 (2020-11-26), pages e23472, XP093017727, DOI: 10.2196/23472 *
POUR ET AL.: "Automatic detection and monitoring of diabetic retinopathy using efficient convolutional neural networks and contrast limited adaptive histogram equalization", IEEE ACCESS, vol. 8, 25 June 2020 (2020-06-25), pages 136668 - 136673, XP011802628, Retrieved from the Internet <URL:https://ieeexplore.ieee.org/abstract/document/9125868> [retrieved on 20220819], DOI: 10.1109/ACCESS.2020.3005044 *
YU-CHUAN EUGENE, KANG YI-TING, HSIEH CHIEN-HUNG, LI YI-JIN, HUANG CHANG-FU, KUO JE-HO, KANG KUAN-JEN, CHEN CHI-CHUN, LAI WEI-CHI, : "A Deep Learning Model for Detecting Early Renal Function Impairment Using Retinal Fundus Images: Model Development and Validation Study", JMIR PUBLICATIONS, 13 August 2020 (2020-08-13), XP093017725, Retrieved from the Internet <URL:https://s3.ca-central-1.amazonaws.com/assets.jmir.org/assets/preprints/preprint-23472-accepted.pdf> [retrieved on 20230125] *
ZHANG ET AL.: "Deep-learning models for the detection and incidence prediction of chronic kidney disease and type 2 diabetes from retinal fundus images", NATURE BIOMEDICAL ENGINEERING, vol. 5, no. 6, 15 June 2021 (2021-06-15), pages 533 - 545, XP037483440, Retrieved from the Internet <URL:https://www.nature.com/articles/s41551-021-00745-6> [retrieved on 20220819], DOI: 10.1038/s41551-021-00745-6 *

Similar Documents

Publication Publication Date Title
WO2020200087A1 (fr) Détection basée sur une image de maladies ophtalmiques et systémiques
US10722180B2 (en) Deep learning-based diagnosis and referral of ophthalmic diseases and disorders
Kermany et al. Identifying medical diagnoses and treatable diseases by image-based deep learning
Keel et al. Development and validation of a deep‐learning algorithm for the detection of neovascular age‐related macular degeneration from colour fundus photographs
Lim et al. Different fundus imaging modalities and technical factors in AI screening for diabetic retinopathy: a review
Mirzania et al. Applications of deep learning in detection of glaucoma: a systematic review
Christopher et al. Effects of study population, labeling and training on glaucoma detection using deep learning algorithms
Daich Varela et al. Artificial intelligence in retinal disease: clinical application, challenges, and future directions
Saeed et al. Accuracy of using generative adversarial networks for glaucoma detection: Systematic review and bibliometric analysis
JP2023551898A (ja) カラー眼底画像データを使用する糖尿病性網膜症重症度についての自動スクリーニング
Hemelings et al. A generalizable deep learning regression model for automated glaucoma screening from fundus images
Gong et al. Application of deep learning for diagnosing, classifying, and treating age-related macular degeneration
Datta et al. Hyper parameter tuning based gradient boosting algorithm for detection of diabetic retinopathy: an analytical review
Bali et al. Analysis of Deep Learning Techniques for Prediction of Eye Diseases: A Systematic Review
Fang et al. REFUGE2 challenge: A treasure trove for multi-dimension analysis and evaluation in glaucoma screening
Gao et al. Using a dual-stream attention neural network to characterize mild cognitive impairment based on retinal images
WO2022261513A1 (fr) Methods and systems for detection and prediction of chronic kidney disease and type 2 diabetes using deep learning models
US20230093471A1 (en) Methods and systems for predicting rates of progression of age-related macular degeneration
Yang et al. AI and retinal image analysis at Baidu
Jiang et al. Segmentation of Laser Marks of Diabetic Retinopathy in the Fundus Photographs Using Lightweight U‐Net
Chuter et al. Deep Learning Identifies High-Quality Fundus Photographs and Increases Accuracy in Automated Primary Open Angle Glaucoma Detection
Rajarajeswari et al. Simulation of diabetic retinopathy utilizing convolutional neural networks
Ghebrechristos et al. RetiNet—feature extractor for learning patterns of diabetic retinopathy and age-related macular degeneration from publicly available datasets
Elsawy et al. A deep network DeepOpacityNet for detection of cataracts from color fundus photographs
Shrimali Image Filtering and Utilization of Deep Learning Algorithms to Detect the Severity of Diabetic Retinopathy

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22821166

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE