WO2019246217A1 - Deep learning assisted macronutrient estimation for feedforward-feedback control in artificial pancreas systems - Google Patents

Deep learning assisted macronutrient estimation for feedforward-feedback control in artificial pancreas systems Download PDF

Info

Publication number
WO2019246217A1
WO2019246217A1 (PCT/US2019/037928)
Authority
WO
WIPO (PCT)
Prior art keywords
insulin
glucose
delivery
exogenous input
content
Prior art date
Application number
PCT/US2019/037928
Other languages
French (fr)
Inventor
Eyal Dassau
Ankush CHAKRABARTY
Francis J. Doyle Iii
Original Assignee
President And Fellows Of Harvard College
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by President And Fellows Of Harvard College filed Critical President And Fellows Of Harvard College
Publication of WO2019246217A1 publication Critical patent/WO2019246217A1/en

Links

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61M DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
    • A61M5/00Devices for bringing media into the body in a subcutaneous, intra-vascular or intramuscular way; Accessories therefor, e.g. filling or cleaning devices, arm-rests
    • A61M5/14Infusion devices, e.g. infusing by gravity; Blood infusion; Accessories therefor
    • A61M5/142Pressure infusion, e.g. using pumps
    • A61M5/14244Pressure infusion, e.g. using pumps adapted to be carried by the patient, e.g. portable on the body
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/145Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue
    • A61B5/14532Measuring characteristics of blood in vivo, e.g. gas concentration, pH value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid, cerebral tissue for measuring glucose, e.g. by tissue impedance measurement
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/48Other medical applications
    • A61B5/4836Diagnosis combined with treatment in closed-loop systems or methods
    • A61B5/4839Diagnosis combined with treatment in closed-loop systems or methods combined with drug delivery
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61M DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
    • A61M5/00Devices for bringing media into the body in a subcutaneous, intra-vascular or intramuscular way; Accessories therefor, e.g. filling or cleaning devices, arm-rests
    • A61M5/14Infusion devices, e.g. infusing by gravity; Blood infusion; Accessories therefor
    • A61M5/142Pressure infusion, e.g. using pumps
    • A61M2005/14208Pressure infusion, e.g. using pumps with a programmable infusion control system, characterised by the infusion program

Definitions

  • UC4DK108483 and DP3DK104057 awarded by the National Institutes of Health (NIH). The government has certain rights in the invention.
  • the present invention is directed to glucose control systems. More specifically, the present invention is directed towards continuous glucose monitoring (CGM) sensors and continuous subcutaneous insulin infusion systems.
  • CGM glucose monitoring
  • Diabetes is a metabolic disorder that afflicts tens of millions of people throughout the world. Diabetes results from the inability of the body to properly utilize and metabolize carbohydrates, particularly glucose. Normally, the finely-tuned balance between glucose in the blood and glucose in bodily tissue cells is maintained by insulin, a hormone produced by the pancreas which controls, among other things, the transfer of glucose from blood into body tissue cells. Upsetting this balance causes many complications and pathologies including heart disease, coronary and peripheral artery sclerosis, peripheral neuropathies, retinal damage, cataracts, hypertension, coma, and death from hypoglycemic shock.
  • the symptoms of the disease can be controlled by administering additional insulin (or other agents that have similar effects) by injection or by external or implantable insulin pumps.
  • The "correct" insulin dosage is a function of the level of glucose in the blood. Ideally, insulin administration should be continuously readjusted in response to changes in blood glucose level.
  • “insulin” instructs the body's cells to take in glucose from the blood.
  • “Glucagon” acts opposite to insulin, and causes the liver to release glucose into the blood stream.
  • The "basal rate" is the rate of continuous supply of insulin provided by an insulin delivery device (pump).
  • The "bolus" is the specific amount of insulin that is given to raise blood concentration of the insulin to an effective level when needed (as opposed to continuous).
  • Such systems at present require intervention by a patient to calculate and control the amount of insulin to be delivered.
  • the patient is sleeping, he or she cannot intervene in the delivery of insulin, yet control of a patient's glucose level is still necessary.
  • a system capable of integrating and automating the functions of glucose monitoring and controlled insulin delivery would be useful in assisting patients in maintaining their glucose levels, especially during periods of the day when they are unable to intervene.
  • a closed-loop system, also called the "artificial pancreas" (AP), consists of three components: a glucose monitoring device such as a continuous glucose monitor ("CGM") that measures subcutaneous glucose concentration ("SC"); a titrating algorithm to compute the amount of analyte such as insulin and/or glucagon to be delivered; and one or more analyte pumps to deliver computed analyte doses subcutaneously.
  • a glucose monitoring device such as a continuous glucose monitor (“CGM”) that measures subcutaneous glucose concentration (“SC”
  • SC subcutaneous glucose concentration
  • a titrating algorithm to compute the amount of analyte such as insulin and/or glucagon to be delivered
  • analyte pumps to deliver computed analyte doses subcutaneously.
  • In some known zone model predictive control (MPC) approaches to regulating glucose, the MPC penalizes the distance of predicted glucose states from a carefully designed safe zone based on clinical requirements. This helps avoid unnecessary control moves and reduces the risk of hypoglycemia.
  • the zone MPC approach was originally developed based on an auto-regressive model with exogenous inputs, and was extended to consider a control-relevant state-space model and a diurnal periodic target zone. Specifically, an asymmetric cost function was utilized in the zone MPC to facilitate independent design for hyperglycemia and hypoglycemia.
  • a model predictive iterative learning control approach has also been proposed to adapt controller behavior with the patient's day-to-day lifestyle.
  • a multiple model probabilistic predictive controller was developed to achieve improved meal detection and prediction.
  • a dynamic insulin-on-board approach has also been proposed to compensate for the effect of diurnal insulin sensitivity variation.
  • a switched linear parameter-varying approach was developed to adjust controller modes for hypoglycemia, hyperglycemia and euglycemia situations.
  • a run-to-run approach was developed to adapt the basal insulin delivery rate and carbohydrate-to-insulin ratio by considering intra- and inter-day insulin sensitivity variability.
  • a major drawback in the proposed AP designs is the difficulty in achieving satisfactory blood glucose regulation in terms of hyperglycemia and hypoglycemia prevention through designing smart control algorithms. Furthermore, errors in macronutrient estimation can be attributed either to gaps in knowledge regarding dietary content of food types, or incorrect assessment of serving sizes.
  • a system for the delivery of insulin to a patient includes an image capturing device configured to capture image data associated with an exogenous input.
  • the system also includes a processor communicatively coupled to the image capturing device and configured to receive image data associated with the exogenous input.
  • the processor includes a convolutional neural network architecture (CNN) configured to predict a class of the exogenous input.
  • the system also includes an insulin delivery device communicatively coupled to the processor.
  • the system also includes a nutrient database communicatively coupled to the processor.
  • the processor is configured to receive a macronutrient content assessment of the exogenous input from the nutrient database based on the predicted class. Furthermore, the processor is further configured to determine a dosage of glucose altering substance to administer, using the macronutrient content assessment, and send a command to the insulin delivery device to administer the dosage of the glucose altering substance.
  • the exogenous input includes a food item to be consumed by the patient or a beverage to be consumed by the patient.
  • the image capturing device includes a camera located at a mobile device.
  • the nutrient database is located remote to the processor.
  • the insulin delivery device includes a pump surgically connected to the patient.
  • the macronutrient content is a measurement of carbohydrate content.
  • the dosage of glucose altering substance is proportional to the measurement of the carbohydrate content.
  • the macronutrient content is further based on weight of the exogenous input.
  • the macronutrient content is a measurement of fat and protein content.
  • the dosage of glucose altering substance is proportional to the measurement of the fat and protein content.
  • a method for providing a closed-loop adaptive glucose controller is also provided.
  • the method includes receiving image data from at least one image capturing device.
  • the image data received is associated with an exogenous input.
  • the image data received is processed by a convolutional neural network architecture (CNN) to predict a class of the exogenous input based on the image data received.
  • a macronutrient content assessment of the exogenous input is received from a nutrient database based on the predicted class.
  • a dosage of glucose altering substance to administer is determined using the macronutrient content assessment received.
  • a command is sent to an insulin delivery device to administer the dosage of the glucose altering substance.
  • FIG. 1 is a basic block diagram of a closed-loop system for continuous glucose monitoring and continuous subcutaneous insulin infusion using a model predictive controller (MPC), in accordance with an embodiment of the disclosure;
  • MPC model predictive controller
  • FIG. 2 is an exemplary flow chart illustrating a process associated with a convolutional neural network architecture for multi-class image recognition of food types, in accordance with an embodiment of the disclosure
  • FIG. 3 illustrates an exemplary flow chart illustrating a process for feedforward- feedback control, in accordance with an embodiment of the disclosure
  • FIG. 4 is a graphical illustration of a confusion matrix of a deep neural network tested on different food images, in accordance with an embodiment of the disclosure
  • FIG. 5 is a graphical illustration of histograms of images that have been classified correctly and incorrectly at different confidence levels, in accordance with an embodiment of the disclosure
  • FIG. 6 illustrates a clinical protocol for testing the disclosed process, in accordance with an embodiment of the disclosure
  • FIG. 7 is a graphical illustration of a distribution of neural network predictions for each meal, in accordance with an embodiment of the disclosure
  • FIG. 8 is a graphical comparison of deep learning assisted AP control algorithms, in accordance with an embodiment of the disclosure.
  • FIG. 9 shows box plots illustrating an exemplary simulation of percentage time in different glycemic ranges using the proposed algorithm with meal size estimation errors, in accordance with an embodiment of the disclosure.
  • FIG. 1 is a basic block diagram of a closed-loop system 20 for continuous glucose monitoring and for continuous subcutaneous insulin infusion using a model predictive controller (MPC) 100.
  • the MPC 100 can include a processor 120 and a dosage calculator 140.
  • the MPC 100 can be communicatively coupled to a nutrient database 130, over a network. It should be understood that either of these components can be configured as separate components, or performed by the same electronic component.
  • the MPC 100 can simply include a single processor configured to additionally perform the functionality of the dosage calculator 140.
  • the nutrient database 130 can be located within the MPC 100.
  • the MPC 100 can be located at, on or surgically inserted within a patient (not shown).
  • the closed-loop system 20 can also include a pump 300 for administering insulin to the patient. Similar to the MPC 100, the pump 300 can be located at, on or surgically inserted within the patient.
  • the MPC 100 can be communicatively linked to a camera 200 over a LAN or WAN network.
  • the camera 200 can be located near or on the patient. In some embodiments, the camera 200 can be located on a mobile device, such as a smartphone, a tablet, smartglasses, or any other wearable smart device.
  • the patient receives exogenous inputs, such as food items 400.
  • the food item 400 is visually captured in the form of image data by the camera 200.
  • the image data is processed at the processor 120 within the MPC 100.
  • a convolutional neural network architecture (CNN) 125 is implemented to determine a food class for the food item 400 based on the processed image data.
  • the nutrient database 130 can be traversed to determine the macronutrient content of the food based on the food class.
  • the dosage calculator 140 can determine a dosage of insulin to administer to a subject based on the macronutrient content.
  • a command can be sent to the pump 300 to administer the dosage of insulin to the subject.
  • the MPC 100 is effectively able to control a delivery device, such as the pump 300, to deliver medication to the patient to control blood glucose based on image processing techniques described herein.
  • the nutrient database 130 can include the United States Department of Agriculture (USDA) nutrition database.
  • the USDA nutrition database can be queried to obtain the macronutrient content per serving size.
  • the serving size of the food item is assessed by the end-user, using which a feedforward insulin bolus is computed.
  • carbohydrate (CHO) content is used as the macronutrient that influences meal bolusing for testing purposes.
  • CHO Carbohydrates
  • the present disclosure is described with reference to incorporating a deep learning assisted feedforward-feedback control algorithm in AP systems, the disclosed algorithm is applicable to other insulin bolusing systems as well.
  • the present disclosure also discloses processes to safely administer the feedforward bolus integrated with the deep neural network by employing confidence metrics and partial meal bolusing.
  • a feedback controller can also be implemented in conjunction with a deep learning based image recognition algorithm. Incorporation of the feedback controller improves glucose regulation via in-silico testing while ensuring additional safety against errors induced by the deep network or errors in serving size estimation.
  • the present disclosure also incorporates additional safety constraints using confidence-metrics on predictions and purposeful reduction of meal bolus amounts.
  • the CNN 125 of the MPC 100 can be represented in discrete-time using transfer functions, nonlinear state-space realizations, or input-output autoregressive structures.
  • the classes of models are represented using the general form: $x[k+1] = f(x[k], u_u[k], u_{\mathrm{meal}}[k])$,
  • $y[k] = h(x[k], \mathrm{CGM}[k])$, where $k \in \mathbb{N}$ denotes the time index, $x[k] \in \mathbb{R}^{n_x}$ denotes the system state, the scalar $y[k] \in \mathbb{R}$ denotes an estimate of the patient's true glucose value, and $\mathrm{CGM}[k]$ denotes a person's blood glucose (BG) concentration, estimated by a continuous glucose monitor (CGM) sensor.
  • the scalar $u_u[k] \in \mathbb{R}$ denotes the insulin infusion rate in units per $\tau$ minutes (U/$\tau$ min) derived based solely on feedback, where $\tau$ is the sample period of the system.
  • the derivation of the control action $u[k]$ is not rigidly described in this work to encompass multiple control architectures, which may include different safety layers in conjunction with peripheral systems like state estimators.
  • T1DM type 1 diabetes mellitus
  • $M \in (0, \infty)$ denotes the grams of carbohydrate (gCHO) content of the meal
  • k* denote the most recent time instant when a CGM measurement was transmitted successfully to the control module.
  • $B_0[k] = M / \mathrm{CR}[k]$ denotes a basic bolus size based solely on meal size and the carbohydrate sensitivity of the individual expressed through the carbohydrate ratio CR, and
  • $B_1[k]$ is a correction bolus computed using a personalized correction factor CF derived empirically by clinicians.
  • the correction bolus is added to the basic bolus Bo when glucose exceeds 140 mg/dL.
  • the $\min\{\cdot, \cdot\}$ operation ensures that the correction bolus never exceeds 2 U of insulin.
  • FIG. 2 is an exemplary flow chart illustrating a process associated with the CNN
  • a set of input images is provided to the network for training. Each image $I$ enters the network in raw pixel form.
  • the raw pixel form may not undergo prior feature extraction or descriptor selection, but there may be resizing or linear transformations in the pre-processing phase.
  • the raw pixel form is represented as a tensor $A^0$ of dimension $N_h \times N_w \times 3$, where $N_h, N_w \in \mathbb{N}$ are the image height and width, respectively.
  • the color space is described by three channels (such as red, green, blue for the RGB space).
  • the tensor $A^0$ is filtered by $N_d \in \mathbb{N}$ fixed-size convolution kernels $K^1$, written as a tensor of size $H \times W \times 3 \times N_d$, in a so-called 'convolution layer' at 230.
  • Feature maps containing important image characterization information such as edges and textures are extracted by performing the convolution operation $A^0 \otimes K^1$.
  • the output of the first convolution layer will be a tensor of size $(N_h - H + 1) \times (N_w - W + 1) \times N_d$.
  • the convolution layer alters the third dimension of its input tensor, but the ReLU unit does not.
  • Feature maps constructed in the convolution layers are sub-sampled into lower dimensions via 'pooling layers' to reduce the overall storage and processing requirements of the CNN.
  • Common pooling methods include average pooling or max pooling.
  • in average/max pooling, subregions of size $H \times W$ are represented in each of the channels by the average/maximum of the elements in the corresponding subregions, assuming a single stride.
  • a deep CNN it is common to have multiple convolution and pooling layers for extracting both coarse and fine features automatically.
  • a generic L number of hidden layers is assumed.
  • a CNN generally culminates in a 'fully-connected' layer.
  • the output of a fully-connected layer is a linear combination of all components of the output of the previous hidden layer, similar to a classical multi-layer perceptron. This can be written as $z_k = \sum_{j=1}^{n} \omega_{jk} a_j + b_k$, where $n$ is the dimension of the fully-connected layer, $a_j$ are the components of the previous hidden layer's output, $\omega_{jk} \in \mathbb{R}$ are the connection weights, and $b_k \in \mathbb{R}$ is a bias term.
  • activation functions $f(\cdot)$ (such as a softmax or sigmoid function) are used to determine the class to which the input image belongs.
  • the prediction of the neural network will be the class with the highest confidence score, that is, $\hat{y} = \arg\max_{i} p_i$.
  • the weights of the neural network $\omega$ are determined via backpropagation using gradient descent algorithms that minimize objective functions of the form
  • $J = \mathcal{L}(p, y) + \lambda R(\omega)$
  • where $\mathcal{L}$ is a loss function that penalizes the error between the prediction $p \in \mathbb{R}^C$, whose $i$-th component is given in (2), and the correct label vector $y \in \mathbb{R}^C$.
  • the positive scalar $\lambda$ denotes a regularization weight
  • $R$ is a regularization function.
  • the widely used stochastic gradient descent update with momentum is employed: $v \leftarrow m\,v - \alpha \nabla_{\omega} J$, $\omega \leftarrow \omega + v$, where the scalar $m > 0$ is a momentum weight, the scalar $\alpha > 0$ is a learning rate, and $\nabla_{\omega}$ is the gradient computed with respect to $\omega$ (a minimal code sketch of this update appears after this list).
  • the nutrient database 130 can be configured with
  • FIG. 3 illustrates an exemplary flow chart illustrating a process for feedforward- feedback control, in accordance with an embodiment of the disclosure.
  • a safe feedforward insulin bolusing strategy is formulated.
  • a confidence metric $\beta$ is assigned on the network prediction.
  • the confidence score is the output of the final softmax activation function as in (3); hence $\beta \in [0, 1]$.
  • the user chooses a threshold $0 \ll \beta_{\min} < 1$ to label a prediction as high-confidence only if $\beta > \beta_{\min}$. If a prediction is deemed high-confidence, then the nutritional database 130 of FIG. 1 is queried to obtain the macronutrient (specifically, carbohydrate, fat, and protein) content per serving size of the predicted image.
  • macronutrient specifically, carbohydrate, fat, and protein
  • the total gCHO in the meal image is estimated, upon which a bolus of magnitude $\tilde{u}_{\mathrm{meal}}[k] = \rho \min\{u_{\mathrm{meal}}[k], u_{\max}\}$ is provided (4)
  • $u_{\mathrm{meal}}[k]$ has been defined in (1)
  • $\rho \in (0, 1]$ is the fraction of the computed $u_{\mathrm{meal}}$ that is allowed to be bolused
  • $u_{\max}$ is a strict upper bound on the magnitude of the deep network assisted insulin feedforward bolus or the user-estimated bolus.
  • the fat/protein content is not discussed herein, those macronutrients can also be used to further customize the magnitude and shape of the meal bolus.
  • FIG. 4 is a graphical illustration of a confusion matrix of the CNN tested on different food images, in accordance with an embodiment of the disclosure.
  • the trained neural network exhibits a top-1 accuracy of 81.65%.
  • Initial pre-loading of the network required 96.94 s. Consequently, the average and standard deviation of the prediction time was 0.57 ± 0.02 s for each image.
  • the diagonal elements illustrate that the classifier mostly identified the food categories in the test images correctly.
  • FIG. 6 illustrates a clinical protocol for testing the disclosed process, in accordance with an embodiment of the disclosure.
  • 1000 images are randomly selected from among the test images of the nutrient database 130.
  • the randomly selected images are divided into 100 images per in-silico patient.
  • 10 scenarios are generated with unique but random CGM noise seeds, where each scenario includes 10 meals.
  • the serving sizes are pre-selected to ensure that carbohydrate content for any image is between 15 and 110 g.
  • a unique image is randomly selected and the deep neural network prediction $p$ and confidence $\beta$ associated with the image are computed, as in (3).
  • $u_{\mathrm{meal}}$ is bolused as in (4) when the prediction is high-confidence; otherwise $u_{\mathrm{meal}}$ is set to zero, and the controller operates purely on feedback. However, the truth value of the image and the corresponding serving size information reported by the user is maintained for comparison with controllers operating under fully announced meals.
  • FIG. 7 is a graphical illustration of a distribution of neural network predictions for each meal, in accordance with an embodiment of the disclosure.
  • the overall distribution of predictions for 100 scenarios (10 patients, 10 images each) is shown therein.
  • Per meal, the percentage of high-confidence correct classifications was 71.90 ± 3.0%, with low-confidence predictions comprising 25.90 ± 4.15%.
  • High-confidence errors were small, at 2.20 ⁇ 1.55%.
  • the feedback control algorithm used includes zone boundaries set at 80 mg/dL and 120 mg/dL at day time, with the lower boundary raised to 90 mg/dL at night.
  • FIG. 8 is a graphical comparison of deep learning assisted AP control algorithms, in accordance with an embodiment of the disclosure.
  • the results of the comparative 65-hour study are illustrated therein.
  • full meal announcement (dashed line)
  • it is typically higher than with the fully announced meal because performance is sacrificed for safety and autonomy. Since the constraint $\beta > \beta_{\min}$ prevents feedforward bolusing for every image, some images do not result in assistive meal announcement and the AP runs solely on glucose feedback.
  • the mean insulin bolused with the proposed algorithm is generally lower than with the AP under meal announcement. This is illustrated in the lower plot of FIG. 8. Referring momentarily to FIG. 7, the glucose trajectories around the 6th and 7th meals closely resemble the trajectory with announced meals since the number of erroneous/low-confidence classifications is low. Conversely, the 4th and 9th meals exhibit higher deviations in the glucose dynamics owing to a higher percentage of unannounced meals caused by more low-confidence predictions.
  • FIG. 9 are box plots illustrating an exemplary simulation of percentage time in different glycemic ranges using the proposed algorithm with meal size estimation errors, in accordance with an embodiment of the disclosure.
  • small errors (within 20%) in estimation of food quantity are compensated satisfactorily, with time in the 70-180 mg/dL range contained within 90.1 to 90.8%, exhibiting indistinguishable statistical properties (p > 0.05), without a discernible shift in the median times in hypoglycemia or hyperglycemia.
  • the mechanism of providing corrective feedback control actions in conjunction with deep learning based feed-forward assist maintains safe glycemic outcomes in spite of mistakes in macronutrient estimation.
  • a deep learning assisted AP system for automatic feedforward bolusing is provided herein.
  • the proposed framework has additional safety layers to enable safe and robust operation: namely, a pure feedback controller that monitors and corrects for adverse scenarios that could arise due to misidentification of food images by the deep network or incorrect meal size estimation by the user.
  • insulin-glucose dynamics affected by CHO content of meals is discussed herein, the present disclosure can be applied to the effect of fat and protein to customize the bolus shape and update the bolus magnitude. This will lead to improved quality of care in next-generation decision support systems, while reducing the burden on people with T1DM.
  • the disclosure herein may be implemented with any type of hardware and/or software, and may be a pre-programmed general purpose computing device.
  • the system may be implemented using a server, a personal computer, a portable computer, a thin client, or any suitable device or devices.
  • the disclosure and/or components thereof may be a single device at a single location, or multiple devices at a single, or multiple, locations that are connected together using any appropriate communication protocols over any communication medium such as electric cable, fiber optic cable, or in a wireless manner.
  • modules which perform particular functions. It should be understood that these modules are merely schematically illustrated based on their function for clarity purposes only, and do not necessarily represent specific hardware or software. In this regard, these modules may be hardware and/or software implemented to substantially perform the particular functions discussed. Moreover, the modules may be combined together within the disclosure, or divided into additional modules based on the particular function desired. Thus, the disclosure should not be construed to limit the present invention, but merely be understood to illustrate one example implementation thereof.
  • the computing system can include clients and servers.
  • a client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
  • a server transmits data (e.g., an HTML page) to a client device (e.g., for purposes of displaying data to and receiving user input from a user interacting with the client device).
  • client device e.g., for purposes of displaying data to and receiving user input from a user interacting with the client device.
  • Data generated at the client device e.g., a result of the user interaction
  • Implementations of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back-end, middleware, or front-end components.
  • the components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), an inter network (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks).
  • LAN local area network
  • WAN wide area network
  • Internet inter network
  • peer-to-peer networks, e.g., ad hoc peer-to-peer networks
  • Implementations of the subject matter and the operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them.
  • Implementations of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on computer storage medium for execution by, or to control the operation of, data processing apparatus.
  • the program instructions can be encoded on an artificially-generated propagated signal, e.g., a machine generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus.
  • a computer storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them.
  • a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded in an artificially-generated propagated signal.
  • the computer storage medium can also be, or be included in, one or more separate physical components or media (e.g., multiple CDs, disks, or other storage devices).
  • the term "data processing apparatus" encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip, or multiple ones, or combinations, of the foregoing.
  • the apparatus can include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
  • the apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them.
  • the apparatus and execution environment can realize various different computing model infrastructures, such as web services, distributed computing and grid computing infrastructures.
  • a computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment.
  • a computer program may, but need not, correspond to a file in a file system.
  • a program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
  • a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • the processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output.
  • the processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
  • special purpose logic circuitry e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
  • processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
  • a processor will receive instructions and data from a read-only memory or a random access memory or both.
  • the essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data.
  • a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks.
  • mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks.
  • a computer need not have such devices.
  • a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device (e.g., a universal serial bus (USB) flash drive), to name just a few.
  • Devices suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
  • the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
  • the terms "a" and "an" and "the" and similar references used in the context of describing a particular embodiment of the application can be construed to cover both the singular and the plural.
  • the recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range. Unless otherwise indicated herein, each individual value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context.
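As referenced in the training discussion above, the network weights are fitted with momentum-accelerated stochastic gradient descent. The following is a minimal, illustrative sketch of that update rule only; the hyperparameter values, the toy objective, and the function name `sgd_momentum_step` are assumptions for illustration and are not taken from the filing.

```python
import numpy as np

def sgd_momentum_step(weights, velocity, grad, learning_rate=0.01, momentum=0.9):
    """One momentum SGD update: v <- m*v - a*grad(J); w <- w + v."""
    velocity = momentum * velocity - learning_rate * grad
    weights = weights + velocity
    return weights, velocity

# Toy usage: minimize J(w) = ||w||^2, whose gradient is 2*w.
w = np.array([1.0, -2.0])
v = np.zeros_like(w)
for _ in range(200):
    w, v = sgd_momentum_step(w, v, grad=2.0 * w)
print(w)  # converges toward [0, 0]
```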

Landscapes

  • Health & Medical Sciences (AREA)
  • Vascular Medicine (AREA)
  • Engineering & Computer Science (AREA)
  • Anesthesiology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Hematology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Image Analysis (AREA)

Abstract

A system for the delivery of insulin to a patient is provided. The system includes an image capturing device configured to capture image data associated with an exogenous input. A processor communicatively coupled to the image capturing device and configured to receive image data associated with the exogenous input, is also provided. The processor includes a deep neural network architecture, and in some cases a convolutional neural network architecture (CNN) configured to predict a class of the exogenous input. An insulin delivery device and a nutrient database communicatively coupled to the processor, are also provided. The processor is configured to receive a macronutrient content assessment of the exogenous input from the nutrient database based on the predicted class. Furthermore, the processor is configured to determine a dosage of glucose altering substance to administer, using the macronutrient content assessment.

Description

DEEP LEARNING ASSISTED MACRONUTRIENT ESTIMATION FOR
FEEDFORWARD-FEEDBACK CONTROL IN ARTIFICIAL PANCREAS SYSTEMS
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH
[0001] This invention was made with government support under NIH Grant Nos.
UC4DK108483 and DP3DK104057 awarded by the National Institutes of Health (NIH). The government has certain rights in the invention.
CROSS REFERENCE TO RELATED APPLICATION
[0002] This application is an International Application which claims the benefit under 35
U.S.C. § 119(e) of U.S. Provisional Application No. 62/686,991, filed on June 19, 2018, the contents of which are hereby incorporated by reference in their entirety.
FIELD
[0003] The present invention is directed to glucose control systems. More specifically, the present invention is directed towards continuous glucose monitoring (CGM) sensors and continuous subcutaneous insulin infusion systems.
BACKGROUND
[0004] Diabetes is a metabolic disorder that afflicts tens of millions of people throughout the world. Diabetes results from the inability of the body to properly utilize and metabolize carbohydrates, particularly glucose. Normally, the finely-tuned balance between glucose in the blood and glucose in bodily tissue cells is maintained by insulin, a hormone produced by the pancreas which controls, among other things, the transfer of glucose from blood into body tissue cells. Upsetting this balance causes many complications and pathologies including heart disease, coronary and peripheral artery sclerosis, peripheral neuropathies, retinal damage, cataracts, hypertension, coma, and death from hypoglycemic shock.
[0005] In patients with insulin-dependent diabetes, the symptoms of the disease can be controlled by administering additional insulin (or other agents that have similar effects) by injection or by external or implantable insulin pumps. The "correct" insulin dosage is a function of the level of glucose in the blood. Ideally, insulin administration should be continuously readjusted in response to changes in blood glucose level. In diabetes management, "insulin" instructs the body's cells to take in glucose from the blood. "Glucagon" acts opposite to insulin, and causes the liver to release glucose into the blood stream. The "basal rate" is the rate of continuous supply of insulin provided by an insulin delivery device (pump). The "bolus" is the specific amount of insulin that is given to raise blood concentration of the insulin to an effective level when needed (as opposed to continuous).
[0006] Presently, systems are available for continuously monitoring blood glucose levels by implanting a glucose sensitive probe into the patient. Such probes measure various properties of blood or other tissues, including optical absorption, electrochemical potential, and enzymatic products. The output of such sensors can be communicated to a hand held device that is used to calculate an appropriate dosage of insulin to be delivered into the blood stream in view of several factors, such as a patient's present glucose level, insulin usage rate, carbohydrates consumed or to be consumed, and exercise, among others. These calculations can then be used to control a pump that delivers the insulin, either at a controlled basal rate, or as a bolus. When provided as an integrated system, the continuous glucose monitor, controller, and pump work together to provide continuous glucose monitoring and insulin pump control.
[0007] Such systems at present require intervention by a patient to calculate and control the amount of insulin to be delivered. However, there may be periods when the patient is not able to adjust insulin delivery. For example, when the patient is sleeping, he or she cannot intervene in the delivery of insulin, yet control of a patient's glucose level is still necessary. A system capable of integrating and automating the functions of glucose monitoring and controlled insulin delivery would be useful in assisting patients in maintaining their glucose levels, especially during periods of the day when they are unable to intervene. A closed-loop system, also called the "artificial pancreas" (AP), consists of three components: a glucose monitoring device such as a continuous glucose monitor ("CGM") that measures subcutaneous glucose concentration ("SC"); a titrating algorithm to compute the amount of analyte such as insulin and/or glucagon to be delivered; and one or more analyte pumps to deliver computed analyte doses subcutaneously.
[0008] In some known zone model predictive control (MPC) approaches to regulating glucose, the MPC penalizes the distance of predicted glucose states from a carefully designed safe zone based on clinical requirements. This helps avoid unnecessary control moves and reduces the risk of hypoglycemia. The zone MPC approach was originally developed based on an auto-regressive model with exogenous inputs, and was extended to consider a control-relevant state-space model and a diurnal periodic target zone. Specifically, an asymmetric cost function was utilized in the zone MPC to facilitate independent design for hyperglycemia and hypoglycemia.
[0009] Throughout the development and adaptation of the MPC approaches, different controller adaptation methods have been utilized for AP design. Earlier studies considered basal rate and meal bolus adaptation by using run-to-run approaches based on sparse blood glucose (BG) measurements. The availability of CGM further provided the opportunity of designing adaptive AP utilizing advanced feedback controllers. For instance, a nonlinear adaptive MPC has been proposed to maintain normoglycemia during fasting conditions using Bayesian model parameter estimation. In other examples, a generalized predictive control (GPC) approach that adopted a recursively updated subject model has been employed on a bi-hormone AP; this approach has also been explored to eliminate the need of meal or exercise announcements.
[0010] A model predictive iterative learning control approach has also been proposed to adapt controller behavior with the patient's day-to-day lifestyle. In some approaches, a multiple model probabilistic predictive controller was developed to achieve improved meal detection and prediction. A dynamic insulin-on-board approach has also been proposed to compensate for the effect of diurnal insulin sensitivity variation. A switched linear parameter-varying approach was developed to adjust controller modes for hypoglycemia, hyperglycemia and euglycemia situations. A run-to-run approach was developed to adapt the basal insulin delivery rate and carbohydrate-to-insulin ratio by considering intra- and inter-day insulin sensitivity variability.
[0011] A major drawback in the proposed AP designs is the difficulty in achieving satisfactory blood glucose regulation in terms of hyperglycemia and hypoglycemia prevention through designing smart control algorithms. Furthermore, errors in macronutrient estimation can be attributed either to gaps in knowledge regarding dietary content of food types, or incorrect assessment of serving sizes.
SUMMARY
[0012] A system for the delivery of insulin to a patient is provided. The system includes an image capturing device configured to capture image data associated with an exogenous input. The system also includes a processor communicatively coupled to the image capturing device and configured to receive image data associated with the exogenous input. The processor includes a convolutional neural network architecture (CNN) configured to predict a class of the exogenous input. The system also includes an insulin delivery device communicatively coupled to the processor. The system also includes a nutrient database communicatively coupled to the processor. The processor is configured to receive a macronutrient content assessment of the exogenous input from the nutrient database based on the predicted class. Furthermore, the processor is further configured to determine a dosage of glucose altering substance to administer, using the macronutrient content assessment, and send a command to the insulin delivery device to administer the dosage of the glucose altering substance.
[0013] In some embodiments, the exogenous input includes a food item to be consumed by the patient or a beverage to be consumed by the patient. In some embodiments, the image capturing device includes a camera located at a mobile device. In some embodiments, the nutrient database is located remote to the processor. In some embodiments, the insulin delivery device includes a pump surgically connected to the patient.
[0014] In some embodiments, the macronutrient content is a measurement of carbohydrate content. The dosage of glucose altering substance is proportional to the measurement of the carbohydrate content. The macronutrient content is further based on weight of the exogenous input. In some embodiments, the macronutrient content is a measurement of fat and protein content. The dosage of glucose altering substance is proportional to the measurement of the fat and protein content.
[0015] A method for providing a closed-loop adaptive glucose controller is also provided.
The method includes receiving image data from at least one image capturing device. The image data received is associated with an exogenous input. The image data received is processed by a convolutional neural network architecture (CNN) to predict a class of the exogenous input based on the image data received. A macronutrient content assessment of the exogenous input is received from a nutrient database based on the predicted class. A dosage of glucose altering substance to administer is determined using the macronutrient content assessment received. Furthermore, a command is sent to an insulin delivery device to administer the dosage of the glucose altering substance.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] The accompanying drawings, which are incorporated in and constitute a part of this specification, exemplify the embodiments of the present invention and, together with the description, serve to explain and illustrate principles of the invention. The drawings are intended to illustrate major features of the exemplary embodiments in a diagrammatic manner. The drawings are not intended to depict every feature of actual embodiments nor relative dimensions of the depicted elements, and are not drawn to scale.
[0017] FIG. 1 is a basic block diagram of a closed-loop system for continuous glucose monitoring and continuous subcutaneous insulin infusion using a model predictive controller (MPC), in accordance with an embodiment of the disclosure;
[0018] FIG. 2 is an exemplary flow chart illustrating a process associated with a convolutional neural network architecture for multi-class image recognition of food types, in accordance with an embodiment of the disclosure;
[0019] FIG. 3 illustrates an exemplary flow chart illustrating a process for feedforward- feedback control, in accordance with an embodiment of the disclosure;
[0020] FIG. 4 is a graphical illustration of a confusion matrix of a deep neural network tested on different food images, in accordance with an embodiment of the disclosure;
[0021] FIG. 5 is a graphical illustration of histograms of images that have been classified correctly and incorrectly at different confidence levels, in accordance with an embodiment of the disclosure;
[0022] FIG. 6 illustrates a clinical protocol for testing the disclosed process, in accordance with an embodiment of the disclosure;
[0023] FIG. 7 is a graphical illustration of a distribution of neural network predictions for each meal, in accordance with an embodiment of the disclosure;
[0024] FIG. 8 is a graphical comparison of deep learning assisted AP control algorithms, in accordance with an embodiment of the disclosure; and
[0025] FIG. 9 shows box plots illustrating an exemplary simulation of percentage time in different glycemic ranges using the proposed algorithm with meal size estimation errors, in accordance with an embodiment of the disclosure.
[0026] In the drawings, the same reference numbers and any acronyms identify elements or acts with the same or similar structure or functionality for ease of understanding and convenience. To easily identify the discussion of any particular element or act, the most significant digit or digits in a reference number refer to the Figure number in which that element is first introduced.
DETAILED DESCRIPTION
[0027] The present invention is described with reference to the attached figures, where like reference numerals are used throughout the figures to designate similar or equivalent elements. The figures are not drawn to scale, and they are provided merely to illustrate an instant embodiment. Several embodiments are described below with reference to example applications for illustration. It should be understood that numerous specific details, relationships, and methods are set forth to provide a full understanding of the disclosure. One having ordinary skill in the relevant art, however, will readily recognize that the disclosed embodiments can be practiced without one or more of the specific details, or with other methods. In other instances, well-known structures or operations are not shown in detail to avoid obscuring the disclosure. The present disclosure is not limited by the illustrated ordering of acts or events, as some acts may occur in different orders and/or concurrently with other acts or events. Furthermore, not all illustrated acts or events are required to implement a methodology in accordance with the present disclosure.
[0028] FIG. 1 is a basic block diagram of a closed-loop system 20 for continuous glucose monitoring and for continuous subcutaneous insulin infusion using a model predictive controller (MPC) 100. The MPC 100 can include a processor 120 and a dosage calculator 140. The MPC 100 can be communicatively coupled to a nutrient database 130, over a network. It should be understood that either of these components can be configured as separate components, or performed by the same electronic component. For example, the MPC 100 can simply include a single processor configured to additionally perform the functionality of the dosage calculator 140. In other embodiments, the nutrient database 130 can be located within the MPC 100. The MPC 100 can be located at, on or surgically inserted within a patient (not shown). The closed-loop system 20 can also include a pump 300 for administering insulin to the patient. Similar to the MPC 100, the pump 300 can be located at, on or surgically inserted within the patient. The MPC 100 can be communicatively linked to a camera 200 over a LAN or WAN network. The camera 200 can be located near or on the patient. In some embodiments, the camera 200 can be located on a mobile device, such as a smartphone, a tablet, smartglasses, or any other wearable smart device.
[0029] The patient receives exogenous inputs, such as food items 400. The food item 400 is visually captured in the form of image data by the camera 200. The image data is processed at the processor 120 within the MPC 100. A convolutional neural network architecture (CNN) 125 is implemented to determine a food class for the food item 400 based on the processed image data. Based on the determined food class, the nutrient database 130 can be traversed to determine the macronutrient content of the food based on the food class. The dosage calculator 140 can determine a dosage of insulin to administer to a subject based on the macronutrient content. A command can be sent to the pump 300 to administer the dosage of insulin to the subject. The MPC 100 is effectively able to control a delivery device, such as the pump 300, to deliver medication to the patient to control blood glucose based on image processing techniques described herein.
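To make the flow above concrete, the following is a minimal, hypothetical sketch of the image-to-bolus pipeline (camera image, CNN class prediction, nutrient lookup, dosage computation, pump command). The names `classify_food`, `NUTRIENT_DB`, and `Pump`, and all numeric values, are illustrative assumptions rather than components of the actual system.

```python
NUTRIENT_DB = {"apple": 14.0, "pasta": 43.0}  # assumed grams of CHO per serving

class Pump:
    """Stand-in for the insulin delivery device (pump 300)."""
    def deliver_bolus(self, units: float) -> None:
        print(f"Command sent: deliver {units:.2f} U")

def classify_food(image):
    """Placeholder for the CNN 125: returns (predicted class, confidence)."""
    return "pasta", 0.93

def closed_loop_step(image, servings: float, carb_ratio: float, pump: Pump) -> None:
    food_class, confidence = classify_food(image)   # CNN class prediction
    grams_cho = NUTRIENT_DB[food_class] * servings  # macronutrient lookup by class
    bolus_units = grams_cho / carb_ratio            # basic meal bolus: M / CR
    pump.deliver_bolus(bolus_units)                 # command sent to the pump

closed_loop_step(image=None, servings=1.5, carb_ratio=10.0, pump=Pump())
```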
[0030] In some embodiments, the nutrient database 130 can include the United States
Department of Agriculture (USDA) nutrition database. The USDA nutrition database can be queried to obtain the macronutrient content per serving size. The serving size of the food item is assessed by the end-user, using which a feedforward insulin bolus is computed. In some examples herein, carbohydrate (CHO) is used as the macronutrient that influences meal bolusing for testing purposes.
[0031] Furthermore, while the present disclosure is described with reference to incorporating a deep learning assisted feedforward-feedback control algorithm in AP systems, the disclosed algorithm is applicable to other insulin bolusing systems as well. The present disclosure also discloses processes to safely administer the feedforward bolus integrated with the deep neural network by employing confidence metrics and partial meal bolusing.
[0032] Although not indicated above with respect to FIG. 1, a feedback controller can also be implemented in conjunction with a deep learning based image recognition algorithm. Incorporation of the feedback controller improves glucose regulation via in-silico testing while ensuring additional safety against errors induced by the deep network or errors in serving size estimation. The present disclosure also incorporates additional safety constraints using confidence metrics on predictions and purposeful reduction of meal bolus amounts.
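The confidence-gated, partial bolusing safety logic described here can be summarized in a few lines. This is a hedged sketch of the gating rule only; the threshold, fraction, and cap values are assumptions chosen for illustration.

```python
def gated_feedforward_bolus(u_meal: float, confidence: float,
                            beta_min: float = 0.7, rho: float = 0.8,
                            u_max: float = 10.0) -> float:
    """Deliver a fraction rho of the computed meal bolus, capped at u_max,
    only when the network prediction is high-confidence; otherwise deliver
    no feedforward bolus and let the feedback controller act alone."""
    if confidence >= beta_min:
        return rho * min(u_meal, u_max)
    return 0.0

# Example: a 6.8 U computed bolus with 0.93 confidence yields 0.8 * 6.8 = 5.44 U.
print(gated_feedforward_bolus(6.8, 0.93))
```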
[0033] The CNN 125 of the MPC 100 can be represented in discrete-time using transfer functions, nonlinear state-space realizations, or input-output autoregressive structures. The classes of models are represented using the general form:
$$x[k+1] = f(x[k],\, u_u[k],\, u_{\mathrm{meal}}[k]),$$
$$y[k] = h(x[k],\, \mathrm{CGM}[k]),$$
where $k \in \mathbb{N}$ denotes the time index, $x[k] \in \mathbb{R}^{n_x}$ denotes the system state, the scalar $y[k] \in \mathbb{R}$ denotes an estimate of the patient's true glucose value, and $\mathrm{CGM}[k]$ denotes a person's blood glucose (BG) concentration, estimated by a continuous glucose monitor (CGM) sensor. The scalar $u_u[k] \in \mathbb{R}$ denotes the insulin infusion rate in units per $\tau$ minutes (U/$\tau$ min) derived based solely on feedback, where $\tau$ is the sample period of the system. This feedback control action has two components: $u_u[k] = u[k] + u^*[k]$, where $u^*[k]$ denotes the subject-specific, time-varying basal insulin infusion rate in U/$\tau$ min, and $u[k]$ is the controller action in U/$\tau$ min. The derivation of the control action $u[k]$ is not rigidly described in this work to encompass multiple control architectures, which may include different safety layers in conjunction with peripheral systems like state estimators.
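For illustration, one step of the general discrete-time form above can be simulated with a toy linear stand-in for $f$ and $h$. The matrices, states, and blending in the sketch below are arbitrary assumptions used only to show the interfaces among the state $x[k]$, the feedback action $u_u[k] = u[k] + u^*[k]$, and the meal input; they are not the physiological model of this disclosure.

```python
import numpy as np

A = np.array([[0.95, 0.05],
              [0.00, 0.90]])         # assumed state-transition matrix
B = np.array([-0.10, 0.02])          # assumed insulin-input vector

def f(x, u_u, u_meal):
    """Toy linear stand-in for x[k+1] = f(x[k], u_u[k], u_meal[k])."""
    return A @ x + B * (u_u + u_meal)

def h(x, cgm):
    """Toy stand-in for y[k] = h(x[k], CGM[k]): blend model state and CGM."""
    return 0.5 * x[0] + 0.5 * cgm

x = np.array([120.0, 0.0])           # [glucose-related state, insulin-on-board]
u_basal, u_ctrl, u_meal = 0.5, 0.2, 0.0
x_next = f(x, u_basal + u_ctrl, u_meal)   # u_u[k] = u[k] + u*[k]
y_est = h(x, cgm=118.0)
print(x_next, y_est)
```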
[0034] People with type 1 diabetes mellitus (“T1DM”), irrespective of whether or not they are AP users, are required to provide a feedforward insulin bolus Wmeai[&] proportional to the grams of carbohydrate (gCHO) content of the meal they are about to consume, denoted M E (0, ¥) gCHO. This is known as‘meal announcement’, and the insulin bolus designed to compensate for the meal is known as a‘meal bolus’. Let CR[&] e (0, ¥) gCHO/U denote a personalized carbohydrate ratio and let k* denote the most recent time instant when a CGM measurement was transmitted successfully to the control module. Thus, the meal bolus Wmeai is calculated using the formula:
u_meal[k] = B_0[k] + B_1[k],   if y_CGM ≥ 140 mg/dL and (k − k*)τ < 20 min,
u_meal[k] = 0.8 B_0[k],        otherwise,     (1)

where

B_0[k] = M / CR[k]

denotes a basic bolus size based solely on the meal size and the carbohydrate sensitivity of the individual expressed through the carbohydrate ratio CR, and B_1[k] is a correction bolus computed using a personalized correction factor CF derived empirically by clinicians. The correction bolus is added to the basic bolus B_0 when glucose exceeds 140 mg/dL. The min{·, ·} operation ensures that the correction bolus never exceeds 2 U of insulin.
[0035] When the blood glucose falls below 140 mg/dL, or if no CGM measurement has been successfully transmitted to the controller for 20 min, the basic bolus is reduced by 20% for safety, and a correction bolus is not delivered. Note that when a meal bolus is provided (that is, u_meal > 0), the feedback bolus u_u is set to zero at that time instant. Although proper use of the meal bolus has been demonstrated to have significant clinical advantages in glucose regulation, the task of carbohydrate estimation associated with computing meal bolus sizes is challenging and burdensome to the user. To alleviate this problem, a learning-based assistive technology for meal estimation in AP systems is employed herein.
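By way of a non-limiting illustration, the meal-bolus rule of Eq. (1) and the safety reductions above can be sketched as follows. The names are illustrative, and the explicit correction-bolus expression (y_cgm − y_thresh)/CF is an assumed common form, since the text above specifies only that the correction bolus uses CF and is capped at 2 U.

```python
# Sketch of the meal-bolus rule of Eq. (1). Names are illustrative; the
# correction-bolus expression (y_cgm - y_thresh) / CF is an assumed common
# form (the text specifies only that it uses CF and is capped at 2 U).

def meal_bolus(meal_gCHO, CR, CF, y_cgm, minutes_since_cgm, y_thresh=140.0):
    """Return u_meal[k] in insulin units for a meal of meal_gCHO grams CHO."""
    B0 = meal_gCHO / CR                            # basic bolus B_0[k] = M / CR[k]
    if y_cgm >= y_thresh and minutes_since_cgm < 20:
        B1 = min((y_cgm - y_thresh) / CF, 2.0)     # correction bolus, capped at 2 U
        return B0 + B1
    return 0.8 * B0                                # 20% safety reduction otherwise
```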
[0036] FIG. 2 is an exemplary flow chart illustrating a process associated with the CNN 125 (of FIG. 1) for multi-class image recognition of food types, in accordance with an embodiment of the disclosure. Major advantages of the CNN topology over other feedforward neural networks include: computation reduction via sharing of weights and regular sub-sampling throughout the hidden layers, and automatic feature extraction via convolution kernels. A set of input images is provided to the network for training. Each image I enters the network in raw pixel form. The raw pixel form may not undergo prior feature extraction or descriptor selection, but there may be resizing or linear transformations in the pre-processing phase. The raw pixel form is represented as a tensor A^0 of dimension N_h × N_w × 3, where N_h, N_w ∈ ℕ are the image height and width, respectively. The color space is described by three channels (such as red, green, blue for the RGB space).
[0037] The tensor A^0 is filtered by N_d ∈ ℕ fixed-size convolution kernels K^1, written as a tensor of size H × W × 3 × N_d, in a so-called 'convolution layer' at 230. Feature maps containing important image characterization information such as edges and textures are extracted by performing the convolution operation A^0 ⊗ K^1. The output of the first convolution layer will be a tensor of size (N_h − H + 1) × (N_w − W + 1) × N_d.
[0038] For clarity, the exact dimensions of the input to the ℓth layer are dispensed with, and the input is written A^ℓ ∈ ℝ^(N_h × N_w × Ñ_d). Then the general convolution layer operation with a kernel K^ℓ of size H × W × Ñ_d × N_d can be written as

Y^ℓ[i, j, d] = Σ_{i′=1..H} Σ_{j′=1..W} Σ_{d′=1..Ñ_d} K^ℓ[i′, j′, d′, d] · A^ℓ[i + i′ − 1, j + j′ − 1, d′]

for any 1 ≤ i ≤ N_h − H + 1, 1 ≤ j ≤ N_w − W + 1, and 1 ≤ d ≤ N_d.
[0039] In order to introduce nonlinear effects to the outputs of these convolution layers without implementing a computationally intensive nonlinearity, the output of a convolution layer is generally passed through a rectified linear unit (ReLU) layer, that is, A^(ℓ+1) = max{0, Y^ℓ}, where the maximum is taken point-wise. The convolution layer alters the third dimension of its input tensor, but the ReLU unit does not.
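By way of a non-limiting illustration, the convolution sum and point-wise ReLU described above can be rendered directly (and inefficiently) as follows.

```python
# Direct NumPy rendering of the convolution-layer sum and ReLU described above
# (a sketch for clarity, not an efficient implementation).

import numpy as np

def conv_layer(A, K):
    """A: (Nh, Nw, Cin) input tensor;  K: (H, W, Cin, Nd) kernel tensor.
    Returns Y of shape (Nh - H + 1, Nw - W + 1, Nd)."""
    Nh, Nw, Cin = A.shape
    H, W, _, Nd = K.shape
    Y = np.zeros((Nh - H + 1, Nw - W + 1, Nd))
    for i in range(Nh - H + 1):
        for j in range(Nw - W + 1):
            for d in range(Nd):
                # sum over the H x W x Cin subregion weighted by the d-th kernel
                Y[i, j, d] = np.sum(A[i:i+H, j:j+W, :] * K[:, :, :, d])
    return Y

def relu(Y):
    """Point-wise rectified linear unit: A_(l+1) = max(0, Y_l)."""
    return np.maximum(0.0, Y)
```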
[0040] Feature maps constructed in the convolution layers are sub-sampled into lower dimensions via 'pooling layers' to reduce the overall storage and processing requirements of the CNN. Common pooling methods include average pooling or max pooling. In average/max pooling, subregions of size H × W in each of the N_d channels are represented by the average/maximum of the elements in the corresponding subregions. Assuming a single stride, this can be illustrated as:

Y^ℓ[i, j, d] = (1/(H·W)) Σ_{i′=1..H} Σ_{j′=1..W} A^ℓ[i + i′ − 1, j + j′ − 1, d]

for average pooling, and

Y^ℓ[i, j, d] = max_{1 ≤ i′ ≤ H, 1 ≤ j′ ≤ W} A^ℓ[i + i′ − 1, j + j′ − 1, d]

for max pooling, where 1 ≤ i ≤ N_h − H + 1, 1 ≤ j ≤ N_w − W + 1, and 1 ≤ d ≤ N_d. Note that pooling results in smaller height and width of the input tensor but does not change its depth.
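By way of a non-limiting illustration, stride-1 average and max pooling over H × W subregions can be sketched as follows; the depth of the tensor is unchanged, as noted above.

```python
# NumPy sketch of stride-1 average and max pooling over H x W subregions,
# matching the description above (channel depth is unchanged).

import numpy as np

def pool(A, H, W, mode="max"):
    """A: (Nh, Nw, Nd) tensor; returns (Nh - H + 1, Nw - W + 1, Nd)."""
    Nh, Nw, Nd = A.shape
    out = np.zeros((Nh - H + 1, Nw - W + 1, Nd))
    for i in range(Nh - H + 1):
        for j in range(Nw - W + 1):
            patch = A[i:i+H, j:j+W, :]                    # H x W x Nd subregion
            out[i, j, :] = patch.max(axis=(0, 1)) if mode == "max" \
                           else patch.mean(axis=(0, 1))
    return out
```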
[0041] In a deep CNN, it is common to have multiple convolution and pooling layers for extracting both coarse and fine features automatically. For example, in FIG. 1, a generic number L of hidden layers is assumed. A CNN generally culminates in a 'fully-connected' layer. As the name suggests, the output of a fully-connected layer is a linear combination of all components of the output of the previous hidden layer, similar to a classical multi-layer perceptron. This can be written as follows:

c_k = Σ_j ω_jk [A^L]_j + b_k,   k = 1, …, n,

where n is the dimension of the fully-connected layer, ω_jk ∈ ℝ are the connection weights, and b_k ∈ ℝ is a bias term.
[0042] Following a fully-connected layer, activation functions f(·) (such as a softmax or sigmoid function) are used to determine the class to which the input image belongs. Concretely, the confidence score for the jth class prediction can be represented mathematically as

p_j(x) = f_j(c),   (2)

where C denotes the number of classes. The prediction of the neural network will be the class with the highest confidence score, that is,

P = argmax_{0 ≤ j ≤ C−1} p_j,

with the corresponding confidence score

β = max_{0 ≤ j ≤ C−1} p_j.   (3)
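By way of a non-limiting illustration, the fully-connected layer, the softmax activation, and the prediction and confidence score of (2)-(3) can be sketched as follows.

```python
# Sketch of the fully-connected layer, softmax activation, and the
# prediction / confidence-score computation of Eqs. (2)-(3).

import numpy as np

def fully_connected(a, weights, bias):
    """c_k = sum_j w_jk * a_j + b_k for a flattened previous-layer output a."""
    return a @ weights + bias          # weights: (len(a), n), bias: (n,)

def softmax(c):
    e = np.exp(c - c.max())            # shift for numerical stability
    return e / e.sum()

def predict(a, weights, bias):
    p = softmax(fully_connected(a, weights, bias))
    j_hat = int(np.argmax(p))          # predicted class P
    beta = float(p[j_hat])             # associated confidence score (Eq. (3))
    return j_hat, beta
```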
The weights of the neural network ω_jk are determined via backpropagation using gradient descent algorithms that minimize objective functions of the form

J = L(p, y) + λ R(ω),

where L is a loss function that penalizes the error between the prediction p ∈ ℝ^C, whose jth component is given in (2), and the correct label vector y ∈ ℝ^C. The positive scalar λ denotes a regularization weight, and R is a regularization function. For the optimization of the weights, the widely used stochastic gradient descent update with momentum is employed:

v ← μ v − α ∇_ω J(ω),   ω ← ω + v,

where the scalar μ > 0 is a momentum weight, the scalar α > 0 is a learning rate, and ∇_ω is the gradient computed with respect to ω.
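By way of a non-limiting illustration, one momentum-form stochastic gradient descent update of the weights can be sketched as follows; mu and alpha correspond to the momentum weight μ and learning rate α above.

```python
# Sketch of the momentum-form stochastic gradient descent update used for the
# network weights (mu: momentum weight, alpha: learning rate).

def sgd_momentum_step(w, v, grad, mu=0.9, alpha=0.01):
    """One update of weights w with velocity v and gradient grad of J w.r.t. w."""
    v = mu * v - alpha * grad          # accumulate momentum along the negative gradient
    w = w + v                          # apply update
    return w, v
```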
[0043] Typically, deep neural networks are characterized by millions of parameters that require very large data sets for training. This translates to enormous computational expenditure, with large amounts of time devoted to network training. Transfer learning alleviates this burden: instead of searching for the parameters of the whole deep network, one needs only to fine-tune a small subset of the original network parameters to the new problem.
[0044] Referring momentarily to FIG. 1, the nutrient database 130 can be configured with 101,000 images of C = 101 classes of the most commonly uploaded food items. The dataset is divided into 750 × 101 = 75,750 images used for training, and 250 × 101 = 25,250 images used for testing the image recognition system. The dataset is quite challenging to develop classifiers for, due to the inherent diversity in image resolution, size, illumination, and noise, in conjunction with varied perspectives, obstruction of meals, and occasional mislabeling.
[0045] All image labels are encoded in a one-hot fashion, and images are resized to 299 × 299 × 3 tensors (that is, N_w = N_h = 299), denoting RGB pixels. Image augmentation was performed in order to reduce overfitting of the model, by translating image pixels by up to 20% of their respective heights and widths, flipping images horizontally, shifting color channels by up to 30%, and zooming out by up to 20%. An open source deep network is used that leverages transfer learning on an Inception V3 model. With transfer learning, most of the Inception V3 architecture is left frozen (the weights are pre-fixed) and only the top layer is allowed to have free weights that are trained for classifying the nutrient database 130. In the disclosed CNN 125, a 2D 8 × 8 average pooling layer is added with a dropout rate of 0.4 and an L2 regularization (that is, R(ω) = ‖ω‖₂²) with parameter λ = 0.0005, and a softmax activation function is used for assigning confidence scores between 0 and 1. The network is trained using batch-wise stochastic gradient descent with a batch size of 64, momentum μ = 0.9, and a learning rate scheduled to decrease from α = 0.01 to α = 3.2 × 10⁻⁶. Training stopped at 29 epochs, after which the categorical cross-entropy loss function L(p, y) = −yᵀ log(p) stopped decreasing. The trained model is stored as an hdf5 file, and occupies 175.1 MB.
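By way of a non-limiting illustration, a transfer-learning setup of the kind described above can be sketched with the tf.keras InceptionV3 model as follows. The frozen base, the 8 × 8 average pooling, the 0.4 dropout, the regularized 101-class softmax layer, and the momentum SGD optimizer mirror the reported configuration; the exact open-source code, layer arrangement, and learning-rate schedule of the disclosed CNN 125 are not reproduced here.

```python
# Hedged sketch of the transfer-learning setup described above. The base
# InceptionV3 network is frozen; only the small top classifier is trained.

import tensorflow as tf

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False                                   # freeze pre-trained weights

model = tf.keras.Sequential([
    base,
    tf.keras.layers.AveragePooling2D(pool_size=(8, 8)),  # 2D 8x8 average pooling
    tf.keras.layers.Dropout(0.4),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(
        101, activation="softmax",                       # confidence scores in [0, 1]
        kernel_regularizer=tf.keras.regularizers.l2(5e-4)),
])

model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
```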
[0046] FIG. 3 is an exemplary flow chart illustrating a process for feedforward-feedback control, in accordance with an embodiment of the disclosure. Upon training the neural network, a safe feedforward insulin bolusing strategy is formulated. A confidence metric β is assigned to the network prediction. The confidence score is the output of the final softmax activation function as in (3); hence β ∈ [0, 1]. The user chooses a threshold 0 ≪ β_min < 1 to label a prediction as high-confidence only if β ≥ β_min. If a prediction is deemed high-confidence, then the nutrient database 130 of FIG. 1 is queried to obtain the macronutrient (specifically, carbohydrate, fat, and protein) content per serving size of the predicted image. With a user-estimated serving size amount, the total gCHO in the meal image is estimated, upon which a bolus of the following magnitude is provided:

û_meal[k] = min{ p · u_meal[k], u_max },   (4)

where u_meal[k] has been defined in (1), p ∈ (0, 1] is the fraction of the computed u_meal that is allowed to be bolused, and u_max is a strict upper bound on the magnitude of the deep network assisted insulin feedforward bolus or the user-estimated bolus. Although the fat/protein content is not discussed herein, those macronutrients can also be used to further customize the magnitude and shape of the meal bolus.
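By way of a non-limiting illustration, the confidence-gated, attenuated feedforward bolus of (4) can be sketched as follows; the default values of beta_min, p, and u_max simply echo values used elsewhere in this disclosure.

```python
# Sketch of the confidence-gated, partially attenuated feedforward bolus of
# Eq. (4); beta_min, p, and u_max are the user-chosen safety parameters.

def safe_feedforward_bolus(u_meal, beta, beta_min=0.95, p=1.0, u_max=4.0):
    """Return the feedforward bolus actually delivered for a predicted meal image."""
    if beta < beta_min:                 # low-confidence prediction: no feedforward bolus
        return 0.0
    return min(p * u_meal, u_max)       # attenuated and hard-capped bolus
```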
[0047] The rationale behind partially bolusing (when p ∈ (0, 1)) is as follows. Even if β_min is chosen very close to one, convolutional neural networks can (albeit rarely) be tricked into incorrect predictions with very high confidence by adversarial noise. Thus, it becomes necessary to limit the bolus to a magnitude from which the feedback controller can still recover, preventing feedforward-induced hypoglycemia.
[0048] FIG. 4 is a graphical illustration of a confusion matrix of the CNN tested on different food images, in accordance with an embodiment of the disclosure. Using the CNN 125 of FIG. 1, the trained neural network exhibits a top-1 accuracy of 81.65%. Initial pre-loading of the network required 96.94 s. Subsequently, the average and standard deviation of the prediction time was 0.57 ± 0.02 s for each image. The diagonal elements illustrate that the classifier mostly identified the food categories in the test images correctly.
[0049] Since the feedforward bolus is provided only if the confidence score associated with the neural network prediction is above β_min = 0.95, it is important to investigate how often the image prediction is wrong with high confidence. To this end, the distribution of confidence scores is plotted for all test images in FIG. 5. Of 25,250 testing set images, 14,275 images (≈ 56%) have been classified correctly with confidence β ≥ β_min, and 342 images (≈ 1%) have been classified incorrectly with β ≥ β_min. These high-confidence mistaken predictions mostly arise due to poor illumination in testing images or incorrect labeling of food items in the labels of the testing set. In an overwhelming majority of the cases where the deep network made a high-confidence mistake, it recognized the food item as another having a similar macronutrient ratio.
[0050] For example, the most common high-confidence mistake confuses the class steak with the class filet mignon, both of which are low-CHO, high-protein foods. Another example is that the network is often mistaken, with high confidence, amongst donuts, beignets, and apple pies, or ice cream and frozen yogurt, all of which are high-CHO food types and look quite similar under odd perspective or lighting transformations. An example of a more dangerous mistake (which occurred 2 times out of 342) is when the network misclassified miso soup, which contains very little CHO, as ramen, which is a high-CHO food. The safety constraints applied when bolusing for the identified CHO content, in conjunction with the pure feedback controller component of the deep learning assisted AP, result in safe control in spite of (rare) high-confidence classification errors.
[0051] The in-silico patient estimates the weight of the food (in grams). Subsequently, the nutrient database 130 is queried for the macronutrient content of the predicted food item. The in-silico simulations are performed by estimating the CHO content M of the meal in the image, which is used to compute a bolus û_meal as in (4). In these simulations, u_max = 4 U.
[0052] FIG. 6 illustrates a clinical protocol for testing the disclosed process, in accordance with an embodiment of the disclosure. In order to test the performance of the proposed algorithm, 1000 images are randomly selected from among the test images of the nutrient database 130. The randomly selected images are divided into 100 images per in-silico patient. For each patient, 10 scenarios are generated with unique but random CGM noise seeds, where each scenario includes 10 meals. The serving sizes are pre-selected to ensure that the carbohydrate content for any image is between 15 and 110 g. When a meal is announced in the metabolic simulator, a unique image is randomly selected and the deep neural network prediction P and confidence β associated with the image are computed, as in (3). If the confidence is higher than β_min, u_meal is bolused as in (4); otherwise u_meal is set to zero, and the controller operates purely on feedback. However, the true label of the image and the corresponding serving size information reported by the user are maintained for comparison with controllers operating under fully announced meals.
[0053] Implementing the claimed process, 741 images out of 1000 were identified with high confidence, of which 719 were predicted to be of the correct food category, and only 1 image resulted in a CHO estimation error of over 15 g.
[0054] FIG. 7 is a graphical illustration of a distribution of neural network predictions for each meal, in accordance with an embodiment of the disclosure. The overall distribution of predictions for 100 scenarios (10 patients, 10 images each) is shown therein. Per meal, the percentage of high-confidence correct classifications was 71.90 ± 3.0%, with low-confidence predictions comprising 25.90 ± 4.15%. High-confidence errors were small, at 2.20 ± 1.55%. The feedback control algorithm used includes zone boundaries set at 80 mg/dL and 120 mg/dL during the day, with the lower boundary raised to 90 mg/dL at night.
[0055] FIG. 8 is a graphical comparison of deep learning assisted AP control algorithms, in accordance with an embodiment of the disclosure. The results of the comparative 65-hour study are illustrated therein. The mean blood glucose and insulin inputs are shown, and the proposed algorithm is implemented with full (p = 1) deep-learning assisted feedforward boluses. While full meal announcement (dashed line) exhibits the best mean glucose on the plot, the algorithm with p = 1 generates a similar glucose trajectory. However, the trajectory is typically higher than with fully announced meals because some performance is sacrificed for safety and autonomy. Since the constraint β ≥ β_min is not satisfied for every image, some images do not result in assistive meal announcement and the AP runs solely on glucose feedback. Thus, the mean insulin bolused with the proposed algorithm is generally lower than with the AP with meal announcement. This is illustrated in the lower plot of FIG. 8. Referring momentarily to FIG. 7, the glucose trajectories around the 6th and 7th meals closely resemble the trajectory with announced meals since the number of erroneous/low-confidence classifications is low. Conversely, the 4th and 9th meals exhibit larger deviations between the glucose dynamics because a higher percentage of meals go unannounced as a result of more low-confidence predictions.
TABLE I: PERFORMANCE (MEAN ± STD. DEV.) OF THE PROPOSED METHOD AGAINST CONTROLLERS WITH AND WITHOUT MEAL ANNOUNCEMENT; STATISTICAL SIGNIFICANCE (p < 0.05) AGAINST METRICS DERIVED WITH MEAL ANNOUNCEMENT IS DENOTED BY '†', AND WITH UNANNOUNCED MEALS BY '*'.

[0056] Comparisons amongst the glucose metrics are also illustrated in Table I with two extra cases: when p = 0.5 and when the meals are fully unannounced. As expected, the metrics for the proposed AP are in between the other two control algorithms. However, it is interesting to note that the difference in percent time below 70 mg/dL is not statistically significant based on a (non-parametric) Wilcoxon rank-sum test, indicating that the proposed approach, in spite of roughly 1% mistaken images, does not compromise the health of the patient by inducing hypoglycemia. Another noteworthy observation is that while most of the other metrics are significantly better than the controller with unannounced meals and worse than the controller with announced meals, the percent time in the 70-180 mg/dL range does not show this statistical property, in spite of an improvement in percent time in range. This is because ≈ 26% of the images were not classified with high confidence, implying that many images did not contribute to meal bolusing. Since a majority of such images contained high carbohydrate content, the statistics are skewed towards higher glucose concentrations, resulting in trajectories reminiscent of unannounced meals rather than announced meals. However, raising p to one results in making the time above 250 mg/dL statistically indistinguishable from meal announcement.
[0057] It is common for AP users to incur large errors in meal size estimation. Since the meal bolus magnitude is proportional to the meal size, incorrect estimation of meal sizes may indirectly lead to self-induced hypoglycemia (due to overestimation) or prolonged time in hyperglycemia (due to underestimation). To test the safety of the controller, multiple simulations are performed with meal size estimation errors ranging from -40% to +40% in steps of 10% for a scenario with 10 meals ranging from 20 gCHO to 80 gCHO, using the same protocol shown in FIG. 6.
[0058] FIG. 9 shows box plots illustrating an exemplary simulation of the percentage of time in different glycemic ranges using the proposed algorithm with meal size estimation errors, in accordance with an embodiment of the disclosure. With the feedback component of the proposed algorithm acting as a safety layer, small errors (±20%) in the estimation of food quantity are compensated satisfactorily, with time in the 70-180 mg/dL range contained within 90.1 to 90.8%, exhibiting indistinguishable statistical properties (p > 0.05), and without a discernible shift in the median times in hypoglycemia or hyperglycemia. As expected, severe overestimations (>20%) result in heightened time in hypoglycemia due to over-bolusing (0% to 0.7%), but this degradation is not statistically significant because the feedback controller suspends the infusion of insulin. With severe underestimations, since enough compensatory insulin has not been bolused with the meal, the feedback controller makes successive corrective actions; while this increases the time above 180 mg/dL, the change is not significant (9.2% to 10.2%).
[0059] The mechanism of providing corrective feedback control actions in conjunction with deep learning based feedforward assistance maintains safe glycemic outcomes in spite of mistakes in macronutrient estimation. A deep learning assisted AP system for automatic feedforward bolusing is provided herein. The proposed framework has additional safety layers to enable safe and robust operation: namely, a pure feedback controller that monitors and corrects for adverse scenarios that could arise due to misidentification of food images by the deep network or incorrect meal size estimation by the user. While the insulin-glucose dynamics affected by the CHO content of meals are discussed herein, the present disclosure can be applied to the effect of fat and protein to customize the bolus shape and update the bolus magnitude. This will lead to improved quality of care in next-generation decision support systems, while reducing the burden on people with T1DM.
[0060] It should initially be understood that the disclosure herein may be implemented with any type of hardware and/or software, and may be a pre-programmed general purpose computing device. For example, the system may be implemented using a server, a personal computer, a portable computer, a thin client, or any suitable device or devices. The disclosure and/or components thereof may be a single device at a single location, or multiple devices at a single, or multiple, locations that are connected together using any appropriate communication protocols over any communication medium such as electric cable, fiber optic cable, or in a wireless manner.
[0061] It should also be noted that the disclosure is illustrated and discussed herein as having a plurality of modules which perform particular functions. It should be understood that these modules are merely schematically illustrated based on their function for clarity purposes only, and do not necessarily represent specific hardware or software. In this regard, these modules may be hardware and/or software implemented to substantially perform the particular functions discussed. Moreover, the modules may be combined together within the disclosure, or divided into additional modules based on the particular function desired. Thus, the disclosure should not be construed to limit the present invention, but merely be understood to illustrate one example implementation thereof.
[0062] The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In some implementations, a server transmits data (e.g., an HTML page) to a client device (e.g., for purposes of displaying data to and receiving user input from a user interacting with the client device). Data generated at the client device (e.g., a result of the user interaction) can be received from the client device at the server.
[0063] Implementations of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), an inter network (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks).
[0064] Implementations of the subject matter and the operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions, encoded on computer storage medium for execution by, or to control the operation of, data processing apparatus. Alternatively, or in addition, the program instructions can be encoded on an artificially-generated propagated signal, e.g., a machine generated electrical, optical, or electromagnetic signal that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. A computer storage medium can be, or be included in, a computer-readable storage device, a computer-readable storage substrate, a random or serial access memory array or device, or a combination of one or more of them. Moreover, while a computer storage medium is not a propagated signal, a computer storage medium can be a source or destination of computer program instructions encoded in an artificially-generated propagated signal. The computer storage medium can also be, or be included in, one or more separate physical components or media (e.g., multiple CDs, disks, or other storage devices).
[0065] The operations described in this specification can be implemented as operations performed by a“data processing apparatus” on data stored on one or more computer-readable storage devices or received from other sources.
[0066] The term "data processing apparatus" encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, a system on a chip, or multiple ones, or combinations, of the foregoing. The apparatus can include special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). The apparatus can also include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, a cross-platform runtime environment, a virtual machine, or a combination of one or more of them. The apparatus and execution environment can realize various different computing model infrastructures, such as web services, distributed computing and grid computing infrastructures.
[0067] A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, object, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
[0068] The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform actions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
[0069] Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for performing actions in accordance with instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device (e.g., a universal serial bus (USB) flash drive), to name just a few. Devices suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
[0070] The various methods and techniques described above provide a number of ways to carry out the invention. Of course, it is to be understood that not necessarily all objectives or advantages described can be achieved in accordance with any particular embodiment described herein. Thus, for example, those skilled in the art will recognize that the methods can be performed in a manner that achieves or optimizes one advantage or group of advantages as taught herein without necessarily achieving other objectives or advantages as taught or suggested herein. A variety of alternatives are mentioned herein. It is to be understood that some embodiments specifically include one, another, or several features, while others specifically exclude one, another, or several features, while still others mitigate a particular feature by inclusion of one, another, or several advantageous features.
[0071] Furthermore, the skilled artisan will recognize the applicability of various features from different embodiments. Similarly, the various elements, features and steps discussed above, as well as other known equivalents for each such element, feature or step, can be employed in various combinations by one of ordinary skill in this art to perform methods in accordance with the principles described herein. Among the various elements, features, and steps some will be specifically included and others specifically excluded in diverse embodiments.
[0072] Although the application has been disclosed in the context of certain embodiments and examples, it will be understood by those skilled in the art that the embodiments of the application extend beyond the specifically disclosed embodiments to other alternative embodiments and/or uses and modifications and equivalents thereof.
[0073] In some embodiments, the terms "a" and "an" and "the" and similar references used in the context of describing a particular embodiment of the application (especially in the context of certain of the following claims) can be construed to cover both the singular and the plural. The recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range. Unless otherwise indicated herein, each individual value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (for example, "such as") provided with respect to certain embodiments herein is intended merely to better illuminate the application and does not pose a limitation on the scope of the application otherwise claimed. No language in the specification should be construed as indicating any non-claimed element essential to the practice of the application.

[0074] Certain embodiments of this application are described herein. Variations on those embodiments will become apparent to those of ordinary skill in the art upon reading the foregoing description. It is contemplated that skilled artisans can employ such variations as appropriate, and the application can be practiced otherwise than specifically described herein. Accordingly, many embodiments of this application include all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the application unless otherwise indicated herein or otherwise clearly contradicted by context.
[0075] Particular implementations of the subject matter have been described. Other implementations are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results.
[0076] All patents, patent applications, publications of patent applications, and other material, such as articles, books, specifications, publications, documents, things, and/or the like, referenced herein are hereby incorporated herein by this reference in their entirety for all purposes, excepting any prosecution file history associated with same, any of same that is inconsistent with or in conflict with the present document, or any of same that may have a limiting effect as to the broadest scope of the claims now or later associated with the present document. By way of example, should there be any inconsistency or conflict between the description, definition, and/or the use of a term associated with any of the incorporated material and that associated with the present document, the description, definition, and/or the use of the term in the present document shall prevail.
[0077] In closing, it is to be understood that the embodiments of the application disclosed herein are illustrative of the principles of the embodiments of the application. Other modifications that can be employed can be within the scope of the application. Thus, by way of example, but not of limitation, alternative configurations of the embodiments of the application can be utilized in accordance with the teachings herein. Accordingly, embodiments of the present application are not limited to that precisely as shown and described.

Claims

1. A system for the delivery of insulin to a patient, the system comprising:
an image capturing device configured to capture image data associated with an exogenous input;
a processor communicatively coupled to the image capturing device and configured to receive image data associated with the exogenous input, wherein the processor comprises a deep neural network architecture configured to predict a class of the exogenous input;
an insulin delivery device communicatively coupled to the processor; and
a nutrient database communicatively coupled to the processor,
wherein the processor is configured to receive a macronutrient content assessment of the exogenous input from the nutrient database based on the predicted class; and
wherein the processor is further configured to determine a dosage of glucose altering substance to administer, using the macronutrient content assessment, and send a command to the insulin delivery device to administer the dosage of the glucose altering substance.
2. The system for the delivery of insulin of claim 1, wherein the exogenous input comprises a food item to be consumed by the patient.
3. The system for the delivery of insulin of claim 1, wherein the exogenous input comprises a beverage to be consumed by the patient.
4. The system for the delivery of insulin of claim 1, wherein the image capturing device comprises a camera located at a mobile device.
5. The system for the delivery of insulin of claim 1, wherein the nutrient database is located remote to the processor.
6. The system for the delivery of insulin of claim 1, wherein the insulin delivery device comprises a pump surgically connected to the patient.
7. The system for the delivery of insulin of claim 1, wherein the macronutrient content is a measurement of carbohydrate content.
8. The system for the delivery of insulin of claim 7, wherein the dosage of glucose altering substance is proportional to the measurement of the carbohydrate content.
9. The system for the delivery of insulin of claim 8, wherein the macronutrient content is further based on weight of the exogenous input.
10. The system for the delivery of insulin of claim 1, wherein the macronutrient content is a measurement of fat and protein content.
11. The system for the delivery of insulin of claim 10, wherein the dosage of glucose altering substance is proportional to the measurement of the fat and protein content.
12. A method for providing closed loop adaptive glucose controller, the method comprising:
receiving image data from at least one image capturing device, wherein the image data received is associated with an exogenous input;
processing the image data received by a machine learning classifier to predict a class of the exogenous input based on the image data received;
receiving a macronutrient content assessment of the exogenous input from a nutrient database based on the predicted class;
determining a dosage of glucose altering substance to administer using the macronutrient content assessment received; and
sending a command to an insulin delivery device to administer the dosage of the glucose altering substance.
13. The method for providing closed loop adaptive glucose controller of claim 12, wherein the exogenous input comprises a food item to be consumed by a patient.
14. The method for providing closed loop adaptive glucose controller of claim 12, wherein the exogenous input comprises a beverage to be consumed by a patient.
15. The method for providing closed loop adaptive glucose controller of claim 12, wherein the insulin delivery device comprises a pump surgically connected to a patient.
16. The method for providing closed loop adaptive glucose controller of claim 12, wherein the macronutrient content is a measurement of carbohydrate content.
17. The method for providing closed loop adaptive glucose controller of claim 16, wherein the dosage of glucose altering substance is proportional to the measurement of the carbohydrate content.
18. The method for providing closed loop adaptive glucose controller of claim 17, wherein the macronutrient content is further based on weight of the exogenous input.
19. The method for providing closed loop adaptive glucose controller of claim 12, wherein the macronutrient content is a measurement of fat and protein content.
20. The method for providing closed loop adaptive glucose controller of claim 19, wherein the dosage of glucose altering substance is proportional to the measurement of the fat and protein content.
21. The method of claim 12, wherein the machine learning classifier is a convolutional neural network architecture (CNN).
PCT/US2019/037928 2018-06-19 2019-06-19 Deep learning assisted macronutrient estimation for feedforward-feedback control in artificial pancreas systems WO2019246217A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862686991P 2018-06-19 2018-06-19
US62/686,991 2018-06-19

Publications (1)

Publication Number Publication Date
WO2019246217A1 true WO2019246217A1 (en) 2019-12-26

Family

ID=68983987

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/037928 WO2019246217A1 (en) 2018-06-19 2019-06-19 Deep learning assisted macronutrient estimation for feedforward-feedback control in artificial pancreas systems

Country Status (1)

Country Link
WO (1) WO2019246217A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111222480A (en) * 2020-01-13 2020-06-02 佛山科学技术学院 Grape weight online estimation method and detection device based on deep learning
WO2021147524A1 (en) * 2020-01-21 2021-07-29 Medtrum Technologies Inc. A closed-loop artificial pancreas controlled by body movements
US20220061706A1 (en) * 2020-08-26 2022-03-03 Insulet Corporation Techniques for image-based monitoring of blood glucose status

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160163037A1 (en) * 2013-07-02 2016-06-09 Joachim Dehais Estmation of food volume and carbs
US20170249445A1 (en) * 2014-09-12 2017-08-31 Blacktree Fitness Technologies Inc. Portable devices and methods for measuring nutritional intake

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160163037A1 (en) * 2013-07-02 2016-06-09 Joachim Dehais Estmation of food volume and carbs
US20170249445A1 (en) * 2014-09-12 2017-08-31 Blacktree Fitness Technologies Inc. Portable devices and methods for measuring nutritional intake

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHAKRABARTY, A. ET AL.: "Deep Learning Assisted Macronutrient Estimation For Feedforward-Feedback Control In Artificial Pancreas Systems", 2018 ANNUAL AMERICAN CONTROL CONFERENCE (ACC), 27 June 2018 (2018-06-27), pages 3564 - 3570, XP033384566, DOI: 10.23919/ACC.2018.8431790 *
YAN LUO: "Machine Learning of Lifestyle Data for Diabetes", 1 January 2016 (2016-01-01), pages 1 - 109, XP055664687, Retrieved from the Internet <URL:https://ir.lib.uwo.ca/etd/3650> *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111222480A (en) * 2020-01-13 2020-06-02 佛山科学技术学院 Grape weight online estimation method and detection device based on deep learning
CN111222480B (en) * 2020-01-13 2023-05-26 佛山科学技术学院 Online grape weight estimation method and detection device based on deep learning
WO2021147524A1 (en) * 2020-01-21 2021-07-29 Medtrum Technologies Inc. A closed-loop artificial pancreas controlled by body movements
WO2022116602A1 (en) * 2020-01-21 2022-06-09 Medtrum Technologies Inc. Bilaterally driven drug infusion system
US20220061706A1 (en) * 2020-08-26 2022-03-03 Insulet Corporation Techniques for image-based monitoring of blood glucose status

Similar Documents

Publication Publication Date Title
US11574742B2 (en) Diabetes management therapy advisor
Turksoy et al. Multivariable adaptive closed-loop control of an artificial pancreas without meal and activity announcement
US20240017008A1 (en) Lqg artificial pancreas control system and related method
AU2018221048B2 (en) System, method, and computer readable medium for a basal rate profile adaptation algorithm for closed-loop artificial pancreas systems
US20220054748A1 (en) Control model for artificial pancreas
WO2019246217A1 (en) Deep learning assisted macronutrient estimation for feedforward-feedback control in artificial pancreas systems
CA2789630C (en) Systems, devices and methods to deliver biological factors or drugs to a subject
CN103907116A (en) Method, system and computer readable medium for adaptive advisory control of diabetes
Pinsker et al. Evaluation of an artificial pancreas with enhanced model predictive control and a glucose prediction trust index with unannounced exercise
US20220203029A1 (en) System and method for artificial pancreas with multi-stage model predictive control
US20220061706A1 (en) Techniques for image-based monitoring of blood glucose status
EP2603133A1 (en) Method and system for improving glycemic control
CN110753967A (en) Insulin titration algorithm based on patient profile
US20220257199A1 (en) System and method for online domain adaptation of models for hypoglycemia prediction in type 1 diabetes
Chakrabarty et al. Deep learning assisted macronutrient estimation for feedforward-feedback control in artificial pancreas systems
EP4228498A1 (en) Method and system of closed loop control improving glycemic response following an unannounced source of glycemic fluctuation
WO2022235618A1 (en) Methods, systems, and apparatuses for preventing diabetic events
US20240233938A1 (en) Decision support and treatment administration systems
EP3996100A1 (en) Diabetes therapy based on determination of food items
Hettiarachchi et al. A reinforcement learning based system for blood glucose control without carbohydrate estimation in type 1 diabetes: In silico validation
US20220392609A1 (en) Personalized food recommendations based on sensed biomarker data
US20210151141A1 (en) Joint state estimation prediction that evaluates differences in predicted vs. corresponding received data
US20210162127A1 (en) Adaptive zone model predictive control with a glucose and velocity dependent dynamic cost function for an artificial pancreas
Boiroux Model predictive control algorithms for pen and pump insulin administration
Hettiarachchi et al. G2P2C—A modular reinforcement learning algorithm for glucose control by glucose prediction and planning in Type 1 Diabetes

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19821720

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19821720

Country of ref document: EP

Kind code of ref document: A1