WO2019104411A1 - System and method for voice-enabled disease management - Google Patents


Info

Publication number
WO2019104411A1
Authority
WO
WIPO (PCT)
Prior art keywords
voice
patient
disease management
data
health
Prior art date
Application number
PCT/CA2018/000180
Other languages
French (fr)
Inventor
Timon Ledain
Christian Nadeau
Xavier LARUE
Edward W. Sarfeld
David Andrew CAMPBELL
Original Assignee
Macadamian Technologies Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Macadamian Technologies Inc. filed Critical Macadamian Technologies Inc.
Publication of WO2019104411A1 publication Critical patent/WO2019104411A1/en


Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/20ICT specially adapted for the handling or processing of patient-related medical or healthcare data for electronic clinical trials or questionnaires
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/60ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/67ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment

Definitions

  • the present disclosure relates to systems and methods for managing data, including but not limited to systems and methods for disease management.
  • Devices can be used to assist a patient in tracking and achieving specific goals and objectives to mitigate effects of a disease, chronic illness, or medical condition with which the patient has been diagnosed.
  • a fitness tracker or other connected device can be used to measure a particular parameter with respect to improving patient health and/or mitigating the effects of the disease.
  • Figure 1 is a block diagram of a system for voice-enabled disease management according to an embodiment of the present disclosure.
  • Figure 2 is a flowchart illustrating steps in a method for voice-enabled disease management according to an embodiment of the present disclosure.
  • Figure 3 is a block diagram of components of a system for voice-enabled disease management according to an example embodiment of the present disclosure.
  • Figure 4 is a flowchart illustrating steps in another method for voice-enabled disease management according to an embodiment of the present disclosure.
  • Figure 5 is a block diagram of components of a system for voice-enabled disease management according to another example embodiment of the present disclosure.
  • Figure 6 is a flowchart illustrating steps in a further method for voice-enabled disease management according to an embodiment of the present disclosure.
  • Figure 7 is a block diagram of a system for voice-enabled disease management according to another embodiment of the present disclosure.
  • a method and system are provided for voice-enabled disease management.
  • the system provides a network module in communication with a mobile device module, both of which communicate with a common database to enable multi-modal (voice, text, etc.) disease management for a patient.
  • the present disclosure provides a system for voice-enabled disease management comprising: a network disease management module comprising a voice service application configured to run on a network device to provide voice-based disease management services to a patient in a voice interaction mode; a mobile disease management module comprising a mobile service application configured to run on a mobile device to provide graphical or text-based disease management services to the patient in a mobile interaction mode, the mobile disease management module being in communication with the network disease management module; and a disease management database in communication with the voice service application and the mobile service application, the disease management database configured to provide a common set of data accessible by the voice service application and the mobile service application such that the voice-based disease management services provided by the voice service application in the voice interaction mode are integrated with the visual or text-based disease management services provided by the mobile disease management application in the mobile interaction mode.
  • the voice-based disease management services provided by the voice service application and the graphical or text-based disease management services provided by the mobile service application are both based on the common set of data provided by the disease management database.
  • the voice service application is configured by a first non-transitory memory storing statements and instructions for execution by a first processor to: receive voice data associated with a voice command generated by a patient; identify spoken patient health data in the received voice command; obtain stored health data related to the spoken patient health data; and generate voice feedback data for providing context-sensitive voice feedback to the patient based on the obtained stored health data and on the identified spoken patient health data, the context-sensitive voice feedback suggesting a course of action or requesting additional information based on the received voice command.
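The four configured steps above (receive, identify, obtain, generate feedback) can be sketched roughly as follows. This is purely an illustrative sketch, not part of the disclosure: the term list, function names, and feedback wording are all invented assumptions.

```python
# Illustrative sketch of the voice service application's processing steps.
# HEALTH_TERMS and all names here are assumptions, not from the disclosure.
HEALTH_TERMS = ("blood glucose", "blood pressure", "weight")

def identify_spoken_health_data(voice_text: str) -> list:
    """Identify spoken patient health data in the transcribed voice command."""
    lowered = voice_text.lower()
    return [term for term in HEALTH_TERMS if term in lowered]

def obtain_stored_health_data(terms: list, database: dict) -> dict:
    """Obtain stored health data related to the spoken patient health data."""
    return {term: database[term] for term in terms if term in database}

def generate_voice_feedback(terms: list, stored: dict) -> str:
    """Generate context-sensitive feedback: suggest a course of action or
    request additional information, depending on what was found."""
    if not terms:
        return "Could you tell me which measurement you are asking about?"
    term = terms[0]
    if term in stored:
        return f"Your {term} target is {stored[term]}."
    return f"I have no stored data for {term}; please record a reading first."
```

For example, a transcribed command mentioning "blood glucose" would be matched against the stored data and answered with the stored target, while an unrecognized command would prompt the patient for more information.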
  • the mobile service application is configured by a second non-transitory memory storing statements and instructions for execution by a second processor to: receive gathered health data from one or more connected health devices; and provide the gathered health data to the network disease management module.
  • the mobile service application is configured by a second non-transitory memory storing statements and instructions for execution by a second processor to: receive, from the network disease management module, disease-related patient health data associated with the context-sensitive voice feedback; and cause display of the disease-related patient health data to the patient.
  • the network disease management module is configured to: obtain a first data set associated with the voice command, and provide the first data set to the disease management database, the voice command received at the network disease management module; and obtain a second data set associated with the gathered health data, and provide the second data set to the disease management database, the gathered health data received at the mobile disease management module.
  • the disease management database, the network disease management module, and the mobile disease management module cooperate to: provide a first disease management function using the voice-based disease management service in the voice interaction mode; and provide the same first disease management function using the mobile disease management service in the mobile interaction mode.
  • the disease management database, the network disease management module, and the mobile disease management module cooperate to: provide a second disease management function using the mobile disease management service in the mobile interaction mode to supplement a first disease management function provided using the voice-based disease management service in the voice interaction mode.
  • the disease management database, the network disease management module, and the mobile disease management module cooperate to: provide a notification via both the voice interaction mode and the mobile interaction mode; and clear the notification from one mode of communication in response to an indication that the notification has been acknowledged via the other mode of communication.
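Because both modes read the same notification record from the shared database, an acknowledgement in either mode clears the notification from the other. A minimal sketch, with invented class and field names, of that shared-record behavior:

```python
# Minimal sketch of a notification stored once and surfaced in both modes.
# Class and field names are illustrative assumptions, not from the disclosure.
class NotificationStore:
    def __init__(self):
        # notification_id -> {"text": ..., "acknowledged": bool}
        self._notifications = {}

    def add(self, notification_id, text):
        self._notifications[notification_id] = {"text": text, "acknowledged": False}

    def acknowledge(self, notification_id):
        # Called from either the voice interaction mode or the mobile mode.
        self._notifications[notification_id]["acknowledged"] = True

    def pending(self):
        # Both modes list only unacknowledged notifications, so acknowledging
        # in one mode automatically clears the notification from the other.
        return [n["text"] for n in self._notifications.values()
                if not n["acknowledged"]]
```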
  • the voice service application is configured to: perform natural language processing on voice data associated with a received voice command to identify spoken patient health data in the received voice command; and create recorded patient health portal data, based on the received spoken patient health data, in a first format similar to a second format of the stored health data in the disease management database.
  • the network disease management module comprises: an audio prompt generator configured to create an actionable voice prompt for delivery to the patient from the voice service application via a voice service platform, the actionable voice prompt being created based on the gathered health data received from the one or more connected health devices.
  • the network disease management module comprises: a disease parameter tracker configured to communicate with one or more connected health devices to receive gathered health data; a pattern tracker, in communication with the disease parameter tracker, configured to generate one or more scores based on how far a patient's measured disease parameter is from a clinician-set target disease parameter; a prompt content generator, in communication with the pattern tracker, configured to generate patient-specific content identifying one or more targeted disease parameters that require attention to improve the patient's health condition; and an audio prompt generator, in communication with the prompt content generator, configured to convert the generated patient-specific content from the prompt content generator into actionable voice prompts for delivery to the patient via a voice service platform.
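The pattern tracker's scoring idea can be illustrated with a sketch. The disclosure does not specify a scoring formula, so the relative-distance normalisation and the threshold below are assumptions for illustration only:

```python
# Hedged sketch of the pattern tracker's score: how far a measured disease
# parameter is from a clinician-set target. The normalisation scheme and
# threshold are assumptions; the disclosure gives no specific formula.
def parameter_score(measured: float, target: float) -> float:
    """Return 0.0 when on target, growing with relative distance from target."""
    if target == 0:
        raise ValueError("target must be non-zero for relative scoring")
    return abs(measured - target) / abs(target)

def needs_attention(measured: float, target: float, threshold: float = 0.2) -> bool:
    """Flag a parameter whose score exceeds a tolerance threshold, so the
    prompt content generator can target it in patient-specific content."""
    return parameter_score(measured, target) > threshold
```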
  • the voice service application is configured to perform natural language processing on a received voice command to identify spoken patient health data in the received voice command, and to format the spoken patient health data similar to stored health data to facilitate obtaining the stored health data related to the voice command, and the audio prompt generator is in communication with the voice service application.
  • the actionable voice prompt comprises a conversational-style prompt including: data for improvement of the patient's health condition; and a behavior change component associated with the data for improvement of the patient's health condition and customized to the patient.
  • the behavior change component is created based on the relationship of the data for improvement of the patient's health condition with an associated target.
  • the disease parameter tracker comprises an activity/blood sugar level tracker, and wherein the one or more connected health devices are configured to receive the gathered health data selected from the group consisting of: blood sugar level; exercise data; weight; amount of sleep; food intake; geolocation data; and time of day.
  • the network disease management module is in communication with a health information portal storing health portal data.
  • the network disease management module comprises: a question intent converter configured to convert voice data associated with a patient's voice command to a patient inquiry including additional patient context data in a format compatible with the health portal data; a preprocessor, in communication with the question intent converter and with the health portal data, configured to convert, using a lookup table, data in the patient inquiry to a health portal query to facilitate health portal data content lookup; and a post-processor, in communication with the health portal data and the voice service platform, configured to modify the obtained health portal data prior to providing the context-sensitive voice feedback to the patient.
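The preprocessor's lookup-table step can be sketched as a simple term substitution that rewrites a patient inquiry into the vocabulary the health portal expects. The table entries below are invented for illustration; the disclosure specifies only that a lookup table is used:

```python
# Sketch of the preprocessor's lookup-table conversion of a patient inquiry
# into a health portal query. Table entries are illustrative assumptions.
LOOKUP_TABLE = {
    "sugar": "blood glucose",
    "bp": "blood pressure",
    "shots": "insulin injections",
}

def to_portal_query(patient_inquiry: str) -> str:
    """Replace colloquial patient terms with portal-compatible terms."""
    words = patient_inquiry.lower().split()
    return " ".join(LOOKUP_TABLE.get(w, w) for w in words)
```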
  • the voice service application is configured to perform natural language processing on voice data associated with the received voice command to identify the spoken patient health data in the received voice data, and to format the spoken patient health data similar to stored health data in the disease management database to facilitate obtaining the stored health data related to the voice command, and one or more of the question intent converter and the post-processor is in communication with the voice service application.
  • the system further comprises a secure patient portal, in communication with the network disease management module, configured to provide authorization-based access to patient data to a relative or clinician.
  • the system further comprises a secure messaging module, configured to enable the patient to securely interact with the clinician without either the patient or the clinician having to share personal contact information.
  • the secure messaging module is configured to enable sending a voice message between the patient and the clinician.
  • a social connection module is configured to pair the patient with a mentor for secure interaction via the secure patient portal.
  • the network disease management module and the disease management database are configured to: compare, to a target, selected patient health data collected from the one or more connected health devices; and send a notification to a loyalty rewards system to award loyalty rewards to the patient in response to the selected patient health data meeting or exceeding the target.
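The compare-and-notify step can be sketched as follows; the function and message are illustrative assumptions, with any callable standing in for the loyalty rewards system's notification interface:

```python
# Sketch of comparing collected patient health data to a target and notifying
# a loyalty rewards system when the target is met or exceeded. All names are
# illustrative assumptions, not from the disclosure.
def check_and_notify(collected_value, target, notify):
    """notify is any callable standing in for the loyalty rewards system."""
    if collected_value >= target:
        notify(f"Target {target} met with {collected_value}; award loyalty rewards.")
        return True
    return False
```

For instance, a daily step count of 8000 against a target of 7500 would trigger a rewards notification, while 5000 would not.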
  • the one or more connected health devices are selected from the group consisting of a fitness tracker, a weight scale, a glucometer, a spirometer, and a disease-specific data collection apparatus.
  • the network disease management module comprises the disease management database.
  • the network disease management module is configured to handle activity from both the mobile disease management module and from a smart speaker configured to receive a voice command from the patient associated with the voice-based disease management services.
  • the network disease management module comprises a content management system configured to provide an omni-channel patient questionnaire to the patient via the voice interaction mode using the voice service application and/or the mobile communication mode using the mobile disease management application.
  • the network disease management module and the secure patient portal cooperate to generate and cause the display of a list display selector configured to alternate between displaying a list of all patients and a list of higher risk patients.
  • the network disease management module comprises an electronic medical record (EMR) interface configured to interoperate with one or more EMR systems to access or update EMR data based on the spoken health data, the obtained stored health data and/or content of the context-sensitive voice feedback.
  • EMR electronic medical record
  • the system further comprises a smart speaker including a non-verbal wake condition detector configured to detect a non-verbal wake condition in the absence of a verbal wake word.
  • the smart speaker comprises: a motion sensor or camera; and the non-verbal wake condition detector is configured to receive and process an output of the motion sensor or camera to determine occurrence of the non-verbal wake condition.
  • the non-verbal wake condition detector comprises a presence detection module and is configured to, in response to detection of a non-verbal wake condition by the motion sensor or camera: detect a non-verbal wake condition; and activate operation of the smart speaker in response to the detected non-verbal wake condition and in the absence of a verbal wake word.
  • detection of the non-verbal wake condition comprises detection of an authenticated wake action to authorize a specific user, and wherein the non-verbal wake condition detector is configured to provide customized feedback to the patient based on a detected condition of the authenticated user.
  • providing customized feedback is performed based on a detected facial expression, and wherein the system adjusts content of the feedback, a tone of the feedback, or both, in response to the detected condition.
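The non-verbal wake behavior described in the bullets above can be sketched as a detector that activates the smart speaker from a motion-sensor reading, with no verbal wake word involved. The sensor interface and threshold below are assumptions for illustration:

```python
# Illustrative sketch of a non-verbal wake condition detector: the smart
# speaker activates on a motion-sensor event, without a verbal wake word.
# The sensor interface and threshold are assumptions, not from the disclosure.
class NonVerbalWakeDetector:
    def __init__(self, motion_threshold: float = 0.5):
        self.motion_threshold = motion_threshold
        self.awake = False

    def process_sensor_output(self, motion_level: float) -> bool:
        """Receive and process the motion sensor/camera output; activate smart
        speaker operation when the reading indicates a wake condition."""
        if motion_level >= self.motion_threshold:
            self.awake = True
        return self.awake
```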
  • the present disclosure provides a network disease management apparatus comprising: a processor; and one or more non-transitory machine readable memories storing statements and instructions for execution by the processor to: receive voice data associated with a voice command generated by a patient; identify spoken patient health data in the received voice command; obtain stored health data related to the spoken patient health data; and generate voice feedback data for providing context-sensitive voice feedback for the patient based on the obtained stored health data and on the identified spoken patient health data, the context-sensitive voice feedback suggesting a course of action or requesting additional information based on the received voice command.
  • the present disclosure provides a processor-implemented method of voice-enabled disease management comprising: at a network device, receiving voice data associated with a voice command generated by a patient; identifying spoken patient health data in the received voice command; obtaining stored health data related to the spoken patient health data; and generating voice feedback data for providing context-sensitive voice feedback to the patient based on the stored health data and on the identified spoken patient health data, the context-sensitive voice feedback suggesting a course of action or requesting additional information based on the received voice command.
  • the present disclosure provides a non-transitory machine readable medium having stored thereon statements and instructions for execution by a processor to perform a method as described and illustrated herein.
  • the present disclosure provides an apparatus comprising: at least one processor, and memory storing computer-readable instructions that, when executed by the at least one processor, cause the apparatus to perform a method as described and illustrated herein.
  • FIG. 1 is a block diagram of a system 100 for voice-enabled disease management according to an embodiment of the present disclosure.
  • the system 100 comprises a network disease management module 120 and a mobile disease management module 130.
  • the network disease management module 120 comprises a voice service application 122 configured to run on a network device to provide voice-based disease management services to a patient in a voice interaction mode.
  • the mobile disease management module 130 is in communication with the voice service application 122, for example via a voice service platform 140, and is configured to provide graphical or text-based disease management services to the patient in a mobile interaction mode.
  • the mobile disease management module 130 comprises a mobile service application 132.
  • the voice interaction mode comprises a mode in which voice is used to enable interaction between the patient and the system.
  • the voice interaction mode comprises a voice communication mode in which voice content or audio content is used to enable communication between the patient and the system.
  • the mobile interaction mode comprises a mode in which one or more of graphical, visual and text interaction is used to enable interaction between the patient and the system.
  • the mobile interaction mode comprises a mobile communication mode in which graphical content or text-based content is used to enable communication between the patient and the system.
  • the graphical content comprises one or more of: still images; moving images, such as animated GIFs; video clips; and longer videos.
  • the mobile interaction mode additionally comprises and enables haptic feedback, such as causing a mobile device to vibrate; the haptic feedback can include one or more of tactile feedback or kinesthetic feedback.
  • the mobile interaction mode can additionally comprise any form of non-visual interaction and non-voice interaction as known to one of ordinary skill in the art.
  • Both the network disease management module 120 and the mobile disease management module 130 access the same up-to-date data by using the same back-end database, or disease management database 124.
  • the disease management database 124 is configured to provide a common set of data accessible by the voice service application 122 and the mobile service application such that the voice-based disease management services provided by the voice service application in the voice interaction mode are integrated with the visual or text-based disease management services provided by the mobile disease management application in the mobile interaction mode.
  • the 12-year-old boy with diabetes is an example of a patient 102, as shown in Figure 1, diagnosed with a disease, chronic illness or medical condition that needs to be managed.
  • the Amazon EchoTM device is an example of a smart speaker 110 enabling voice interaction, for example via the Amazon AlexaTM cloud-based voice service platform 140, with a system 100 for voice-enabled disease management according to an embodiment of the present disclosure.
  • the Samsung GalaxyTM phone is an example of a mobile device which runs a mobile disease management module 130 according to an embodiment of the present disclosure.
  • the network disease management module 120 is a voice-based module according to an embodiment of the present disclosure implemented in a network such as the Internet, the "cloud", or a private internet/cloud, on suitable hardware.
  • the network disease management module 120 is implemented on one or more network devices, or one or more cloud devices.
  • the mobile disease management module 130 runs on a mobile device, such as a smartphone or tablet.
  • the patient or user 102 can interact with the system 100 via voice communication using a voice assistant-enabled smart speaker 110, such as an Amazon EchoTM, Google HomeTM, Apple HomePodTM, a Nuance-enabled device, or the like.
  • the smart speaker 110 includes a microphone to receive a voice command from a patient, with processing of the voice command typically being performed in the network, or in the cloud. In an implementation, the smart speaker 110 operates in conjunction with the voice service platform 140.
  • a voice command is received at the smart speaker.
  • the voice service platform 140 is a virtual assistant, or voice assistant, such as an intelligent personal assistant or audio control interface.
  • Voice service platforms such as Amazon AlexaTM, Google HomeTM, Apple's SiriTM and Microsoft's CortanaTM are capable of voice interaction and completion of tasks based on such voice interaction, such as music playback, setting alarms, creating and updating lists, smart home tasks, and obtaining information such as news, weather and traffic.
  • the voice service platform 140 comprises voice services (which can include a third party application programming interface, or API) that allow the voice commands from the patient 102 received by the smart speaker 110 to be processed, understood and acted upon.
  • voice services can be considered similar to an operating system for voice services, where the voice services are available via a smart speaker 110 or similar device.
  • both the network disease management module 120 and the mobile disease management module 130 access the same up-to-date data from the disease management database 124.
  • the disease management database 124 is configured to provide long-term storage of patient data, and short-term storage of context and variables; in an example embodiment, the disease management database 124 is configured to address modern data privacy and regulatory requirements around patient information, for example by supporting encryption of fields and dynamic modification of stored data.
  • recordings of a patient's voice command are sent to a network server for processing, and then sent to the voice service application 122 for further action.
  • a typical voice services user would have a voice services account associated with their device, which account would include personal details about the user (e.g., name, address, payment information, buying history, etc.). Accordingly, if a person with a typical smart speaker or other voice-services enabled device shares that they have a high blood pressure reading, someone at the voice service platform provider could associate that health data to the user associated with the account, which is not compliant with HIPAA (Health Insurance Portability and Accountability Act).
  • HIPAA Health Insurance Portability and Accountability Act
  • a user can create an anonymous user account (e.g. mdc.patient02ie2@hospital-name.com) on a first dedicated smart speaker 110.
  • the voice service platform provider would have no other information on that individual other than this e-mail address.
  • the voice service platform 140 contacts the voice service application 122 with the patient ID.
  • the voice service application 122 according to an embodiment of the present disclosure is configured to associate that ID with a named user within a HIPAA compliant platform.
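The anonymous-account arrangement means the voice service platform only ever passes along an anonymous patient ID, while the mapping from that ID to a named patient lives entirely inside the HIPAA-compliant system. A minimal sketch of that association, with invented class names and IDs:

```python
# Sketch of the HIPAA-side association between an anonymous patient ID (all
# the voice platform knows) and a named patient record held only inside the
# compliant platform. Names and IDs here are illustrative assumptions.
class PatientRegistry:
    def __init__(self):
        self._by_anonymous_id = {}  # anonymous ID -> patient record

    def register(self, anonymous_id, patient_record):
        """Associate an anonymous voice-services ID with a named patient."""
        self._by_anonymous_id[anonymous_id] = patient_record

    def resolve(self, anonymous_id):
        # Called when the voice service platform contacts the voice service
        # application with only the anonymous patient ID.
        return self._by_anonymous_id.get(anonymous_id)
```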
  • the patient uses a first dedicated smart speaker associated with an anonymous account to enable HIPAA compliant interaction with a disease management system that includes or interacts with non-HIPAA compliant elements, or where the system has one or more voice services or components thereof that are non-HIPAA compliant.
  • the patient uses a second general purpose smart speaker associated with a patient's regular voice services account, for example with which the patient could order a ride with a car sharing service, or request that music in their library be streamed.
  • the patient's anonymous account is associated with the first dedicated smart speaker
  • the patient's voice services account is associated with the second general purpose smart speaker
  • the dedicated first smart speaker is not used with the patient's voice services account
  • the first dedicated smart speaker is enabled, according to an embodiment of the present disclosure, to interact in a HIPAA-compliant manner with a non-HIPAA compliant platform or with a platform that includes at least one non-HIPAA compliant element.
  • the disease management database 124 is provided as part of the network disease management module 120.
  • the disease management module 120 is implemented using a microservices architecture or microservices implementation, and is geographically distributed such that its components need not be co-located.
  • the disease management database 124 is accessible by, and shared between, the network disease management module 120 and the mobile disease management module 130.
  • the disease management database 124 is provided in communication with the network disease management module 120 and with the voice service platform 140 to enable communication with the mobile disease management module 130, and in an embodiment is not part of the network disease management module 120.
  • the disease management database 124 can be located anywhere in the system as long as it can establish and maintain communication with, and accessibility by, the network disease management module 120 and the mobile disease management module 130, for example via the voice service platform 140.
  • the network disease management module 120 handles, via the voice service application 122, activity from both the smart speaker 110 and the mobile disease management module 130; data related to such activity is stored in the disease management database 124, which is shared by the voice service application 122 in the network disease management module 120, and the mobile disease management module 130.
  • the network disease management module 120 is configured to, at 202, receive voice data associated with a voice command generated by a patient, or to receive the voice command itself.
  • the voice command is a command spoken by the patient. In another example embodiment, the voice command is an audio file played by the patient, which may have been generated and recorded by the patient, or by someone else.
  • the voice command is received using a microphone, for example at the smart speaker 110.
  • the network disease management module receives the voice command as the same audio recorded at the smart speaker 110.
  • the network disease management module 120 receives voice data associated with the received voice command; for example, in an implementation, the voice data is a compressed or reformatted representation of the audio recorded at the smart speaker 110 when the smart speaker received the voice command from the patient. Referring to the example discussed earlier, the phrase "Alexa, ask MyCoach what my blood glucose targets are" is an example of a voice command. In an example embodiment, as shown in Figure 2 at 204, the network disease management module 120 is configured to identify spoken patient health data in the received voice command, or in received voice data associated with the voice command.
  • the spoken patient health data is identified based on parsing content of the received voice command, or received voice data associated with the voice command, for example using a processor to parse the voice data/voice command and identify the expression "blood glucose targets" as spoken health data from the received voice data/voice command.
  • identifying the spoken patient health data comprises comparing the received voice command with health data or labels associated with health data stored in a database, such as a health information portal or electronic medical record (EMR), to assist in the parsing or other identification of the health data contained in the received voice command.
  • EMR electronic medical record
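The label-matching identification described above can be sketched in a few lines. This is a minimal, hypothetical illustration only: the label table, the internal keys, and the function name are invented for the example, and a real system would apply natural language processing rather than plain substring matching.

```python
# Hypothetical sketch: identify spoken health data by matching a transcribed
# voice command against known labels (e.g. from an EMR or health portal).
HEALTH_DATA_LABELS = {
    "blood glucose targets": "glucose_target",   # assumed internal keys
    "blood sugar": "glucose_reading",
    "heart rate": "heart_rate",
}

def identify_spoken_health_data(transcript: str) -> list[str]:
    """Return the internal keys of any health-data labels found in the command."""
    text = transcript.lower()
    return [key for label, key in HEALTH_DATA_LABELS.items() if label in text]

command = "Alexa, ask MyCoach what my blood glucose targets are"
print(identify_spoken_health_data(command))  # ['glucose_target']
```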
  • the network disease management module 120 is configured to obtain stored health data related to the spoken patient health data.
  • the stored health data comprises patient health data stored in the disease management database 124.
  • the stored health data comprises health portal data 160 obtained from one or more health information portals, such as HealthwiseTM, WebMDTM, Health NavigatorTM or similar health information portals.
  • the stored health data comprises a combination of patient health data stored in the disease management database 124 and health portal data 160 stored in one or more health information portals.
  • the network disease management module is configured to provide context-sensitive voice feedback to the patient based on the obtained stored health data and on the identified spoken patient health data.
  • the context-sensitive voice feedback comprises a suggested course of action or a request for additional information, with the suggestion/request being based on the received voice command.
  • the phrase "Your blood glucose target before breakfast is between 4 and 7 millimoles per liter and before dinner between 4 and 10 millimoles per liter. 90% of your readings are within target. Keep up the good work!" is an example of the context-sensitive voice feedback provided by the network disease management module 120.
  • This example of context-sensitive voice feedback includes: providing the patient's target in two different scenarios/times of day; advising the patient of how the measured readings compare to the target; and providing a related encouragement.
  • the stored health data relating to the targets is obtained from the disease management database 124, and the determination of the percentage of readings within target is performed based on stored health data in the disease management database 124 and/or from gathered data from one or more connected health devices 134, to be described later.
  • the network disease management module 120 comprises a voice service application 122.
  • the voice service application 122 is configured to perform natural language processing on the received voice command to identify the spoken patient health data in the received voice command, for example in association with 204 or 206 in Figure 2.
  • the voice service platform 140 is the API through which the voice service application 122 communicates.
  • the voice service application 122 is also configured to format the received spoken patient health data in a format similar to the stored health data, to facilitate obtaining the stored health data related to the voice command.
  • the voice service application converts the received spoken patient health data to recorded patient health data in a first format, which is similar to a second format of the stored health data.
  • the network disease management module 120 creates recorded health data based on the received spoken patient health data, in the first format which is similar to the second format of the stored health data.
  • the stored health data comprises patient health data stored in the disease management database 124; the patient health data comprises glucose readings in the format of mmol/L; but the spoken health data comprises: "My blood sugar is 140 milligrams per decilitre", which is a different format, in this case a different unit of measurement.
  • the network disease management module creates recorded health data by converting, based on stored relationships to convert from one format to another, the value of 140 mg/dL from the received spoken health data to the equivalent of 7.8 mmol/L (the recorded health data), which is in a format similar to the format of the health portal data.
  • While this example illustrates converting one unit of measure to another unit of measure based on a known conversion, other example embodiments comprise other types of format conversion, such as converting from spoken health data in French to the equivalent recorded health data in English, when the health information database stores health data in English.
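The mg/dL-to-mmol/L conversion in the example above can be sketched as follows. The function name and structure are assumptions for illustration; the divisor 18.0 is the standard glucose conversion factor (1 mmol/L ≈ 18 mg/dL), which reproduces the 140 mg/dL → 7.8 mmol/L example in the text.

```python
# Illustrative sketch of format conversion from a spoken unit (mg/dL) to the
# stored unit (mmol/L) for blood glucose readings.
MG_DL_PER_MMOL_L = 18.0  # standard glucose conversion factor

def to_recorded_format(value: float, unit: str) -> float:
    """Convert a spoken glucose reading to the stored unit, mmol/L."""
    if unit == "mg/dL":
        return round(value / MG_DL_PER_MMOL_L, 1)
    if unit == "mmol/L":
        return value
    raise ValueError(f"unknown unit: {unit}")

print(to_recorded_format(140, "mg/dL"))  # 7.8
```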
  • the network disease management module 120 comprises storage, such as a computer-readable memory or a database.
  • the network disease management module comprises a memory storing statements and instructions for execution by a processor to perform natural language processing and provide functionality such as artificial intelligence relating to disease management, for example to implement features as described herein.
  • the network disease management module 120 comprises one or more non-transitory computer-readable media storing statements and instructions for execution by a processor to provide functionality associated with the voice service application 122.
  • the voice service application 122 augments the work of a health professional and improves patient outcomes, by acting as an independent third-party coach, which can also help to avoid parent-child conflict when the patient is a child.
  • the voice service application 122 gathers information in real time from different devices or inputs.
  • the voice service application 122 comprises statements and instructions which, when executed by a processor, provide an Alexa Skill.
  • An Alexa Skill is a capability or set of functionalities for the Alexa cloud-based voice service.
  • an Alexa Skill is a third-party-developed voice experience, or "voice app", that adds to the capabilities of an Alexa-enabled device, such as an Amazon EchoTM. All Alexa Skills run in the cloud, in contrast to an app for a smartphone or tablet, which typically runs on the device itself.
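As a rough sketch of how such a cloud-hosted voice app handles a request, the following follows the general shape of an Alexa Skills Kit IntentRequest/response JSON exchange, written in the style of an AWS Lambda handler. The intent name and the reply text are hypothetical examples, not the actual skill described in the disclosure.

```python
# Minimal sketch of a cloud-side skill handler. The request/response JSON
# shapes follow the Alexa Skills Kit; "GetGlucoseTargetIntent" is invented.
def handler(event: dict, context=None) -> dict:
    request = event.get("request", {})
    intent = request.get("intent", {}).get("name")
    if request.get("type") == "IntentRequest" and intent == "GetGlucoseTargetIntent":
        speech = ("Your blood glucose target before breakfast is between "
                  "4 and 7 millimoles per liter.")
    else:
        speech = "Sorry, I didn't understand that."
    return {
        "version": "1.0",
        "response": {"outputSpeech": {"type": "PlainText", "text": speech}},
    }

event = {"request": {"type": "IntentRequest",
                     "intent": {"name": "GetGlucoseTargetIntent"}}}
print(handler(event)["response"]["outputSpeech"]["text"])
```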
  • the mobile disease management module 130 is in communication with the network disease management module 120 via the voice service platform 140.
  • the mobile disease management module 130 comprises statements and instructions which, when executed by a processor, provide a mobile application configured to run on a mobile device, such as a smartphone or a tablet, which includes a microphone and a transceiver.
  • the mobile disease management module 130 is also configured to receive health data from one or more connected health devices 134, such as for example a glucometer, a weight scale, or a wearable device such as an activity tracker or smart watch.
  • the one or more connected health devices 134 are selected from the group consisting of a fitness tracker, a weight scale, a glucometer, a spirometer, and a disease-specific data collection apparatus.
  • the connected health devices 134 communicate with the mobile disease management module 130 using one or more communication means.
  • the connected health care devices 134 communicate with the mobile disease management module 130 using a wireless communication protocol, such as BluetoothTM, Wi-Fi, NFC (Near-Field Communication), or the like.
  • the connected health care devices 134 communicate with the mobile disease management module 130 using hardware configured to communicate using a particular protocol or technology, such as a transceiver or transmitter/receiver for wireless or near-field communication.
  • the connected health devices 134 communicate with the mobile disease management module 130 using hardware for wired communication, such as a data cable, for example using the Universal Serial Bus (USB) standard, such as USB 1.x, 2.x, or 3.x.
  • USB Universal Serial Bus
  • the connected health care devices 134 and the mobile disease management module 130 are provided with suitable connectors/receptacles/sockets, such as those compatible with USB type A, B, C and variations thereon, or Ethernet ports, or the like.
  • the connected health devices 134 provide third generation data, which is the data of life, such as: geolocation, activity level, sleep data, food intake data, blood glucose level.
  • the mobile application receives gathered health data from one or more connected health devices.
  • the mobile disease management module 130 receives personal or patient-specific health data selected from the group consisting of: number of steps walked, heart rate, quality of sleep, steps climbed, and other personal metrics involved in fitness.
  • the mobile application receives disease-specific health data, such as glucometer data, spirometer data, etc.
  • the mobile disease management module 130 is also configured to receive, from the network disease management module 120, disease-related patient health data associated with the context-sensitive voice feedback.
  • the mobile disease management module 130 is in direct communication with the network disease management module 120, without having to use the voice service platform 140, for example to receive the disease-related patient health data associated with the context-sensitive voice feedback, or to provide data to the network disease management module 120.
  • While some embodiments of the present disclosure take advantage of a smart speaker 110 to receive a voice command, or voice data associated with the voice command, at the network disease management module 120, in other embodiments a smart speaker 110 is not required to receive a voice command, or voice data associated with a voice command.
  • a microphone associated with a mobile device running the mobile disease management module 130 can act as a replacement of the smart speaker.
  • the mobile disease management module 130 receives, via a microphone on a mobile device on which the mobile disease management module is provided, a voice command from a patient 102, or voice data associated with the voice command.
  • the voice command is output by the mobile disease management module 130 and interpreted by the voice service platform 140 in a manner similar to that described above, but without needing a smart speaker 110.
  • This enables voice interaction with the system when the patient is not at home or near their own smart speaker, for example using a "push-to-talk" type of functionality enabled by the mobile disease management module 130, or an "always listening" implementation that uses a wake word. This makes it easier for a child or young adult to easily "enter" patient data simply by talking to either the smart speaker 110, or to the mobile device running the mobile disease management module 130 when the smart speaker 110 is not available.
  • the system according to an embodiment of the present disclosure provides the voice command to the network disease management system 120, for example via the voice service platform 140, from either the smart speaker 110 or the mobile disease management module 130.
  • Providing an architecture with a common back-end database, such as the disease management database 124, enables the voice service application 122 in the network disease management module 120 to interact with the smart speaker 110 and with the mobile disease management module 130, using a common set of data, which provides an advantage over other known approaches that may attempt to "add on" a voice application, such as an Alexa skill, to interact with a mobile app.
  • the related mobile app may not become aware of the new data until, for example, the app is closed down and opened later, and a clinician may not be aware of the new data until they log in the next day, which could adversely affect patient care.
  • In such non-integrated approaches, which have siloed data without a common back-end, there are challenges in keeping one system consistent with the other, as APIs have to try to share data without shared underlying storage.
  • the common back-end disease management database 124 enables the system to be both voice-first and multi-modal, such that all functionality capable of being performed using a first mode of voice communication via the smart speaker 110 and using the network disease management module 120 can also be performed using a second mode of mobile communication via the companion mobile disease management module 130.
  • the system, in response to a user asking, via voice communication using the smart speaker 110, "Can type 2 diabetes be cured?", can provide an audio or voice answer via the network disease management module 120 through the smart speaker 110, and also provide the text of the same answer verbatim to the mobile disease management module 130.
  • the mobile disease management module 130 can enhance the voice experience and provide complementary functionality.
  • the text provided to the mobile disease management module 130 can include the verbatim text of the answer provided via voice communication, plus additional media content which can include links to other references, a short video, and other content.
  • This multi-modal aspect provides an advantage of using a patient's preferred medium, but also enables a user to start a task on one device and complete the task on another device, or to receive an enhanced experience via a different medium.
  • This provides an improvement over known non-integrated approaches that lack a common back-end database and may attempt to patch together data from voice and mobile applications that are not designed to integrate seamlessly with one another with respect to updating underlying data.
  • a user can generally prefer using a first mode of voice interaction, but a system according to an embodiment of the present disclosure is configured to provide additional reference material (image, video) via a second mode of mobile media interaction to supplement the content provided by the first mode of voice feedback.
  • the multi-modal functionality is enabled by a single common backend, for example including the disease management database 124, for each of the front end components, such as the mobile disease management module 130 and the voice service application 122.
  • Information is transmitted so as to appear at all modalities at substantially the same time, for example via a voice communication mode and a mobile communication mode.
  • the disease management database 124 is event-triggered, so for example a glucose reading submitted by voice to the database 124 triggers a message to any system component or service that manages glucose readings to then be aware of the event and take an action on the event.
  • the system, in response to receipt or entry of new data, reacts and can act on the reactions, as opposed to a known system that has to "pull" the information.
  • the system components that are interested in certain types of readings can "subscribe" to certain data or events, in response to a trigger associated with a type of data or event.
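The subscribe-and-trigger behavior in the preceding bullets resembles a publish-subscribe store, which might be sketched as below. The class and method names are assumptions for illustration; the disclosure does not specify an implementation.

```python
# Sketch of the event-triggered pattern: components subscribe to a data type,
# and a new reading pushed to the store triggers their callbacks, rather than
# each component polling ("pulling") for updates.
from collections import defaultdict
from typing import Callable

class EventTriggeredStore:
    def __init__(self):
        self._subscribers: dict[str, list[Callable]] = defaultdict(list)
        self._data: dict[str, list] = defaultdict(list)

    def subscribe(self, data_type: str, callback: Callable) -> None:
        self._subscribers[data_type].append(callback)

    def submit(self, data_type: str, value) -> None:
        self._data[data_type].append(value)      # store the reading
        for callback in self._subscribers[data_type]:
            callback(value)                      # notify interested components

store = EventTriggeredStore()
seen = []
store.subscribe("glucose", seen.append)
store.submit("glucose", 7.8)   # e.g. a reading submitted by voice
print(seen)  # [7.8]
```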
  • the multi-modal or omnl-channel aspect can be implemented in a voice-first approach.
  • Consider a situation in which a patient is notified on a mobile device and responds to the notification through a smart speaker.
  • the system sends a notification to the patient via the mobile disease management module 130 as a reminder to take their medication, such as Metformin to lower blood sugar.
  • the patient can ask the smart speaker to confirm how many pills to take, and then provide a voice confirmation via the smart speaker that they have taken the medication.
  • the system can associate the voice confirmation of the medication being taken with the text notification provided via the mobile disease management module 130, and update the database 124 based on the medication having been taken, and clear the notification so that no further reminders are provided.
  • the mobile disease management module 130 causes display of a graphical user interface via which patient data can be entered manually, by entering text or choosing from drop-down menus, selecting radio buttons, check boxes, or any other suitable input method.
  • the mobile disease management module 130 includes or provides the graphical user interface.
  • the network disease management module 120 collects all data, and is responsible for collecting all data, regardless of how the information was entered, or whether it was originally received via the voice service platform 140, or via the mobile disease management module 130.
  • the disease management database 124 is provided at or in association with the network disease management module 120 and makes data stored in the database available to the patient 102.
  • the data stored in the disease management database 124 can be provided to the patient 102 via voice feedback at the smart speaker 110, voice feedback at a speaker of a mobile device running the mobile disease management module 130, or via visual, auditory, tactile or other feedback at the mobile disease management module 130.
  • when a notification is provided to the patient 102 via one means, for example via the smart speaker 110, the system ensures that a duplicate notification is not provided by a different means, for example by clearing or deleting a notification on one mode of interaction or communication, in response to the notification having been read or acknowledged via another mode of interaction or communication.
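The duplicate-notification handling just described could be sketched as follows. The class, method names, and the notification identifier are all hypothetical; the point is only that acknowledging on any one mode clears the notification everywhere.

```python
# Hypothetical sketch: a notification delivered on several modes (smart
# speaker, mobile app) is cleared on every mode once acknowledged on any one,
# so the patient never receives a duplicate reminder.
class NotificationManager:
    def __init__(self):
        self._pending: dict[str, set[str]] = {}

    def notify(self, note_id: str, modes: list[str]) -> None:
        self._pending[note_id] = set(modes)

    def acknowledge(self, note_id: str) -> None:
        # Acknowledgment on one mode removes the notification from all modes.
        self._pending.pop(note_id, None)

    def pending_on(self, mode: str) -> list[str]:
        return [n for n, modes in self._pending.items() if mode in modes]

mgr = NotificationManager()
mgr.notify("metformin-reminder", ["smart_speaker", "mobile"])
mgr.acknowledge("metformin-reminder")   # confirmed by voice at the speaker
print(mgr.pending_on("mobile"))  # []
```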
  • the system further comprises a secure patient portal 160, in communication with the network disease management module 120, which is configured to provide authorization-based access to patient data.
  • a first limited level of authorization can be provided to the patient or a relative 162.
  • a second more comprehensive level of authorization can be provided to a clinician 164.
  • the relative can be a parent or guardian, which helps the parent or guardian keep an eye on the child's or youth's progress with disease management.
  • similar access could be provided to a specified family caregiver such as a son or daughter.
  • the disease management database 124 is in communication with the secure patient portal 160 in Figure 1, enabling access, either directly or via the network disease management module 120, to a portal used by a clinician, or a parent/relative/guardian subscribed to a portal.
  • the secure portal 160 enables access not just to a single user, but to any authorized user who is entitled to see that data.
  • the network disease management module 120 comprises a content management system (CMS) 126 that supports the development of omni-channel patient questionnaires.
  • CMS content management system
  • the omni-channel patient questionnaires can be answered by a patient 102 via a number of channels or modes, such as via a voice mode using the smart speaker 110, or a mobile mode at a mobile phone using the mobile disease management module 130, or via a desktop PC or laptop, via the secure patient portal 160.
  • a notification of the fact that the patient has responded to the questionnaire, and optionally a summary of results, is provided back to the clinician via the secure patient portal 160.
  • the content management system 126 enables a non-developer to author/translate these questionnaires into a format suitable for delivery across all mediums or modes (e.g. voice, mobile, and desktop).
  • the CMS 126 stores the format of the questionnaire, and the user interface for creating the questionnaire is delivered through the secure patient portal 160.
  • the network disease management module 120 and the secure patient portal 160 cooperate to triage patients according to risk level.
  • the system 100, and in particular the network disease management module 120, is configured to score the patients according to compliance and risk.
  • the network disease management module 120 is configured in one embodiment to cause the display of a list of all patients, and in another embodiment to cause the display of a prioritized list of patients according to patients at greatest risk.
  • when the list of all patients is displayed, the system causes display of a list display selector, for example a button labelled "At Risk", permitting the clinician to easily display the higher risk patients.
  • a list display selector for example a button labelled "At Risk"
  • the list display selector comprises one or more of buttons, indicators, a toggle switch, or other visual means enabling the clinician to switch between an "all patients" view and an "at risk patients" view.
  • generating and causing the display of the prioritized list of patients according to patients at greatest risk is performed based on one or more of: non-compliance to goals, for example not taking medications or readings as required; and readings that fall outside of set targets, for example blood glucose readings that are too high or too low too often.
  • the list of all patients generated for display comprises: patient name; measurement and target data; and a risk indicator.
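A minimal sketch of the risk triage described above, combining non-compliance and out-of-target readings into a score used to order the "At Risk" view. The field names and the scoring rule are invented for illustration; the disclosure does not specify a formula.

```python
# Illustrative triage sketch: rank patients so the greatest-risk patients
# appear first. Missed readings (non-compliance) and the fraction of readings
# outside clinician-set targets both raise the score.
def risk_score(patient: dict) -> float:
    missed = patient["missed_readings"]          # non-compliance to goals
    out = patient["out_of_target_readings"]      # readings outside targets
    total = max(patient["total_readings"], 1)
    return missed + out / total

def at_risk_view(patients: list[dict]) -> list[str]:
    ranked = sorted(patients, key=risk_score, reverse=True)
    return [p["name"] for p in ranked]

patients = [
    {"name": "A", "missed_readings": 0, "out_of_target_readings": 2, "total_readings": 20},
    {"name": "B", "missed_readings": 5, "out_of_target_readings": 10, "total_readings": 20},
]
print(at_risk_view(patients))  # ['B', 'A']
```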
  • the system, in conjunction with the secure patient portal 160, comprises a secure messaging module, enabling a user to securely interact with a clinician or other caregiver or concerned individual without having to share either patient or caregiver personal contact information.
  • the secure messaging module enables clinicians to provide direct interaction with patients, without having to provide a clinician's direct contact information, so that, for example, the clinician doesn't have to worry about a patient texting unnecessarily.
  • the system comprises a voice messaging module, which enables a user to use voice to send a message. While some known mobile voice assistants allow a user to send a message using a native application, such functionality is not available for third party applications.
  • an improvement is provided that enables a user to instruct the voice service application 122, via a voice command through a smart speaker 110, to send their clinician a message, by way of the secure patient portal 160.
  • the system parses the voice command in a manner similar to the approach described in relation to Figure 1 and Figure 2, for example to provide information to the recipient on the topic or relative urgency of the message.
  • the secure patient portal 160 provides a social component, whereby the patient 102 is provided with social connections to assist with disease management. For example, children or youth with a disease can be disadvantaged, and often carry a stigma, for example around Type 2 diabetes.
  • the secure patient portal 160 provides the ability to pair, through a secure platform, a newly diagnosed patient with a mentor or natural leader who is managing their care well, and who may have a similar diagnosis and/or have worked through a similar diagnosis earlier on in life.
  • a parent and clinician can approve a "match" and be granted permission to observe the interaction.
  • the network disease management module 120 comprises an electronic medical record (EMR) interface 128 configured to interoperate with one or more EMR systems to access or update EMR data 170.
  • the network disease management module 120 securely exchanges patient data via the EMR interface 128 with the EMR system(s) via a standards-based FHIR/HL7 (Fast Healthcare Interoperability Resources / Health Level Seven International) interface. This avoids data silos with providers (hospitals) by allowing vital sign readings obtained in the home using embodiments of the present disclosure to be populated in the hospitals' existing electronic medical record system.
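To give a concrete sense of the kind of standards-based payload such an exchange might carry, the sketch below builds a FHIR Observation for a home glucose reading. The LOINC code 15074-8 (glucose, moles/volume in blood) follows the published FHIR example observation; the patient identifier, value, and helper function are hypothetical, and posting it to an EMR endpoint is deployment-specific and omitted.

```python
# Sketch of a FHIR Observation resource for a home glucose reading, suitable
# for exchange with an EMR over a FHIR/HL7 interface.
import json

def glucose_observation(patient_id: str, mmol_l: float) -> dict:
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [{"system": "http://loinc.org",
                             "code": "15074-8",
                             "display": "Glucose [Moles/volume] in Blood"}]},
        "subject": {"reference": f"Patient/{patient_id}"},
        "valueQuantity": {"value": mmol_l, "unit": "mmol/L",
                          "system": "http://unitsofmeasure.org",
                          "code": "mmol/L"},
    }

obs = glucose_observation("example-123", 7.8)
print(json.dumps(obs)[:50])
```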
  • Figure 3 is a block diagram of components of a system for voice-enabled disease management according to an example embodiment of the present disclosure.
  • the network disease management module 120 comprises: a disease parameter tracker 322, a pattern tracker 324, a prompt content generator 326, and an audio prompt generator 328.
  • the audio prompt generator 328 is in communication with the voice service application 122.
  • the voice service application 122 comprises one or more of the disease parameter tracker 322, the pattern tracker 324, the prompt content generator 326, and the audio prompt generator 328.
  • Figure 4 is a flowchart illustrating steps in a method for voice-enabled disease management according to an embodiment of the present disclosure, and related to Figure 3.
  • the disease parameter tracker 322, as shown in Figure 3 and as illustrated in Figure 4 at 402, is configured to receive and store the gathered health data received from the one or more connected health devices 134, for example at the mobile disease management module 130.
  • gathered health data is provided directly from the connected health device 134 to the disease parameter tracker 322 or to the disease management database 124, for example using a Wi-Fi interface via an API call to a vendor's private cloud backend.
  • the patient's fitness tracker (connected health device) is configured to automatically upload running data to the network disease management module 120.
  • the disease parameter tracker 322 comprises an activity/blood sugar level tracker.
  • the one or more connected health devices are configured to receive gathered health data selected from the group consisting of: blood sugar level; exercise data; weight; amount of sleep; food intake; geolocation data; and time of day.
  • a pattern tracker 324 as shown in Figure 3 is in communication with the disease parameter tracker 322 and, as shown in Figure 4 at 404, is configured to generate one or more scores based on how far a patient's measured disease parameter is from a clinician-set target disease parameter for the patient.
  • the pattern tracker 324 performs pattern tracking based on analysis of disease parameter tracker data. To help perform the pattern tracking and analysis, the pattern tracker 324 also uses data received from the disease parameter tracker 322 as training data to train the pattern tracker to improve operation.
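One deliberately simple way to score how far a measured parameter is from a clinician-set target range is sketched below. The disclosure does not specify a scoring function, so the function and the convention that 0 means "within target" are illustrative assumptions.

```python
# Hypothetical scoring sketch for the pattern tracker: the score is the
# distance of a reading from the clinician-set target range; 0 means the
# reading is within target.
def target_deviation(measured: float, low: float, high: float) -> float:
    """Distance of a reading from the clinician-set target range [low, high]."""
    if measured < low:
        return round(low - measured, 2)
    if measured > high:
        return round(measured - high, 2)
    return 0.0

print(target_deviation(7.8, 4.0, 7.0))  # 0.8 (above target)
print(target_deviation(5.5, 4.0, 7.0))  # 0.0 (within target)
```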
  • a prompt content generator 326, as shown in the embodiment of Figure 3, is in communication with the pattern tracker 324 and configured to, as shown in Figure 4 at 406, generate patient-specific content identifying one or more targeted disease parameters that require attention to improve the patient's health condition.
  • generating the patient-specific content comprises developing time-based success scenarios.
  • the prompt content generator 326 provides disease-specific business logic, for example based on a knowledge base or stored rules, to provide automated functionality of a virtual coach for disease management.
  • the prompt content generator 326 is implemented as a generative artificial intelligence (AI) module either including or configured to communicate with disease-specific business logic or a disease-specific knowledge base.
  • the prompt content generator 326 determines what data is normal or abnormal/outlier for a particular patient, and then correlates or finds correlations for abnormal data to provide associated feedback to improve the patient's health condition.
  • the prompt content generator 326 comprises a lookup table.
  • the pattern tracker determines, based on observed sleep data, that the patient 102 is not meeting sleep goals.
  • the prompt content generator 326, based on an identification of abnormal or off-target sleep data relating to sleep goals, provides the following feedback to the patient: "Try to get to bed 1 hour earlier tonight and you'll find you'll have significantly more energy tomorrow."
  • the prompt content generator can provide the following subsequent feedback: "Congratulations! How are you feeling this morning?"
  • the prompt content generator 326, as it gets more intelligent, notices that every Tuesday a patient has low blood sugar, and can inquire directly with the patient when no correlation is found; if a correlation is found, for example observing only 4 hours of sleep on Monday night vs. a regular 8 hours, the prompt content generator 326 can provide a voice prompt/message to the patient that they seem to have low blood sugar when they haven't slept well.
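The every-Tuesday-low-blood-sugar example could be detected with a simple weekday tally. The data layout, the 4.0 mmol/L low threshold, and the "recurring" rule (at least two readings, all low) are illustrative assumptions, not the patented method.

```python
# Hedged sketch of the weekday-pattern idea: flag a weekday on which the
# patient's glucose readings are consistently low, as a candidate for a
# follow-up correlation check (e.g. against the previous night's sleep).
from collections import defaultdict

def recurring_low_days(readings: list[tuple[str, float]], low: float = 4.0) -> list[str]:
    lows = defaultdict(int)
    totals = defaultdict(int)
    for weekday, glucose in readings:
        totals[weekday] += 1
        if glucose < low:
            lows[weekday] += 1
    # A weekday "recurs" if it has 2+ readings and every one of them was low.
    return [d for d in totals if totals[d] >= 2 and lows[d] == totals[d]]

readings = [("Tue", 3.5), ("Tue", 3.8), ("Wed", 5.5), ("Wed", 6.0)]
print(recurring_low_days(readings))  # ['Tue']
```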
  • the prompt content generator 326 stores business logic to, based on observed patterns and on data captured in the pattern tracker 324, create data or content for use in providing or delivering audio prompts, as a human coach might convey during a coaching session, based on information known and stored in the prompt content generator 326.
  • the prompt content generator 326 stores and codifies, for example in a machine-readable memory, knowledge of the interactions between good glycemic control, exercise, nutrition and sleep to provide a virtual diabetes coaching functionality.
  • knowledge gleaned from a clinical team and/or from health portal data is coded in the prompt content generator 326 to develop the virtual coaching functionality.
  • the pattern tracker 324 provides an activity success score to the prompt content generator 326, for example to determine which coaching prompt has the highest likelihood of improving health outcomes; in an example embodiment, the higher the score, the better the correlation.
  • the prompt content generator 326 provides an activity simulation to the pattern tracker 324, for example based on building time-based success scenarios constrained by a rule set.
  • An audio prompt generator 328 as shown in Figure 3 is in communication with the prompt content generator 326.
  • the audio prompt generator 328 receives activity recommendations from the prompt content generator 326.
  • the audio prompt generator 328 is configured to convert data or content from the prompt content generator into actionable audio or voice prompts for delivery via the voice service platform 140.
  • the actionable voice prompt comprises an audio file generated by the audio prompt generator and provided as an output to a speaker or other audio output device, such as the smart speaker 110 or a speaker of a mobile device running the mobile disease management module 130.
  • the actionable voice prompts comprise a conversational-style prompt including: data for improvement of the patient's health condition; and a personalized motivational message associated with the data for improvement of the patient's health condition. In an example embodiment, the personalized motivational message comprises an encouragement selected based on the relationship of the data for improvement of the patient's health condition to an associated target.
  • the audio prompt generator 328 is provided as part of the network disease management module 120; comprises a very sophisticated skill; and does not simply replicate what can be done on a mobile app.
  • the audio prompt generator 328 is configured to generate an actionable voice prompt, where the content of the actionable voice prompt is generated in a manner such that the voice prompt encourages or generates a behavior change in the patient.
  • the prompt content generator 326 generates health data content.
  • the audio prompt generator 328 generates context for the health data content to yield the greatest chance of impact and success.
  • the audio prompt generator 328 provides an encouraging and congratulatory personalized motivational message when the patient is achieving their target.
  • the audio prompt generator 328 provides a more empathetic personalized motivational message and draws on other resources, for example trying to connect the patient to their coach or mentor. How the audio prompt generator 328 responds to the patient depends on how the patient is doing, which is determined based on the received voice command and on health target data established by the patient's clinician, and optionally additionally based on the health portal data.
  • the audio prompt generator 328 incorporates empathy and responds to the patient in a unique way based on context.
  • the manner in which the audio prompt generator 328 incorporates empathy and responds to the patient is tailored to the age of the patient, as different demographics are motivated in different ways.
  • the audio prompt generator 328 is configured to incorporate empathy and respond to the patient according to a first interaction framework when the patient is a child, according to a second interaction framework when the patient is an adult, and according to a third interaction framework when the patient is a senior.
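By way of non-limiting illustration, the demographic-tailored framework selection described above can be sketched as follows. The age thresholds and framework labels are assumptions for illustration only; the disclosure does not specify them.

```python
# Hypothetical sketch of demographic-based interaction framework selection.
# Age boundaries (13, 65) are illustrative assumptions, not from the disclosure.

def select_interaction_framework(age: int) -> str:
    """Return an interaction framework label for a patient based on age."""
    if age < 13:
        return "child"   # first interaction framework
    if age < 65:
        return "adult"   # second interaction framework
    return "senior"      # third interaction framework
```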
  • the voice service application 122 processes patient queries and/or responses and determines a patient context in the absence of asking the patient context-related questions.
  • the voice service application 122 is configured to provide voice samples of patient queries and/or responses to a voice analytics engine and to receive, from the voice analytics engine, data on patient state of mind and/or other emotional health parameters, based only on analysis of the patient's voice by the voice analytics engine.
  • the operation of the voice analytics engine, or speech analytics software, is outside the scope of this disclosure, and related solutions are known to one of ordinary skill in the art.
  • a mobile prompt generator 329 is provided as part of the network disease management module 120 and configured to generate an actionable mobile prompt, for example a text, audio or other media prompt.
  • the mobile prompt is delivered to the patient via the mobile disease management module 130 at a mobile device to encourage or generate a behavior change in the patient, in a manner similar to the audio prompt generator 328 as described above.
  • a detailed discussion of the operation of the mobile prompt generator 329 is omitted for the sake of brevity, with the implementation details being apparent to one of ordinary skill in the art based on the earlier detailed description herein of the functionality of the similarly functioning audio prompt generator 328.
  • the audio prompt generator 328 and/or the optional mobile prompt generator 329 is/are configured to enable behavior change in a user or patient.
  • Embodiments of the present disclosure are ideally suited to enable the science behind behavior modification. It is accepted globally that changes in lifestyle will improve the condition of a patient suffering from a disease, and there are even some who believe that a patient could be medication-free through behavior change alone.
  • the disease parameter tracker 332, the pattern tracker 324, prompt content generator 326 and audio prompt generator 328 cooperate to provide an understanding of a user or patient to the point of providing meaningful incentives, for example including gamification encouragement, to drive successful behavior change.
  • the ability to deliver some of the behavior change functionality via voice has been found to be more effective than only delivering via mobile communication.
  • the behavior change prompts are delivered over a plurality of modes or modalities, and can involve one or more of a clinician, a parent/guardian/mentor, or a patient population relevant to the patient. All of these improve the chances of impacting patient behavior.
  • the behavior change prompt comprises a reminder, such as an appointment reminder to attend an appointment, or a testing reminder, for example a reminder to take a blood pressure measurement, or measure blood sugar levels.
  • the behavior change prompt comprises a treatment instruction, such as an instruction to take a medication or group of medications at a certain time, or around a certain event, e.g. with an upcoming meal.
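The reminder and treatment-instruction prompt types described above could be modeled, for illustration only, as a simple data structure. The field names and the helper function are hypothetical and not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class BehaviorChangePrompt:
    kind: str      # e.g. "appointment_reminder", "testing_reminder", "treatment_instruction"
    message: str   # the spoken or displayed prompt text
    modality: str  # delivery mode, e.g. "voice" or "mobile_text"

def build_treatment_instruction(medication: str, timing: str) -> BehaviorChangePrompt:
    """Build a hypothetical treatment-instruction prompt for voice delivery."""
    return BehaviorChangePrompt(
        kind="treatment_instruction",
        message=f"Remember to take your {medication} {timing}.",
        modality="voice",
    )
```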
  • the system 100 of the present disclosure is integrated into a broader healthcare system to drive new opportunities for patient engagement and compliance, providing enhanced behavior change functionality.
  • the system 100 of an embodiment of the present disclosure is in communication with a loyalty rewards system such that stored health data for a user, based on data collected from the disease management database and/or from the one or more connected health devices 134, is compared to a target; loyalty rewards are awarded in response to meeting or exceeding a target.
  • the loyalty points can be health-related, for example providing points that can be used to offset costs of purchasing medication or other health-related products or services.
  • the system is configured to award one or more types of loyalty points, and to perform any necessary point value conversion, to best encourage patient engagement and compliance; for example, for youth trying to meet diabetes-related targets, the loyalty points can relate to music purchases or mobile device app purchases, or can be direct accumulation of dollar amounts towards gift cards for general or specific purchases.
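The target comparison and reward logic described above can be sketched minimally as follows. The point values are assumptions, and the sketch assumes "meeting or exceeding" means a higher measured value is better; for some vital signs the comparison would be inverted.

```python
# Hypothetical sketch of loyalty point awarding on meeting or exceeding a target.
# Point value and comparison direction are illustrative assumptions.

def award_loyalty_points(measured: float, target: float,
                         points_per_success: int = 10) -> int:
    """Award points when stored health data meets or exceeds the target."""
    return points_per_success if measured >= target else 0
```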
  • the system of the present disclosure is integrated with a prescription fulfilment system and configured to request a prescription refill, for example through integration with a health-related loyalty rewards system or another type of direct integration.
  • Many known health-related apps connected to a glucometer are siloed; the glucometer manufacturer has access to data, but there are complaints around not being able to share data with a clinician.
  • Embodiments of the present disclosure integrate with a clinician or provider electronic medical records (EMR), for example using the EMR interface 128 as shown in Figure 1, which can integrate securely with payers and provider systems, pharmacy systems, etc. to deliver this benefit.
  • Such a solution according to an embodiment of the present disclosure provides one or more of the following value enhancements: the ability to measure and track compliance on the system, in a better way than other systems; compliance tracking can be tied to the behavior modification and is a type of incentive; incentives can be health-related, but other incentives are possible as well depending on partnerships; and if a provider paying health insurance can be confident that a user is doing what a clinician has instructed, and knows this will ultimately save costs for the insurance, savings can be given back to the user in the form of incentives, as many insurance companies have enough statistics to know how much non-compliance will cost them in the end.
  • Figure 5 is a block diagram of components of a system for voice-enabled disease management according to another example embodiment of the present disclosure.
  • the embodiment in Figure 5 receives specific health queries/commands from a patient via a voice command, and provides answers.
  • the embodiment of Figure 5 leverages health content that is available to digital health services via APIs, but which health content is almost exclusively designed to be presented through a desktop experience, with only some considering formatting for mobile, and none having considered how to make the content available via voice interface.
  • An example embodiment associated with Figure 5 comprises real-time formatting or translation of a question into a query properly formatted for a third party portal, the result of which is then formatted into a suitable voice response, and optionally additionally delivered via other modalities, such as via text/mobile feedback.
  • the network disease management module 120 comprises: a question intent converter 522, a pre-processor 524, and a post processor 526.
  • the question intent converter 522 and the post processor 526 are in communication with the voice service application 122.
  • each of the question intent converter 522 and the post processor 526 is in communication with the voice service application 122.
  • the voice service application 122 comprises one or more of the question intent converter 522 and the post processor 526.
  • Figure 6 is a flowchart illustrating steps in a method related to Figure 5 for voice-enabled disease management according to an embodiment of the present disclosure.
  • the question intent converter 522 determines, or selects, the health information portal to be used as the source of data to answer a question in the received voice command, and the pre-processor 524 manages the interface to, and handles communication with, the selected health information portal.
  • the question intent converter 522, as shown in Figure 5 and as illustrated in Figure 6 at 602, is configured to convert the patient's voice command to a patient inquiry including additional patient context data in a format compatible with the health portal data in the particular health information portal to be accessed.
  • the question intent converter 522 in Figure 5 is configured to identify that a patient's voice command is a general health query best answered using health data from a health information portal, rather than a patient-specific query as discussed in relation to Figure 3.
  • Examples of patient-specific queries associated with Figure 3 include: "What are my targets?" and "I just took a glucose reading; how am I doing?"
  • Examples of general health queries associated with Figure 5 include: "What is the relationship between diabetes and exercise?" or "Can type 2 diabetes be cured?”
  • the voice command comprises an explicit question, such as would normally conclude with a question mark when written, such as "What can I eat?", and which is most often characterized by the speaker's voice rising in intonation at the end of the voice command.
  • the voice command comprises an implicit question embedded in a statement, such as "I want to know what I can eat".
  • the question intent converter 522 comprises semantic processing means configured to identify components of a query within a voice command that presents as a statement.
  • the question intent converter 522 comprises semantic processing means, or semantic processing functions, and is configured to determine, based on an analysis of the patient's voice command, that the voice command is a general health query best answered using health data from a health information portal.
  • the question intent converter 522 adds patient information known from a patient record, such as the patient's age and disease status, or other information such as current weight and other health data gathered from the one or more connected health devices.
  • the question intent converter 522 is configured to convert a patient query, or received voice command, of "I'm sick; what can I eat?" to the following reformatted question that is suitable for obtaining health data from a health information portal: "Youth aged 12 with type 2 diabetes and a temperature of 102 degrees Fahrenheit looking for what they can eat."
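The query conversion described in the preceding example can be sketched as follows. The simple string handling and patient-record field names are illustrative stand-ins for the semantic processing the disclosure contemplates, and are assumptions.

```python
# Hypothetical sketch of converting a voice command into a portal-ready query
# enriched with patient record data. Field names ("age", "disease",
# "temperature_f") are illustrative assumptions.

def reformat_query(voice_command: str, patient: dict) -> str:
    """Rewrite a patient's voice command into a question formatted for a
    health information portal, adding known patient context."""
    # Strip conversational framing; a production system would use semantic
    # processing rather than simple string manipulation.
    question = voice_command.strip().rstrip("?").lower()
    context = (f"Youth aged {patient['age']} with {patient['disease']} "
               f"and a temperature of {patient['temperature_f']} degrees Fahrenheit")
    return f"{context} looking for: {question}."
```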
  • a pre-processor 524 is, as shown in Figure 5 and as illustrated in Figure 6 at 604, in communication with the question intent converter 522 and with the health portal data 150.
  • the pre-processor considers the content of the question/intent, and matches the question to one of a plurality of health information portals based on the content of the question/intent. In an embodiment, the pre-processor 524 comprises different logic for portal communication based on which portal is to be communicated with.
  • the pre-processor 524 specifies a data format, or schema, for health portal data.
  • the health portal data format can include more specific content blocks, such as Symptoms and Diagnosis.
  • the pre-processor 524 can provide the Health NavigatorTM portal with 2 symptoms, and the pre-processor 524 will handle the expected dialogue/interaction with the Health NavigatorTM health portal.
  • the pre-processor 524 directs the question to the Health NavigatorTM database/portal based on determination of a symptom-based query and handles the interaction/communication through to the recommendation of whether to treat at home, call a physician, or visit an emergency room.
  • the pre-processor 524 directs the question to the HealthwiseTM database based on analysis of the voice command content and obtains the required data from that database.
  • the pre-processor 524 is configured to convert data based on a patient's voice command to a health portal data-formatted query to facilitate health portal data content lookup in a specific health information portal having a health portal data format.
  • the pre-processor 524 crawls through the health portal data 150 to develop a schema for the supported content portal(s) and formats a lookup table in a way that facilitates accurate content lookup.
  • the pre-processor 524 uses pattern matching and machine learning to perform pre-processing and health portal data format matching/mapping in real time, for example by analyzing titles of articles and parsing the question to match the titles of articles.
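The article-title matching step can be sketched with simple fuzzy string matching as a stand-in for the pattern matching and machine learning described above; the function and its behavior are assumptions for illustration.

```python
import difflib

def match_article(question: str, article_titles: list) -> str:
    """Pick the portal article whose title best matches the parsed question.
    difflib's fuzzy matching is an illustrative stand-in for the disclosure's
    pattern matching and machine learning."""
    matches = difflib.get_close_matches(question, article_titles, n=1, cutoff=0.0)
    return matches[0]
```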
  • a post-processor 526 as shown in Figure 5 is in communication with the health portal data 150 and with the voice service application 122.
  • the post-processor 526, as illustrated in Figure 6 at 606, is configured to reformat or modify the obtained health portal data prior to providing the context-sensitive voice feedback to the patient. In an example embodiment, if there are 3 pages of content available, the post-processor 526 determines only to provide or read a synopsis of the article as the context-sensitive voice feedback, and/or to ask whether to deliver the whole article to the patient, such as via the mobile disease management module.
  • the post-processor 526 is configured to make determinations on how to deliver content, for example based on a comparison of the length of the content, the format of the content, or both, to stored thresholds for voice delivery to patients in general, or to a particular patient. For example, if the content includes multimedia content, the post-processor 526 can determine to provide or speak a subset of the content through the virtual assistant, and/or to provide some or all of the content to the patient in rich text/images/video on their mobile device via the mobile disease management module, for example dividing the content into smaller content portions when appropriate.
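The delivery determinations described above might be sketched as follows; the word-count threshold, the return structure, and the follow-up question are all assumptions for illustration.

```python
# Hypothetical sketch of post-processor delivery planning: full voice readout
# for short content, a spoken synopsis plus full mobile delivery for long or
# multimedia content. The 100-word threshold is an assumed value.

def plan_delivery(content: str, has_multimedia: bool,
                  voice_word_limit: int = 100) -> dict:
    """Decide what to speak via the voice interface and what (if anything)
    to send to the mobile disease management module."""
    words = content.split()
    if has_multimedia:
        # Speak a subset; deliver rich content to the mobile device.
        return {"voice": " ".join(words[:voice_word_limit]), "mobile": content}
    if len(words) <= voice_word_limit:
        return {"voice": content, "mobile": None}
    synopsis = " ".join(words[:voice_word_limit])
    return {"voice": synopsis + " ... Shall I send the full article to your phone?",
            "mobile": content}
```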
  • An example embodiment will now be described with respect to an example implementation for diabetes management. It is to be understood that other example embodiments provide similar functionality tailored to management of a particular disease, for example a lung disease such as chronic obstructive pulmonary disease (COPD), asthma, pulmonary fibrosis or cystic fibrosis.
  • This example embodiment comprises a connected health solution for patients with Type 2 diabetes that connects a patient to their circle of care.
  • a natural language interface enables a patient to inquire about their targets to increase compliance and comfort. Insights from data collected allow the system to encourage the patient to be successful and demonstrate empathy when the patient is struggling.
  • Some example interactions are provided below, with reference to elements shown in Figure 1, in which a patient 102 issues a voice command via a smart speaker 110, which is received via a voice service platform 140, which in this example is Amazon Alexa.
  • the network disease management system 120 receives the voice command, or voice data associated with the voice command, from the patient and obtains stored health data from a disease management database 124 and/or a health information portal 150, to provide context-sensitive voice feedback to the patient based on the received voice command and on the stored health data.
  • the one or more connected health devices 134 comprise a blood glucometer.
  • This example embodiment provides a holistic approach to patient care.
  • vital sign data is obtained from one or more connected health devices 134 and the patient is encouraged towards achieving their goals.
  • the example embodiment leverages a wealth of patient information from a health information portal, such as HealthwiseTM, that is transformed through conversational exchange.
  • Answer 5 begins with an empathetic response, which encourages further interaction with the system and increases the likelihood of compliance with instructions to meet targets.
  • the system transparently obtains health data from a selected health information portal based on the received question/command, and also based on patient-specific data such as knowledge of one or more of the patient's age, weight, blood glucose targets, etc. from the disease management database 124.
  • the system also proactively prompts for follow-up, which again encourages interaction and increases the likelihood of the patient being able to take the required action to address the question/command.
  • command 7 is similar to command 3
  • the system according to the example embodiment not only recognizes that the reading is above target, but is also able to make use of exercise and sleep data gathered from connected health devices 134, and not entered via voice command or the current command, to provide additional insight into why the blood glucose level is above target. Integration with wearable devices provides deeper insights when a patient is struggling, and provides context when vital sign readings are outside of desired targets.
  • the system is configured to request additional information, to provide additional data points to the system, which can then be incorporated by a clinician into modified goals, or can simply help to provide an explanation for a vital sign reading, such as a blood glucose reading, that may otherwise have been worrisome.
  • the above example embodiment addresses one or more of the challenges associated with patient engagement, adherence and compliance within the context of diabetes care management.
  • Other example embodiments of the present disclosure are tailored to support patients with other diseases to drive improved outcomes.
  • the one or more connected health devices 134 comprise a Bluetooth-enabled spirometer.
  • the present disclosure provides a method and system for voice-enabled management of a lung disease such as chronic obstructive pulmonary disease (COPD), asthma, pulmonary fibrosis and cystic fibrosis.
  • Such example embodiments can encourage children with cystic fibrosis to exercise and complete specific breathing exercises on their spirometer, which would have a positive impact in slowing the rate of decline in lung function, helping to clear mucus from the lungs, allowing for easier breathing, and creating more reserve for the whole body to rely on during periods of lung infection.
  • Figure 7 is a block diagram of a system for voice-enabled disease management according to another embodiment of the present disclosure.
  • a portion of the system 100 of Figure 1 is shown with a different embodiment of a smart speaker 710.
  • the smart speaker 710 is operable in two modes: a wake word detection mode; and a non-verbal wake condition detection mode.
  • the smart speaker 710 comprises a wake word detector 712, or wake word detection module, configured to detect, via a microphone 714, a wake word (such as "Alexa", "Hey Siri", "OK, Google", or a user-defined wake word) and to begin processing commands following the wake word in a wake word detection mode.
  • the smart speaker 710 further comprises a non-verbal wake condition detector 716, or non-verbal wake condition detection module, and a motion sensor or camera 718.
  • the non-verbal wake condition detector 716, which can be a presence/user detection module, is configured in an embodiment to: detect, via the motion sensor or camera 718, a non-verbal wake condition, such as an action or gesture; and activate operation of the smart speaker in response to the detected non-verbal wake condition.
  • the detected non-verbal wake condition comprises detection of presence of a user, for example within a detection distance from the smart speaker.
  • the detected non-verbal wake condition comprises detection of a non-authenticated wake action or gesture, such as a hand wave or the presence of a face.
  • the detected non-verbal wake condition comprises detection of an authenticated wake action, such as face detection to authorize a specific user.
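The three non-verbal wake condition types listed above (user presence, non-authenticated gesture, and authenticated action) could be dispatched, for illustration, as follows. The event field names and return labels are hypothetical.

```python
# Hypothetical sketch of dispatching the three non-verbal wake condition
# types described in the disclosure. Event keys are illustrative assumptions.

def handle_wake_event(event: dict) -> str:
    """Classify a detected non-verbal wake condition, most specific first."""
    if event.get("authenticated_user"):
        # Authenticated wake action, e.g. face detection of a specific user.
        return f"wake:authenticated:{event['authenticated_user']}"
    if event.get("gesture"):
        # Non-authenticated wake action, e.g. a hand wave or a face present.
        return "wake:gesture"
    if event.get("presence"):
        # Presence detected within the detection distance.
        return "wake:presence"
    return "idle"
```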
  • the system can advantageously provide customized feedback, such as based on stored data relating to the authenticated user.
  • the system provides customized feedback based on a detected condition (e.g. a detected emotional condition, a detected physical or physiological condition) of the authenticated user, such as based on a detected facial expression.
  • the system adjusts the content of the feedback, the tone of the feedback, or both, in response to the detected condition.
  • the system of Figure 7 is configured to proactively provide a voice prompt to the user in response to a detected non-verbal wake condition and in the absence of a verbal wake word.
  • the system is configured to provide a voice prompt such as a pre-recorded audio file (e.g. an MP3 that says "Good morning, [user name]").
  • the system, in response to the detected non-verbal wake condition, is configured to provide prompts such as asking a user about appointment or goal reminders, reminding about taking medication before an appointment, and proactively asking a user to confirm whether a medication has been taken so that the system can check it off a user's medication list.
  • Embodiments of the present disclosure associated with Figure 7 provide not just the functionality of a non-verbal wake condition, but also a mechanism to enable a third party voice services-enabled voice device (e.g. smart speaker) to interact with a user on its own schedule, without a user having to do anything other than be present.
  • the system determines whether to provide a proactive verbal prompt in response to both detection of a non-verbal wake condition and an appropriate time of day. For example, if an Alexa skill is triggered at 3am when a user goes for a midnight snack, the system is configured not to send a voice prompt right away, and to wait until a combination of both user presence and an appropriate time of day.
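The combined presence and time-of-day gating described above can be sketched minimally as follows; the quiet-hour boundaries are assumed values, not specified in the disclosure.

```python
# Hypothetical sketch: only send a proactive voice prompt when the user is
# present AND the time of day is appropriate. Quiet hours (22:00-07:00)
# are an illustrative assumption.

def should_prompt(presence_detected: bool, hour: int,
                  quiet_start: int = 22, quiet_end: int = 7) -> bool:
    """Gate proactive prompts on user presence and an appropriate hour."""
    in_quiet_hours = hour >= quiet_start or hour < quiet_end
    return presence_detected and not in_quiet_hours
```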
  • the user presence and time of day functionality is provided at the smart speaker itself, or in the network/cloud.
  • Existing approaches have no way for a smart speaker to initiate interaction with the user; it is always the user who initiates interaction with the known device.
  • Embodiments of the present disclosure provide an improvement over known approaches by providing a smart speaker 710 with a motion sensor or camera 718 configured to enable the voice service application to initiate interaction, for example in conjunction with the smart speaker 710, with a user in response to detection of a non-verbal wake condition, for example using the non-verbal wake condition detector 716.
  • Embodiments of the present disclosure provide a solution that integrates a plurality of components that are aware of each other and contribute to enhanced disease management.
  • Known wearable devices currently show some information on an app, but embodiments of the present disclosure provide an improvement that supports disease management for a chronic illness that incorporates being able to perform one, many or all of the following: pull in data from an existing health portal (e.g.
  • Embodiments of the present disclosure provide a voice-first cloud-based disease management system including a voice service application which is integrated with a mobile application to facilitate easy patient interaction and data entry, and can provide an empathetic 'coach' via context-sensitive voice feedback.
  • a method and system are provided for voice-enabled disease management.
  • the system includes a network disease management module having a voice service application configured to run on a network device to provide voice-based disease management services to a patient in a voice interaction mode.
  • a mobile disease management module includes a mobile service application configured to run on a mobile device to provide graphical or text-based disease management services to the patient In a mobile interaction mode.
  • a disease management database is configured to provide a common set of data accessible by the voice service application and the mobile service application such that the voice-based disease management services provided in the voice interaction mode are integrated with the visual or text-based disease management services provided in the mobile interaction mode.
  • the system allows a patient to inquire about health targets and increase compliance and comfort.
  • embodiments of the present disclosure provide a non-abstract improvement in the functioning of a computer or to computer technology or a related technical field, including related systems and methods.
  • Embodiments of the present disclosure provide specific details on how the system accomplishes a result that realizes an improvement in computer functionality, including providing integrated voice and mobile applications that both provide services based on a common set of data. This represents an improvement in the way a system with integrated components stores and retrieves data in a common memory in communication with a voice services application and with a mobile services application, and is a specific implementation of a solution to a problem in the computer and software arts.
  • Embodiments of the present disclosure provide a non-conventional and non-generic arrangement of computer components to achieve a non-abstract improvement in computer technology.
  • Embodiments of the disclosure can be represented as a computer program product stored in a machine-readable medium (also referred to as a computer-readable medium, a processor-readable medium, or a computer usable medium having a computer-readable program code embodied therein).
  • the machine-readable medium can be any suitable tangible, non-transitory medium, including magnetic, optical, or electrical storage medium including a diskette, compact disk read only memory (CD-ROM), memory device (volatile or non-volatile), or similar storage mechanism.
  • the machine-readable medium can contain various sets of instructions, code sequences, configuration information, or other data, which, when executed, cause a processor to perform steps in a method according to an embodiment of the disclosure.

Abstract

A method and system are provided for voice-enabled disease management. The system includes a network disease management module having a voice service application configured to run on a network device to provide voice-based disease management services to a patient in a voice interaction mode. A mobile disease management module includes a mobile service application configured to run on a mobile device to provide graphical or text-based disease management services to the patient in a mobile interaction mode. A disease management database is configured to provide a common set of data accessible by the voice service application and the mobile service application such that the voice-based disease management services provided in the voice interaction mode are integrated with the visual or text-based disease management services provided in the mobile interaction mode. The system allows a patient to inquire about health targets and increase compliance and comfort.

Description

SYSTEM AND METHOD FOR VOICE-ENABLED DISEASE MANAGEMENT
CROSS-REFERENCE
[0001] This application claims the benefit of priority of United States Patent Application Serial No. Θ2/5Θ1.349, filed on November 28, 2017, the contents of which are hereby incorporated by reference.
FIELD
[0002] The present disclosure relates to systems and methods for managing data, including but not limited to systems and methods for disease management.
BACKGROUND
[0003] Devices can be used to assist a patient in tracking and achieving specific goals and objectives to mitigate effects of a disease, chronic illness, or medical condition with which the patient has been diagnosed. For example, a fitness tracker or other connected device can be used to measure a particular parameter with respect to improving patient health and/or mitigating the effects of the disease.
[0004] However, such devices are limited in scope and do not provide a full view of multiple aspects affecting management of the disease or condition. Moreover, for diseases such as diabetes that affect children and youth, the devices can be difficult for the patient or their adult relatives to use properly and effectively.
[0005] Improvements in devices, systems and/or methods for disease management are desirable.
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] Embodiments of the present disclosure will now be described, by way of example only, with reference to the attached Figures.
[0007] Figure 1 is a block diagram of a system for voice-enabled disease management according to an embodiment of the present disclosure.
[0008] Figure 2 is a flowchart illustrating steps in a method for voice-enabled disease management according to an embodiment of the present disclosure.
[0009] Figure 3 is a block diagram of components of a system for voice-enabled disease management according to an example embodiment of the present disclosure.
[0010] Figure 4 is a flowchart illustrating steps in another method for voice-enabled disease management according to an embodiment of the present disclosure.
[0011] Figure 5 is a block diagram of components of a system for voice-enabled disease management according to another example embodiment of the present disclosure.
[0012] Figure 6 is a flowchart illustrating steps in a further method for voice-enabled disease management according to an embodiment of the present disclosure.
[0013] Figure 7 is a block diagram of a system for voice-enabled disease management according to another embodiment of the present disclosure.
BRIEF DESCRIPTION
[0014] A method and system are provided for voice-enabled disease management. In an implementation, the system provides a network module in communication with a mobile device module, both of which communicate with a common database to enable multi-modal (voice, text, etc.) disease management for a patient.
[0015] In an embodiment, the present disclosure provides a system for voice-enabled disease management comprising: a network disease management module comprising a voice service application configured to run on a network device to provide voice-based disease management services to a patient in a voice interaction mode; a mobile disease management module comprising a mobile service application configured to run on a mobile device to provide graphical or text-based disease management services to the patient in a mobile interaction mode, the mobile disease management module being in communication with the network disease management module; and a disease management database in communication with the voice service application and the mobile service application, the disease management database configured to provide a common set of data accessible by the voice service application and the mobile service application such that the voice-based disease management services provided by the voice service application in the voice interaction mode are integrated with the visual or text-based disease management services provided by the mobile disease management application in the mobile interaction mode.
[0016] In an example embodiment, the voice-based disease management services provided by the voice service application and the graphical or text-based disease management services provided by the mobile service application are both based on the common set of data provided by the disease management database.
[0017] In an example embodiment, the voice service application is configured by a first non-transitory memory storing statements and instructions for execution by a first processor to: receive voice data associated with a voice command generated by a patient; identify spoken patient health data in the received voice command; obtain stored health data related to the spoken patient health data; and generate voice feedback data for providing context-sensitive voice feedback to the patient based on the obtained stored health data and on the identified spoken patient health data, the context-sensitive voice feedback suggesting a course of action or requesting additional information based on the received voice command.
[0018] In an example embodiment, the mobile service application is configured by a second non-transitory memory storing statements and instructions for execution by a second processor to: receive gathered health data from one or more connected health devices; and provide the gathered health data to the network disease management module.
[0019] In an example embodiment, the mobile service application is configured by a second non-transitory memory storing statements and instructions for execution by a second processor to: receive, from the network disease management module, disease-related patient health data associated with the context-sensitive voice feedback; and cause display of the disease-related patient health data to the patient.
[0020] In an example embodiment, the network disease management module is configured to: obtain a first data set associated with the voice command, and provide the first data set to the disease management database, the voice command received at the network disease management module; and obtain a second data set associated with the gathered health data, and provide the second data set to the disease management database, the gathered health data received at the mobile disease management module.
[0021] In an example embodiment, the disease management database, the network disease management module, and the mobile disease management module cooperate to: provide a first disease management function using the voice-based disease management service in the voice interaction mode; and provide the same first disease management function using the mobile disease management service in the mobile interaction mode.
[0022] In an example embodiment, the disease management database, the network disease management module, and the mobile disease management module cooperate to: provide a second disease management function using the mobile disease management service in the mobile interaction mode to supplement a first disease management function provided using the voice-based disease management service in the voice interaction mode.
[0023] In an example embodiment, the disease management database, the network disease management module, and the mobile disease management module cooperate to: provide a notification via both the voice interaction mode and the mobile interaction mode; and clear the notification from one mode of communication in response to an indication that the notification has been acknowledged via the other mode of communication.
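The cross-mode notification clearing described above can be sketched as follows. This is an illustrative sketch only, not the disclosed implementation; the `Notification` class and the mode names "voice" and "mobile" are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Notification:
    """A patient notification mirrored to both interaction modes."""
    message: str
    pending_modes: set = field(default_factory=lambda: {"voice", "mobile"})

    def acknowledge(self, mode: str) -> None:
        # An acknowledgement received via either mode clears the
        # notification from all modes of communication.
        if mode in self.pending_modes:
            self.pending_modes.clear()

    @property
    def cleared(self) -> bool:
        return not self.pending_modes

n = Notification("Time to log your blood glucose reading.")
n.acknowledge("voice")   # acknowledged via the smart speaker...
assert n.cleared         # ...so it is also cleared from the mobile app
```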
[0024] In an example embodiment, the voice service application is configured to: perform natural language processing on voice data associated with a received voice command to identify spoken patient health data in the received voice command; and create recorded patient health portal data, based on the received spoken patient health data, in a first format similar to a second format of the stored health data in the disease management database.
[0025] In an example embodiment, the network disease management module comprises: an audio prompt generator configured to create an actionable voice prompt for delivery to the patient from the voice service application via a voice service platform, the actionable voice prompt being created based on the gathered health data received from the one or more connected health devices.
[0026] In an example embodiment, the network disease management module comprises: a disease parameter tracker configured to communicate with one or more connected health devices to receive gathered health data; a pattern tracker, in communication with the disease parameter tracker, configured to generate one or more scores based on how far a patient's measured disease parameter is from a clinician-set target disease parameter; a prompt content generator, in communication with the pattern tracker, configured to generate patient-specific content identifying one or more targeted disease parameters that require attention to improve the patient's health condition; and an audio prompt generator, in communication with the prompt content generator, configured to convert the generated patient-specific content from the prompt content generator into actionable voice prompts for delivery to the patient via a voice service platform.
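One way the pattern tracker's scoring could work is sketched below, assuming a clinician-set target expressed as a range. The function name, the normalization by the range bound, and the example values are assumptions for illustration, not the disclosed scoring method.

```python
def target_score(measured: float, target_low: float, target_high: float) -> float:
    """Score how far a measured disease parameter is from the
    clinician-set target range: 0.0 inside the range, growing with
    the relative distance outside it."""
    if target_low <= measured <= target_high:
        return 0.0
    if measured < target_low:
        return (target_low - measured) / target_low
    return (measured - target_high) / target_high

# Blood glucose target of 4-7 mmol/L before breakfast:
print(target_score(5.5, 4.0, 7.0))   # within target, score 0.0
print(target_score(8.4, 4.0, 7.0))   # roughly 0.2, i.e. ~20% above the upper bound
```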
[0027] In an example embodiment, the voice service application is configured to perform natural language processing on a received voice command to identify spoken patient health data in the received voice command, and to format the spoken patient health data similar to stored health data to facilitate obtaining the stored health data related to the voice command, and the audio prompt generator is in communication with the voice service application.
[0028] In an example embodiment, the actionable voice prompt comprises a conversational-style prompt including: data for improvement of the patient's health condition; and a behavior change component associated with the data for improvement of the patient's health condition and customized to the patient.
[0029] In an example embodiment, the behavior change component is created based on the relationship of the data for improvement of the patient's health condition with an associated target.
[0030] In an example embodiment, the disease parameter tracker comprises an activity/blood sugar level tracker, and wherein the one or more connected health devices are configured to receive the gathered health data selected from the group consisting of: blood sugar level; exercise data; weight; amount of sleep; food intake; geolocation data; and time of day.
[0031] In an example embodiment, the network disease management module is in communication with a health information portal storing health portal data, and the network disease management module comprises: a question intent converter configured to convert voice data associated with a patient's voice command to a patient inquiry including additional patient context data in a format compatible with the health portal data; a preprocessor, in communication with the question intent converter and with the health portal data, configured to convert, using a lookup table, data in the patient inquiry to a health portal query to facilitate health portal data content lookup; and a post-processor, in communication with the health portal data and the voice service platform, configured to modify the obtained health portal data prior to providing the context-sensitive voice feedback to the patient.
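The lookup-table preprocessing step could be sketched as below. The table contents, the function name, and the mappings from colloquial patient phrasing to portal terminology are hypothetical illustrations; a real health portal would define its own query vocabulary.

```python
# Hypothetical lookup table mapping colloquial patient phrasing to
# terminology used by the health portal's content index.
QUERY_LOOKUP = {
    "sugar": "blood glucose",
    "shot": "insulin injection",
    "low": "hypoglycemia",
}

def preprocess(patient_inquiry: str) -> str:
    """Convert terms in a patient inquiry, via the lookup table,
    into a query suitable for health portal content lookup."""
    words = patient_inquiry.lower().split()
    return " ".join(QUERY_LOOKUP.get(w, w) for w in words)

print(preprocess("what causes low sugar"))
# -> "what causes hypoglycemia blood glucose"
```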
[0032] In an example embodiment, the voice service application is configured to perform natural language processing on voice data associated with the received voice command to identify the spoken patient health data in the received voice data, and to format the spoken patient health data similar to stored health data in the disease management database to facilitate obtaining the stored health data related to the voice command, and one or more of the question intent converter and the post-processor is in communication with the voice service application.
[0033] In an example embodiment, the system further comprises a secure patient portal, in communication with the network disease management module, configured to provide authorization-based access to patient data to a relative or clinician.
[0034] In an example embodiment, the system further comprises a secure messaging module, configured to enable the patient to securely interact with the clinician without either the patient or the clinician having to share personal contact information.
[0035] In an example embodiment, the secure messaging module is configured to enable sending a voice message between the patient and the clinician.
[0036] In an example embodiment, a social connection module is configured to pair the patient with a mentor for secure interaction via the secure patient portal.
[0037] In an example embodiment, the network disease management module and the disease management database are configured to: compare, to a target, selected patient health data collected from the one or more connected health devices; and send a notification to a loyalty rewards system to award loyalty rewards to the patient in response to the selected patient health data meeting or exceeding the target.
[0038] In an example embodiment, the one or more connected health devices are selected from the group consisting of: a fitness tracker, a weight scale, a glucometer, a spirometer, and a disease-specific data collection apparatus.
[0039] In an example embodiment, the network disease management module comprises the disease management database.
[0040] In an example embodiment, the network disease management module is configured to handle activity from both the mobile disease management module and from a smart speaker configured to receive a voice command from the patient associated with the voice-based disease management services.
[0041] In an example embodiment, the network disease management module comprises a content management system configured to provide an omni-channel patient questionnaire to the patient via the voice interaction mode using the voice service application and/or the mobile communication mode using the mobile disease management application.
[0042] In an example embodiment, the network disease management module and the secure patient portal cooperate to generate and cause the display of a list display selector configured to alternate between displaying a list of all patients and a list of higher risk patients.
[0043] In an example embodiment, the network disease management module comprises an electronic medical record (EMR) interface configured to interoperate with one or more EMR systems to access or update EMR data based on the spoken health data, the obtained stored health data and/or content of the context-sensitive voice feedback.
[0044] In an example embodiment, the system further comprises a smart speaker including a non-verbal wake condition detector configured to detect a non-verbal wake condition in the absence of a verbal wake word.
[0045] In an example embodiment, the smart speaker comprises: a motion sensor or camera; and the non-verbal wake condition detector is configured to receive and process an output of the motion sensor or camera to determine occurrence of the non-verbal wake condition.
[0046] In an example embodiment, the non-verbal wake condition detector comprises a presence detection module and is configured to, in response to detection of a non-verbal wake condition by the motion sensor or camera: detect a non-verbal wake condition; and activate operation of the smart speaker in response to the detected non-verbal wake condition and in the absence of a verbal wake word.
[0047] In an example embodiment, detection of the non-verbal wake condition
comprises a step selected from the group consisting of: detection of presence of a user; detection of a non-authenticated wake action or gesture; and detection of an authenticated wake action to authorize a specific user.
[0048] In an example embodiment, detection of the non-verbal wake condition comprises detection of an authenticated wake action to authorize a specific user, and wherein the non-verbal wake condition detector is configured to provide customized feedback to the patient based on a detected condition of the authenticated user. In an example embodiment, providing customized feedback is performed based on a detected facial expression, and wherein the system adjusts content of the feedback, a tone of the feedback, or both, in response to the detected condition.
[0049] In another embodiment, the present disclosure provides a network disease management apparatus comprising: a processor; and one or more non-transitory machine readable memories storing statements and instructions for execution by the processor to: receive voice data associated with a voice command generated by a patient; identify spoken patient health data in the received voice command; obtain stored health data related to the spoken patient health data; and generate voice feedback data for providing context-sensitive voice feedback for the patient based on the obtained stored health data and on the identified spoken patient health data, the context-sensitive voice feedback suggesting a course of action or requesting additional information based on the received voice command.
[0050] In another embodiment, the present disclosure provides a processor-implemented method of voice-enabled disease management comprising: at a network device, receiving voice data associated with a voice command generated by a patient; identifying spoken patient health data in the received voice command; obtaining stored health data related to the spoken patient health data; and generating voice feedback data for providing context-sensitive voice feedback to the patient based on the stored health data and on the identified spoken patient health data, the context-sensitive voice feedback suggesting a course of action or requesting additional information based on the received voice command.
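The four steps of the method above (receive voice data, identify spoken health data, obtain stored data, generate context-sensitive feedback) can be sketched end-to-end as follows. The regular expression, the `stored_targets` structure, and the feedback wording are hypothetical illustrations, not the disclosed processing.

```python
import re

def handle_voice_command(transcript: str, stored_targets: dict) -> str:
    """Sketch of the claimed method: identify spoken patient health
    data, obtain related stored data, and generate context-sensitive
    feedback (a suggested course of action, or a request for more
    information when nothing is recognized)."""
    # Identify spoken patient health data in the received voice data.
    match = re.search(r"blood (?:sugar|glucose) is ([\d.]+)", transcript)
    if not match:
        # Request additional information based on the received command.
        return "I didn't catch a reading. What is your blood glucose level?"
    reading = float(match.group(1))
    # Obtain stored health data (here, a clinician-set target range).
    low, high = stored_targets["blood_glucose"]
    # Generate feedback suggesting a course of action.
    if reading > high:
        return f"{reading} is above your target of {high}. Consider checking again in an hour."
    if reading < low:
        return f"{reading} is below your target of {low}. Consider having a snack."
    return f"{reading} is within your target range. Keep it up!"

print(handle_voice_command("my blood glucose is 7.8", {"blood_glucose": (4.0, 7.0)}))
```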
[0051] In another embodiment, the present disclosure provides a non-transitory machine readable medium having stored thereon statements and instructions for execution by a processor to perform a method as described and illustrated herein.
[0052] In another embodiment, the present disclosure provides an apparatus comprising: at least one processor; and memory storing computer-readable instructions that, when executed by the at least one processor, cause the apparatus to perform a method as described and illustrated herein.
DETAILED DESCRIPTION
[0053] For simplicity and clarity of illustration, reference numerals may be repeated among the figures to indicate corresponding or analogous elements. Numerous details are set forth to provide an understanding of the embodiments described herein. The embodiments may be practiced without these details. In other instances, well-known methods, procedures, and components have not been described in detail to avoid obscuring the embodiments described.
[0054] Figure 1 is a block diagram of a system 100 for voice-enabled disease management according to an embodiment of the present disclosure. The system 100 comprises a network disease management module 120 and a mobile disease management module 130. In the example embodiment of Figure 1, the network disease management module 120 comprises a voice service application 122 configured to run on a network device to provide voice-based disease management services to a patient in a voice interaction mode. The mobile disease management module 130 is in communication with the voice service application 122, for example via a voice service platform 140, and is configured to provide graphical or text-based disease management services to the patient in a mobile interaction mode. In an example embodiment, the mobile disease management module 130 comprises a mobile service application 132.
[0055] The voice interaction mode comprises a mode in which voice is used to enable interaction between the patient and the system. In an example embodiment, the voice interaction mode comprises a voice communication mode in which voice content or audio content is used to enable communication between the patient and the system. Similarly, the mobile interaction mode comprises a mode in which one or more of graphical, visual and text interaction is used to enable interaction between the patient and the system. In an example embodiment, the mobile interaction mode comprises a mobile communication mode in which graphical content or text-based content is used to enable communication between the patient and the system. In an example embodiment, the graphical content comprises one or more of: still images; moving images, such as animated GIFs; video clips; and longer videos. In an example embodiment, the mobile interaction mode additionally comprises and enables haptic feedback, such as causing a mobile device to vibrate; the haptic feedback can include one or more of tactile feedback or kinesthetic feedback. In addition to the graphical or text-based services/interaction, the mobile interaction mode can additionally comprise any form of non-visual interaction and non-voice interaction as known to one of ordinary skill in the art.
[0056] Both the network disease management module 120 and the mobile disease management module 130 access the same up-to-date data by using the same back-end database, or disease management database 124. In an embodiment, the disease management database 124 is configured to provide a common set of data accessible by the voice service application 122 and the mobile service application such that the voice-based disease management services provided by the voice service application in the voice interaction mode are integrated with the visual or text-based disease management services provided by the mobile disease management application in the mobile interaction mode.
[0057] Consider an example implementation of an embodiment of the present disclosure in which a 12 year old boy with diabetes uses an Amazon Echo™ device and a Samsung Galaxy™ phone to help manage his diabetes. The boy asks: "Alexa, ask MyCoach what my blood glucose targets are", in response to which the Echo device, powered by an embodiment of the present disclosure, responds: "Your blood glucose target before breakfast is between 4 and 7 millimoles per liter and before dinner between 4 and 10 millimoles per liter. 90% of your readings are within target. Keep up the good work!" The system of the embodiment is also configured to provide related text-based feedback to the boy's phone, for example including a detailed listing of his blood glucose readings for the last day or two.
[0058] In this example implementation, the 12 year old boy with diabetes is an example of a patient 102, as shown in Figure 1, diagnosed with a disease, chronic illness or medical condition that needs to be managed. The Amazon Echo™ device is an example of a smart speaker 110 enabling voice interaction, for example via the Amazon Alexa™ cloud-based voice service platform 140, with a system 100 for voice-enabled disease management according to an embodiment of the present disclosure. The Samsung Galaxy™ phone is an example of a mobile device which runs a mobile disease management module 130 according to an embodiment of the present disclosure.
[0059] The network disease management module 120 is a voice-based module according to an embodiment of the present disclosure implemented in a network such as the Internet, the "cloud", or a private internet/cloud, on suitable hardware. In an embodiment, the network disease management module 120 is implemented on one or more network devices, or one or more cloud devices. The mobile disease management module 130 runs on a mobile device, such as a smartphone or tablet. In an embodiment, the patient or user 102 can interact with the system 100 via voice communication using a voice assistant-enabled smart speaker 110, such as an Amazon Echo™, Google Home™, Apple HomePod™, a Nuance-enabled device, or the like. In an implementation, the smart speaker 110 includes a microphone to receive a voice command from a patient, with processing of the voice command typically being performed in the network, or in the cloud. In an implementation, the smart speaker 110 operates in conjunction with the voice service platform 140.
[0060] In an example embodiment, a voice command received at the smart speaker
110 from the patient 102, or voice data associated with the voice command received at the smart speaker, is provided via the voice service platform 140 to a voice service application 122 in the network disease management module 120. In an example embodiment, the voice service application 122 parses and interprets the received voice command or the received voice data associated with the voice command. In an embodiment, the voice service platform 140 is a virtual assistant, or voice assistant, such as an intelligent personal assistant or audio control interface. Voice service platforms, such as Amazon Alexa™, Google Home™, Apple's Siri™ and Microsoft's Cortana™, are capable of voice interaction and completion of tasks based on such voice interaction, such as music playback, setting alarms, creating and updating lists, smart home tasks, and obtaining information such as news, weather and traffic. In an example embodiment, the voice service platform 140 comprises voice services (which can include a third party application programming interface, or API) that allow the voice commands from the patient 102 received by the smart speaker 110 to be processed, understood and acted upon. A voice service platform 140 can be considered similar to an operating system for voice services, where the voice services are available via a smart speaker 110 or similar device.
[0061] In an example implementation, as mentioned earlier, both the network disease management module 120 and the mobile disease management module 130 access the same up-to-date data from the disease management database 124. In an example embodiment, the disease management database 124 is configured to provide long-term storage of patient data, and short-term storage of context and variables; in an example embodiment, the disease management database 124 is configured to address modern data privacy and regulatory requirements around patient information, for example by supporting encryption of fields and dynamic modification of stored data.
[0062] In an example implementation, recordings of a patient's voice command are sent to a network server for processing, and then sent to the voice service application 122 for further action. A typical voice services user would have a voice services account associated with their device, which account would include personal details about the user (e.g., name, address, payment information, buying history, etc.). Accordingly, if a person with a typical smart speaker or other voice-services enabled device shares that they have a high blood pressure reading, someone at the voice service platform provider could associate that health data to the user associated with the account, which is not compliant with HIPAA (Health Insurance Portability and Accountability Act).
[0063] In an implementation in which data is sent to non-HIPAA compliant servers, according to an embodiment of the present disclosure a user can create an anonymous user account (e.g. mdc.patient02ie2@hospital-name.com) on a first dedicated smart speaker 110. The voice service platform provider would have no other information on that individual other than this e-mail address. When the voice service platform 140 contacts the voice service application 122 with the patient ID, the voice service application 122 according to an embodiment of the present disclosure is configured to associate that ID with a named user within a HIPAA compliant platform.
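The association of an anonymous voice-services ID with a named user inside the HIPAA compliant platform could be sketched as follows. The directory structure, the anonymous ID string, and the patient details are hypothetical illustrations; the point is only that the mapping lives exclusively on the compliant side, so the voice service provider never sees it.

```python
# Hypothetical mapping held only inside the HIPAA compliant platform.
# The voice service platform provider sees only the anonymous ID.
PATIENT_DIRECTORY = {
    "anon.patient.0216@hospital-name.example": {"name": "J. Smith", "record_id": 4417},
}

def resolve_patient(anonymous_id: str) -> dict:
    """Associate an anonymous patient ID, received from the voice
    service platform, with a named user record without exposing the
    identity to the voice service provider."""
    patient = PATIENT_DIRECTORY.get(anonymous_id)
    if patient is None:
        raise KeyError("unknown anonymous patient ID")
    return patient

print(resolve_patient("anon.patient.0216@hospital-name.example")["name"])
```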
[0064] In an example implementation, the patient uses a first dedicated smart speaker associated with an anonymous account to enable HIPAA compliant interaction with a disease management system that includes or interacts with non-HIPAA compliant elements, or where the system has one or more voice services or components thereof that are non-HIPAA compliant. In that example implementation, the patient uses a second general purpose smart speaker associated with a patient's regular voice services account, for example with which the patient could order a ride with a car sharing service, or request that music in their library be streamed. In such an example implementation, the patient's anonymous account is associated with the first dedicated smart speaker, and the patient's voice services account is associated with the second general purpose smart speaker; the dedicated first smart speaker is not used with the patient's voice services account, and the first dedicated smart speaker is enabled, according to an embodiment of the present disclosure, to interact in a HIPAA-compliant manner with a non-HIPAA compliant platform or with a platform that includes at least one non-HIPAA compliant element.
[0065] In the embodiment shown in Figure 1, the disease management database 124 is provided as part of the network disease management module 120. In another embodiment, the disease management module 120 is implemented using a microservices architecture or microservices implementation, and is geographically distributed such that its components need not be co-located. In an example embodiment, the disease management database 124 is accessible by, and shared between, the network disease management module 120 and the mobile disease management module 130.
In an example embodiment, the disease management database 124 is provided in communication with the network disease management module 120 and with the voice service platform 140 to enable communication with the mobile disease management module 130, and in an embodiment is not part of the network disease management module 120.
[0066] The disease management database 124 can be located anywhere in the system as long as it can establish and maintain communication with, and accessibility by, the network disease management module 120 and the mobile disease management module 130, for example via the voice service platform 140. In an example embodiment, the network disease management module 120 handles, via the voice service application 122, activity from both the smart speaker 110 and the mobile disease management module 130; data related to such activity is stored in the disease management database 124, which is shared by the voice service application 122 in the network disease management module 120, and the mobile disease management module 130.
[0067] In an example embodiment, as shown in Figure 2, the network disease management module 120 is configured to, at 202, receive voice data associated with a voice command generated by a patient, or to receive the voice command itself. In an example embodiment, the voice command is a command spoken by the patient. In another example embodiment, the voice command is an audio file played by the patient, which may have been generated and recorded by the patient, or by someone else. In an example embodiment, the voice command is received using a microphone, for example at the smart speaker 110. For example, in one embodiment, after the voice command is received at the smart speaker, the network disease management module receives the voice command as the same audio recorded at the smart speaker 110. In another embodiment, the network disease management module 120 receives voice data associated with the received voice command; for example, in an implementation, the voice data is a compressed or reformatted representation of the audio recorded at the smart speaker 110 when the smart speaker received the voice command from the patient.
[0068] Referring to the example discussed earlier, the phrase "Alexa, ask MyCoach what my blood glucose targets are" is an example of a voice command. In an example embodiment, as shown in Figure 2 at 204, the network disease management module 120 is configured to identify spoken patient health data in the received voice command, or in received voice data associated with the voice command. In an example embodiment, the spoken patient health data is identified based on parsing content of the received voice command, or received voice data associated with the voice command, for example using a processor to parse the voice data/voice command and identify the expression "blood glucose targets" as spoken health data from the received voice data/voice command.
In an example embodiment, identifying the spoken patient health data comprises comparing the received voice command with health data or labels associated with health data stored in a database, such as a health information portal or electronic medical record (EMR), to assist in the parsing or other identification of the health data contained in the received voice command.
[0069] In an example embodiment, at 206, the network disease management module 120 is configured to obtain stored health data related to the spoken patient health data. In an embodiment, the stored health data comprises patient health data stored in the disease management database 124. In another embodiment, the stored health data comprises health portal data 160 obtained from one or more health information portals, such as Healthwise™, WebMD™, Health Navigator™ or similar health information portals. In a further embodiment, the stored health data comprises a combination of patient health data stored in the disease management database 124 and health portal data 160 stored in one or more health information portals. At 208 in Figure 2, the network disease management module is configured to provide context-sensitive voice feedback to the patient based on the obtained stored health data and on the identified spoken patient health data. In an embodiment, the context-sensitive voice feedback comprises a suggested course of action or a request for additional information, with the suggestion/request being based on the received voice command.
[0070] Referring to the example discussed earlier, the phrase "Your blood glucose target before breakfast is between 4 and 7 millimoles per liter and before dinner between 4 and 10 millimoles per liter. 90% of your readings are within target. Keep up the good work!" is an example of the context-sensitive voice feedback provided by the network disease management module 120. This example of context-sensitive voice feedback includes: providing the patient's target in two different scenarios/times of day; advising the patient of how the measured readings compare to the target; and providing a related encouragement. In this example, the stored health data relating to the targets is obtained from the disease management database 124, and the determination of the percentage of readings within target is performed based on stored health data in the disease management database 124 and/or from gathered data from one or more connected health devices 134, to be described later.
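The assembly of such feedback from stored targets and gathered readings can be sketched as below. This is an illustrative sketch under assumed names: the function, the 80% encouragement threshold, and the sample readings are not from the disclosure, which only specifies that the percentage within target is computed from stored and/or gathered data.

```python
def glucose_feedback(readings, low, high):
    """Assemble context-sensitive voice feedback from a clinician-set
    target range (stored health data) and gathered glucose readings."""
    pct = round(100 * sum(low <= r <= high for r in readings) / len(readings))
    msg = (f"Your blood glucose target is between {low} and {high} "
           f"millimoles per liter. {pct}% of your readings are within target.")
    if pct >= 80:
        # Append the related encouragement when the patient is doing well.
        msg += " Keep up the good work!"
    return msg

print(glucose_feedback([5.1, 6.8, 7.9, 4.4, 6.0, 5.5, 6.2, 4.9, 5.0, 7.2], 4, 10))
```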
[0071] As shown in Figure 1, in an example embodiment, the network disease management module 120 comprises a voice service application 122. In an embodiment, the voice service application 122 is configured to perform natural language processing on the received voice command to identify the spoken patient health data in the received voice command, for example in association with 204 or 206 in Figure 2. In an example embodiment, the voice service platform 140 is the API through which the voice service application 122 communicates. The voice service application 122 is also configured to format the received spoken patient health data in a format similar to the stored health data, to facilitate obtaining the stored health data related to the voice command. In an example implementation, the voice service application converts the received spoken patient health data to recorded patient health data in a first format, which is similar to a second format of the stored health data. In an example embodiment, the network disease management module 120 creates recorded health data based on the received spoken patient health data, in the first format which is similar to the second format of the stored health data.
[0072] Consider a scenario in which: the stored health data comprises patient health data stored in the disease management database 124; the patient health data comprises glucose readings in the format of mmol/L; but the spoken health data comprises: "My blood sugar is 140 milligrams per decilitre", which is a different format, in this case a different unit of measurement. In this example implementation, the network disease management module creates recorded health data by converting, based on stored relationships to convert from one format to another, the value of 140 mg/dL from the received spoken health data to the equivalent of 7.8 mmol/L (the recorded health data), which is in a format similar to the format of the stored health data. While this example illustrates converting one unit of measure to another unit of measure based on a known conversion, other example embodiments comprise other types of format conversion, such as converting from spoken health data in French to the equivalent recorded health data in English, when the health information database stores health data in English.
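The format-normalization step described in this scenario can be illustrated with a short sketch. The mg/dL to mmol/L conversion factor for glucose is a standard clinical constant, but the function and field names below are illustrative assumptions, not part of the disclosed system:

```python
# Illustrative sketch of converting spoken health data to the stored
# format. The conversion factor is standard for glucose; function and
# unit-label names are assumptions.

MG_DL_PER_MMOL_L = 18.016  # mg/dL per mmol/L for glucose

def to_mmol_per_l(value: float, unit: str) -> float:
    """Normalize a glucose reading to the stored mmol/L format."""
    if unit == "mmol/L":
        return value
    if unit == "mg/dL":
        return round(value / MG_DL_PER_MMOL_L, 1)
    raise ValueError(f"Unsupported unit: {unit}")

# "My blood sugar is 140 milligrams per decilitre"
print(to_mmol_per_l(140, "mg/dL"))  # -> 7.8
```

The same stored-relationship pattern extends to other conversions (e.g. a language lookup for French-to-English health terms) by adding further mappings.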
[0073] In an example embodiment, the network disease management module 120 comprises storage, such as a computer-readable memory or a database. In an example embodiment, the network disease management module comprises a memory storing statements and instructions for execution by a processor to perform natural language processing and provide functionality such as artificial intelligence relating to disease management, for example to implement features as described herein. In another embodiment, the network disease management module 120 comprises one or more non-transitory computer-readable media storing statements and instructions for execution by a processor to provide functionality associated with the voice service application 122.
[0074] In an example implementation, the voice service application 122 augments the work of a health professional and improves patient outcomes, by acting as an independent third party coach, which can also help to avoid parent-child conflict when the patient is a child. In an example embodiment, the voice service application 122 gathers information in real time from different devices or inputs.
[0075] In an example embodiment in which the voice service platform 140 is Amazon Alexa™, the voice service application 122 comprises statements and instructions which, when executed by a processor, provide an Alexa Skill. An Alexa Skill is a capability or set of functionalities for the Alexa cloud-based voice service. In an implementation, an Alexa Skill is a third-party-developed voice experience, or "voice app", that adds to the capabilities of an Alexa-enabled device, such as an Amazon Echo™. All Alexa Skills run in the cloud, in contrast to an app for a smartphone or tablet, which typically runs on the device itself.
[0076] As shown in Figure 1, the mobile disease management module 130 is in communication with the network disease management module 120 via the voice service platform 140. In an embodiment, the mobile disease management module 130 comprises statements and instructions which, when executed by a processor, provide a mobile application configured to run on a mobile device, such as a smartphone or a tablet, which includes a microphone and a transceiver. The mobile disease management module 130 is also configured to receive health data from one or more connected health devices 134, such as for example a glucometer, a weight scale, or a wearable device such as an activity tracker or smart watch. In an example embodiment, the one or more connected health devices 134 are selected from the group consisting of a fitness tracker, a weight scale, a glucometer, a spirometer, and a disease-specific data collection apparatus.
[0077] In an embodiment, the connected health devices 134 communicate with the mobile disease management module 130 using one or more communication means. In an example, the connected health care devices 134 communicate with the mobile disease management module 130 using a wireless communication protocol, such as Bluetooth™, Wi-Fi, NFC (Near-Field Communication) or the like. In an example, the connected health care devices 134 communicate with the mobile disease management module 130 using hardware configured to communicate using a particular protocol or technology, such as a transceiver or transmitter/receiver for wireless or near-field communication. In an embodiment, the connected health devices 134 communicate with the mobile disease management module 130 using hardware for wired communication, such as a data cable, for example using the Universal Serial Bus (USB) standard, such as USB 1.x, 2.x, or 3.x, or Ethernet, or the like, and wherein the connected health care devices 134 and the mobile disease management module 130 are provided with suitable connectors/receptacles/sockets, such as those compatible with USB type A, B, C and variations thereon, or Ethernet ports, or the like. In an embodiment, the connected health devices 134 provide third generation data, which is the data of life, such as: geolocation, activity level, sleep data, food intake data, and blood glucose level.
[0078] In an embodiment as shown at 212 in Figure 2, the mobile application, or mobile disease management module 130, receives gathered health data from one or more connected health devices. In an example embodiment, the mobile disease management module 130 receives personal or patient-specific health data selected from the group consisting of: number of steps walked, heart rate, quality of sleep, steps climbed, and other personal metrics involved in fitness. In another embodiment, the mobile application receives disease-specific health data, such as glucometer data, spirometer data, etc.
[0079] As shown at 214 in Figure 2, in an embodiment the mobile disease management application 130 is also configured to receive, from the network disease management module 120, disease-related patient health data associated with the context-sensitive voice feedback. Referring to the example discussed earlier, the detailed listing of the 12-year-old boy's blood glucose readings for the last day or two is an example of disease-related patient health data associated with the context-sensitive voice feedback, where the voice feedback provided basic details of the boy's blood glucose readings and targets. In an example embodiment, the mobile disease management application 130 is in direct communication with the network disease management module 120, without having to use the voice service platform 140, for example to receive the disease-related patient health data associated with the context-sensitive voice feedback, or to provide data to the network disease management module 120.
[0080] While some embodiments of the present disclosure take advantage of a smart speaker 110 to receive a voice command, or voice data associated with the voice command, at the network disease management module 120, in other embodiments a smart speaker 110 is not required to receive a voice command, or voice data associated with a voice command. For example, a microphone associated with a mobile device running the mobile disease management module 130 can act as a replacement for the smart speaker. For example, in an embodiment, the mobile disease management module 130 receives, via a microphone on a mobile device on which the mobile disease management module is provided, a voice command from a patient 102, or voice data associated with the voice command. In an implementation, the voice command is output by the mobile disease management module 130 and interpreted by the voice service platform 140 in a manner similar to that described above, but without needing a smart speaker 110. This enables voice interaction with the system when the patient is not at home or near their own smart speaker, for example using a "push-to-talk" type of functionality enabled by the mobile disease management module 130, or an "always listening" implementation that uses a wake word. This makes it easier for a child or young adult to easily "enter" patient data simply by talking to either the smart speaker 110, or to the mobile device running the mobile disease management module 130 when the smart speaker 110 is not available. In both cases, the system according to an embodiment of the present disclosure provides the voice command to the network disease management system 120, for example via the voice service platform 140, from either the smart speaker 110 or the mobile disease management module 130.
[0081] Providing an architecture with a common back-end database, such as the disease management database 124, enables the voice service application 122 in the network disease management module 120 to interact with the smart speaker 110 and with the mobile disease management module 130, using a common set of data, which provides an advantage over other known approaches that may attempt to "add on" a voice application, such as an Alexa skill, to interact with a mobile app. For example, in contrast to embodiments of the present disclosure, in non-integrated approaches, if data is provided via voice, the related mobile app may not become aware of the new data until, for example, the app is closed down and opened later, and a clinician may not be aware of the new data until they log in the next day, which could adversely affect patient care. Additionally, in such non-integrated approaches having siloed data without a common back-end, there are challenges of favoring one system over the other when APIs have to try to share data without a common back-end.
[0082] In an example implementation, the common back-end disease management database 124 enables the system to be both voice-first and multi-modal, such that all functionality capable of being performed using a first mode of voice communication via the smart speaker 110 and using the network disease management module 120 can also be performed using a second mode of mobile communication via the companion mobile disease management module 130. For example, in response to a user asking, via voice communication using the smart speaker 110, "Can type 2 diabetes be cured?", the system can provide an audio or voice answer via the network disease management module 120 through the smart speaker 110, and also provide the text of the same answer verbatim to the mobile disease management module 130. In another embodiment, the mobile disease management module 130 can enhance the voice experience and provide complementary functionality. In the example above, the text provided to the mobile disease management module 130 can include the verbatim text of the answer provided via voice communication, plus additional media content which can include links to other references, a short video, and other content.
[0083] This multi-modal aspect provides an advantage of using a patient's preferred medium, but also enabling a user to start a task on one device and complete the task on another device, or providing an enhanced experience via a different medium. This provides an improvement over known non-integrated approaches that lack a common back-end database and may attempt to patch together data from voice and mobile applications that are not designed to integrate seamlessly with one another with respect to updating underlying data. For example, a user can generally prefer using a first mode of voice interaction, but a system according to an embodiment of the present disclosure is configured to provide additional reference material (image, video) via a second mode of mobile media interaction to supplement the content provided by the first mode of voice feedback. In an embodiment, the multi-modal functionality is enabled by a single common backend, for example including the disease management database 124, for each of the front end components, such as the mobile disease management module 130 and the voice service application 122.
[0084] In an embodiment, information is transmitted so as to appear at all modalities at substantially the same time, for example via a voice communication mode and a mobile communication mode. In an example embodiment, which can be described as a reactive system, the disease management database 124 is event-triggered, so for example a glucose reading submitted by voice to the database 124 triggers a message to any system component or service that manages glucose readings to then be aware of the event and take an action on the event. In an embodiment, in response to receipt or entry of new data, the system reacts and can act on the reactions, as opposed to a known system that has to "pull" the information. According to an example embodiment of the present disclosure, the system components that are interested in certain types of readings can "subscribe" to certain data or events, in response to a trigger associated with a type of data or event.
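The event-triggered "subscribe" pattern described above can be sketched with a minimal publish/subscribe bus. All class, method, and event names here are illustrative assumptions; the disclosure does not specify an implementation:

```python
# Minimal sketch of the reactive, event-triggered pattern: components
# subscribe to reading types, and a new voice-submitted reading
# triggers every subscriber immediately, rather than each component
# "pulling" for updates. Names are illustrative.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
log = []
# A glucose-tracking service and the mobile module both subscribe.
bus.subscribe("glucose_reading", lambda r: log.append(("tracker", r)))
bus.subscribe("glucose_reading", lambda r: log.append(("mobile_ui", r)))

# A reading submitted by voice triggers both subscribers at once.
bus.publish("glucose_reading", {"mmol_per_l": 7.8})
print(len(log))  # -> 2
```

This push-style propagation is what lets a voice-entered reading appear in the mobile module, and to the clinician, without waiting for an app restart or a next-day login.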
[0085] In an example embodiment, the multi-modal or omni-channel aspect can be implemented in a voice-first approach. Consider a situation in which a patient is notified on a mobile device and responds to the notification through a smart speaker. For example, the system sends a notification to the patient via the mobile disease management module 130 as a reminder to take their medication, such as Metformin to lower blood sugar. In association with the mobile notification, the patient can ask the smart speaker to confirm how many pills to take, and then provide a voice confirmation via the smart speaker that they have taken the medication. Even though the notification came via the mobile disease management module 130, since the system uses a common disease management database 124, the system can associate the voice confirmation of the medication being taken with the text notification provided via the mobile disease management module 130, update the database 124 based on the medication having been taken, and clear the notification so that no further reminders are provided.
[0086] In an embodiment, the mobile disease management module 130 causes display of a graphical user interface via which patient data can be entered manually, by entering text or choosing from drop-down menus, selecting radio buttons, check boxes, or any other suitable input method. In an example embodiment, the mobile disease management module 130 includes or provides the graphical user interface. Advantageously, according to an embodiment of the present disclosure, the network disease management module 120 collects all data, and is responsible for collecting all data, regardless of how the information was entered, or whether it was originally received via the voice service platform 140, or via the mobile disease management module 130.
[0087] In an example embodiment, the disease management database 124 is provided at or in association with the network disease management module 120 and makes data stored in the database available to the patient 102. The data stored in the disease management database 124 can be provided to the patient 102 via voice feedback at the smart speaker 110, voice feedback at a speaker of a mobile device running the mobile disease management module 130, or via visual, auditory, tactile or other feedback at the mobile disease management module 130. In an example embodiment, when a notification is provided to the patient 102 via one means, for example via the smart speaker 110, the system ensures that a duplicate notification is not provided by a different means, for example by clearing or deleting a notification on one mode of interaction or communication, in response to a notification having been read or acknowledged via another mode of interaction or communication.
[0088] In an example embodiment, as shown in Figure 1, the system further comprises a secure patient portal 160, in communication with the network disease management module 120, which is configured to provide authorization-based access to patient data. For example, a first limited level of authorization can be provided to the patient or a relative 162, and a second more comprehensive level of authorization can be provided to a clinician 164. When the patient is a child, the relative can be a parent or guardian, which helps the parent or guardian keep an eye on the child's or youth's progress with disease management. When the patient is an elderly adult, similar access could be provided to a specified family caregiver such as a son or daughter.
[0089] In an example embodiment, the disease management database 124 is in communication with the secure patient portal 160 in Figure 1, enabling access, either directly or via the network disease management module 120, to a portal used by a clinician, or a parent/relative/guardian subscribed to a portal. The secure portal 160 enables access not just to a single user, but to any authorized user who is entitled to see that data.
[0090] In an example embodiment, the network disease management module 120 comprises a content management system (CMS) 126 that supports the development of omni-channel patient questionnaires. The omni-channel patient questionnaires can be answered by a patient 102 via a number of channels or modes, such as via a voice mode using smart speaker 110, or a mobile mode at a mobile phone using the mobile disease management module 130, or via a desktop PC or laptop, via the secure patient portal 160. In an embodiment, a notification of the fact that the patient has responded to the questionnaire, and optionally a summary of results, is provided back to the clinician via the secure patient portal 160. There are many standardized patient health questionnaires, such as PHQ-9 to screen for depression, that are ideally delivered through a platform such as a system 100 of an embodiment of the present disclosure. The content management system 126 enables a non-developer to author/translate these questionnaires into a format suitable for delivery across all mediums or modes (e.g. voice, mobile, and desktop). In an example embodiment, the CMS 126 stores the format of the questionnaire, and the user interface for creating the questionnaire is delivered through the secure patient portal 160.
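One way to picture a channel-neutral questionnaire format, as the CMS description above suggests, is a single stored definition rendered differently per delivery mode. The data layout and renderer functions below are assumptions for illustration; the question shown is the first item of the standard PHQ-9 screen:

```python
# Illustrative sketch: one questionnaire definition, multiple
# channel-specific renderings (voice vs. mobile). The schema and
# renderer names are assumptions, not the patent's own format.
questionnaire = {
    "id": "phq-9",
    "questions": [{
        "text": "Little interest or pleasure in doing things?",
        "choices": ["Not at all", "Several days",
                    "More than half the days", "Nearly every day"],
    }],
}

def render_voice(question):
    """Render a question as a spoken prompt listing the choices."""
    choices = ", ".join(question["choices"])
    return f'{question["text"]} Say one of: {choices}.'

def render_mobile(question):
    """Render the same question as a radio-button widget description."""
    return {"label": question["text"], "widget": "radio",
            "options": question["choices"]}

first = questionnaire["questions"][0]
print(render_voice(first))
print(render_mobile(first)["widget"])  # -> radio
```

Because both renderings derive from the same stored definition, an answer captured on any channel maps back to the same question identifier in the common back-end.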
[0091] In an example embodiment, the network disease management module 120 and the secure patient portal 160 cooperate to triage patients according to risk level. Based on the amount of data being collected and an understanding of the targets and goals set by the clinician, the system 100, and in particular the network disease management module 120, is configured to score the patients according to compliance and risk. As a result, when presenting a list of patients to the clinician, the network disease management module 120 is configured in one embodiment to cause the display of a list of all patients, and in another embodiment to cause the display of a prioritized list of patients according to patients at greatest risk. In an example embodiment, when the list of all patients is displayed, the system causes display of a list display selector, for example a button labelled "At Risk", permitting the clinician to easily display the higher risk patients. In another example embodiment, the list display selector comprises one or more of buttons, indicators, a toggle switch, or other visual means enabling the clinician to switch between an "all patients" view and an "at risk patients" view.
[0092] In an embodiment, generating and causing the display of the prioritized list of patients according to patients at greatest risk is performed based on one or more of: non-compliance to goals, for example not taking medications or readings as required; and readings that fall outside of set targets, for example blood glucose readings that are too high or too low too often. In an example embodiment, the list of all patients generated for display comprises: patient name; measurement and target data; and a risk indicator. For example, consider an entry for a patient comprising the following information: "(!) Janiya Nicolas, 8 of 12 Blood sugar readings out of range (8 low, 4 normal)." In this example, the patient name is "Janiya Nicolas", the measurement and target data is "8 of 12 Blood sugar readings out of range (8 low, 4 normal)" and the risk indicator is "(!)". In another embodiment, if the patient is not higher risk, then the patient entry has no risk indicator, or the risk indicator field is blank, or otherwise indicates a normal state.
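The risk-prioritized listing described above can be sketched as a simple scoring-and-sorting routine. The threshold, score formula, and field names are assumptions; the sample readings are constructed to match the "8 of 12 out of range (8 low, 4 normal)" example:

```python
# Hedged sketch of risk-based patient triage: patients with a higher
# fraction of out-of-range readings sort first and receive a "(!)"
# indicator. The 0.5 threshold and data layout are assumptions.
def risk_score(patient):
    """Fraction of readings outside the clinician-set target range."""
    readings = patient["readings"]
    out_of_range = sum(1 for r in readings
                       if not patient["low"] <= r <= patient["high"])
    return out_of_range / len(readings) if readings else 0.0

def format_entry(patient):
    """One display line: risk indicator (if any) plus patient name."""
    flag = "(!) " if risk_score(patient) > 0.5 else ""
    return f'{flag}{patient["name"]}'

patients = [
    {"name": "Janiya Nicolas", "low": 4.0, "high": 10.0,  # 8 low, 4 normal
     "readings": [3.1, 3.4, 3.2, 3.0, 3.3, 3.5, 3.6, 3.8,
                  5.0, 5.5, 6.0, 7.0]},
    {"name": "Alex Doe", "low": 4.0, "high": 10.0,        # all in range
     "readings": [5.0, 6.1, 7.2]},
]

# Highest-risk patients first, as in the prioritized "at risk" view.
for p in sorted(patients, key=risk_score, reverse=True):
    print(format_entry(p))
```

A clinician-facing "At Risk" toggle would simply filter this same sorted list to entries whose score exceeds the threshold.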
[0093] In a further embodiment, in conjunction with the secure patient portal 160, the system comprises a secure messaging module, enabling a user to securely interact with a clinician or other caregiver or concerned individual without having to share either patient or caregiver personal contact information. The secure messaging module enables clinicians to provide direct interaction with patients, without having to provide a clinician's direct contact information, so that, for example, the clinician doesn't have to worry about a patient texting unnecessarily. In another embodiment, the system comprises a voice messaging module, which enables a user to use voice to send a message. While some known mobile voice assistants allow a user to send a message using a native application, such functionality is not available for third party applications. (For example, a user can use Siri™ on an iPhone™ to send a message or an email via the native messaging applications, but a user cannot use Siri™ to send a message within the Uber™ app.) In an example embodiment, an improvement is provided that enables a user to instruct the voice service application 122 via a voice command through a smart speaker 110 to send their clinician a message, by way of the secure patient portal 160. In an example implementation, the system parses the voice command in a manner similar to the approach described in relation to Figure 1 and Figure 2, for example to provide information to the recipient on the topic or relative urgency of the message.
[0094] In another example embodiment, the secure patient portal 160 provides a social component, whereby the patient 102 is provided with social connections to assist with disease management. For example, children or youth with a disease can be disadvantaged, and often carry a stigma, for example around Type 2 diabetes. In an embodiment, the secure patient portal 160 provides the ability to pair, through a secure platform, a newly diagnosed patient with a mentor or natural leader who is managing their care well, and who may have a similar diagnosis and/or have worked through a similar diagnosis earlier on in life. In an example embodiment, a parent and clinician can approve a "match" and be granted permission to observe the interaction.
[0095] As shown in Figure 1, in an embodiment the network disease management module 120 comprises an electronic medical record (EMR) interface 128 configured to interoperate with one or more EMR systems to access or update EMR data 170. In an example embodiment, the network disease management module 120 securely exchanges patient data via the EMR interface 128 with the EMR system(s) via a standards-based FHIR/HL7 (Fast Healthcare Interoperability Resources/Health Level Seven International) interface. This avoids data silos with providers (hospitals) by allowing vital sign readings obtained in the home using embodiments of the present disclosure to be populated in the hospitals' existing electronic medical record systems.
[0096] Figure 3 is a block diagram of components of a system for voice-enabled disease management according to an example embodiment of the present disclosure. In the example embodiment of Figure 3, the network disease management module 120 comprises: a disease parameter tracker 322, a pattern tracker 324, a prompt content generator 326, and an audio prompt generator 328. In an example embodiment, for example when communicating via the voice service platform 140, the audio prompt generator 328 is in communication with the voice service application 122. In another embodiment, the voice service application 122 comprises one or more of the disease parameter tracker 322, the pattern tracker 324, the prompt content generator 326, and the audio prompt generator 328. Figure 4 is a flowchart illustrating steps in a method for voice-enabled disease management according to an embodiment of the present disclosure, and related to Figure 3. [0097] The disease parameter tracker 322, as shown in Figure 3 and as illustrated in Figure 4 at 402, is configured to receive and store the gathered health data received from the one or more connected health devices 134, for example at the mobile disease management module 130. Referring back to Figure 1, in another example embodiment, gathered health data is provided directly from the connected health device 134 to the disease parameter tracker 322 or to the disease management database 124, for example using a Wi-Fi interface via an API call to a vendor's private cloud backend. For example, in an implementation, at the end of a patient going for a run, the patient's fitness tracker (connected health device) is configured to automatically upload running data to the network disease management module 120. Referring back to Figure 3, in an example embodiment relating to diabetes, the disease parameter tracker 322 comprises an activity/blood sugar level tracker.
In such an example embodiment, the one or more connected health devices are configured to receive gathered health data selected from the group consisting of: blood sugar level; exercise data; weight; amount of sleep; food intake; geolocation data; and time of day.
[0098] A pattern tracker 324 as shown in Figure 3 is in communication with the disease parameter tracker 322 and, as shown in Figure 4 at 404, is configured to generate one or more scores based on how far a patient's measured disease parameter is from a clinician-set target disease parameter for the patient. In an embodiment, the pattern tracker 324 performs pattern tracking based on analysis of disease parameter tracker data. To help perform the pattern tracking and analysis, the pattern tracker 324 also uses data received from the disease parameter tracker 322 as training data to train the pattern tracker to improve operation.
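The distance-from-target scoring at 404 can be illustrated with a small sketch. The patent does not specify a scoring formula, so the linear-decay function below is purely an assumption to make the idea concrete:

```python
# Illustrative sketch of scoring how far a measured disease parameter
# falls from a clinician-set target range. The formula (linear decay
# outside the range) is an assumption; the patent names no formula.
def target_score(measured: float, target_low: float,
                 target_high: float) -> float:
    """Return 1.0 when in range; decay toward 0 as the reading drifts."""
    if target_low <= measured <= target_high:
        return 1.0
    if measured < target_low:
        distance = target_low - measured
    else:
        distance = measured - target_high
    return max(0.0, 1.0 - distance / target_high)

# Glucose in mmol/L against a 4.0-7.0 pre-breakfast target:
print(target_score(5.5, 4.0, 7.0))   # -> 1.0 (within target)
print(target_score(10.5, 4.0, 7.0))  # -> 0.5 (well above target)
```

A stream of such scores over time gives the pattern tracker a numeric series on which day-of-week or sleep-related patterns can be detected.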
[0099] A prompt content generator 326 as shown in the embodiment of Figure 3 is in communication with the pattern tracker 324 and configured to, as shown in Figure 4 at 406, generate patient-specific content identifying one or more targeted disease parameters that require attention to improve the patient's health condition. In an example embodiment, generating the patient-specific content comprises developing time-based success scenarios. In an example embodiment, the prompt content generator 326 provides disease-specific business logic, for example based on a knowledge base or stored rules, to provide automated functionality of a virtual coach for disease management. In an example embodiment, the prompt content generator 326 is implemented as a generative artificial intelligence (AI) module either including or configured to communicate with disease-specific business logic or a disease-specific knowledge base. In an implementation, the prompt content generator 326 determines what data is normal or abnormal/outlier for a particular patient, and then correlates or finds correlations for abnormal data to provide associated feedback to improve the patient's health condition. In an example embodiment, the prompt content generator 326 comprises a lookup table.
[00100] Consider an example embodiment in which the pattern tracker determines, based on observed sleep data, that the patient 102 is not meeting sleep goals. The prompt content generator 326, based on an identification of abnormal or off-target sleep data relating to sleep goals, provides the following feedback to the patient: "Try to get to bed 1 hour earlier tonight and you'll find you'll have significantly more energy tomorrow." In response to a subsequent determination that the observed sleep data is improving, the prompt content generator can provide the following subsequent feedback: "Congratulations! How are you feeling this morning?"
[00101] In another example embodiment, the prompt content generator 326, as it gets more intelligent, notices that every Tuesday a patient has low blood sugar, and can inquire directly with the patient when no correlation is found. If a correlation is found, for example observing on Monday night only 4 hours of sleep vs. a regular 8 hours, the prompt content generator 326 can provide a voice prompt/message to the patient that they seem to have low blood sugar when they haven't slept well. The prompt content generator 326 stores business logic to, based on observed patterns and on data captured in the pattern tracker 324, create data or content for use in providing or delivering audio prompts, as a human coach might convey during a coaching session, based on information known and stored in the prompt content generator 326.
[00102] In an example embodiment, in the case of diabetes management, the prompt content generator 326 stores and codifies, for example in a machine-readable memory, knowledge of the interactions between good glycemic control, exercise, nutrition and sleep to provide a virtual diabetes coaching functionality. In an example embodiment, knowledge gleaned from a clinical team and/or from health portal data is coded in the prompt content generator 326 to develop the virtual coaching functionality. In an embodiment, the pattern tracker 324 provides an activity success score to the prompt content generator 326, for example to determine which coaching prompt has the highest likelihood of improving health outcomes; in an example embodiment, the higher the score, the better the correlation. The score is used within the system to identify preferred content to deliver to the patient, and is not shown or provided to the patient. In an embodiment, the prompt content generator 326 provides an activity simulation to the pattern tracker 324, for example based on building time-based success scenarios constrained by a rule set. [00103] An audio prompt generator 328, as shown in Figure 3, is in communication with the prompt content generator 326. In an example embodiment, the audio prompt generator 328 receives activity recommendations from the prompt content generator 326. As illustrated in Figure 4 at 408, the audio prompt generator 328 is configured to convert data or content from the prompt content generator into actionable audio or voice prompts for delivery via the voice service platform 140. In an embodiment, the actionable voice prompt comprises an audio file generated by the audio prompt generator and provided as an output to a speaker or other audio output device, such as the smart speaker 110 or a speaker of a mobile device running the mobile disease management module 130.
In an example embodiment, the actionable voice prompts comprise a conversational-style prompt including: data for improvement of the patient's health condition; and a personalized motivational message associated with the data for improvement of the patient's health condition. In an example embodiment, the personalized motivational message comprises an encouragement selected based on the relationship of the data for improvement of the patient's health condition to an associated target.
[00104] In an example embodiment, the audio prompt generator 328: is provided as part of the network disease management module 120; comprises a very sophisticated skill; and does not simply replicate what can be done on a mobile app. In an embodiment, the audio prompt generator 328 is configured to generate an actionable voice prompt, where the content of the actionable voice prompt is generated in a manner such that the voice prompt encourages or generates a behavior change in the patient. In an embodiment, the prompt content generator 326 generates health data content, and the audio prompt generator 328 generates context for the health data content to yield the greatest chance of impact and success.
[00105] In an example embodiment, the audio prompt generator 328 provides an encouraging and congratulatory personalized motivational message when the patient is achieving their target. In another example embodiment, the audio prompt generator 328 provides a more empathetic personalized motivational message and draws on other resources, for example trying to connect the patient to their coach or mentor. How the audio prompt generator 328 responds to the patient depends on how the patient is doing, which is determined based on the received voice command and on health target data established by the patient's clinician, and optionally additionally based on the health portal data. In an embodiment, the audio prompt generator 328 incorporates empathy and responds to the patient in a unique way based on context.

[00106] In an example embodiment, the manner in which the audio prompt generator 328 incorporates empathy and responds to the patient is tailored to the age of the patient, as different demographics are motivated in different ways. In an example implementation, the audio prompt generator 328 is configured to incorporate empathy and respond to the patient according to a first interaction framework when the patient is a child, according to a second interaction framework when the patient is an adult, and according to a third interaction framework when the patient is a senior.
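The three-framework, age-tailored selection described above can be illustrated with a trivial dispatch; the age boundaries below are illustrative assumptions and not specified in the disclosure:

```python
# Hypothetical sketch of the age-tailored empathy selection: map the
# patient's age to one of the three interaction frameworks described above.
def select_framework(age):
    """Map patient age to one of three interaction frameworks."""
    if age < 18:
        return "child"    # first framework, e.g. playful, gamified encouragement
    if age < 65:
        return "adult"    # second framework, e.g. goal- and data-oriented tone
    return "senior"       # third framework, e.g. slower pace, more reassurance
```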
[00107] In an example implementation, the voice service application 122 processes patient queries and/or responses and determines a patient context in the absence of asking the patient context-related questions. For example, in an embodiment, the voice service application 122 is configured to provide voice samples of patient queries and/or responses to a voice analytics engine and to receive, from the voice analytics engine, data on patient state of mind and/or other emotional health parameters, based only on analysis of the patient's voice by the voice analytics engine. The operation of the voice analytics engine, or speech analytics software, is outside the scope of this disclosure and related solutions are known to one of ordinary skill in the art.
[00108] In another example embodiment, a mobile prompt generator 329 is provided as part of the network disease management module 120 and configured to generate an actionable mobile prompt, for example a text, audio or other media prompt. The mobile prompt is delivered to the patient via the mobile disease management module 130 at a mobile device to encourage or generate a behavior change in the patient, in a manner similar to the audio prompt generator 328 as described above. A detailed discussion of the operation of the mobile prompt generator 329 is omitted for the sake of brevity, with the implementation details being apparent to one of ordinary skill in the art based on the earlier detailed description herein of the functionality of the similarly functioning audio prompt generator 328.
[00109] In an example embodiment, the audio prompt generator 328 and/or the optional mobile prompt generator 329 is/are configured to enable behavior change in a user or patient. There is a move towards digital therapeutics, as evidenced by the Food and Drug Administration (FDA) in the United States having approved a mobile app for smoking cessation. Embodiments of the present disclosure are ideally suited to enable the science behind behavior modification. It is accepted globally that changes in lifestyle will improve the condition of a patient suffering from a disease, and there are even some who believe that a patient could be medication-free through behavior change alone. In an example embodiment, the disease parameter tracker 332, the pattern tracker 324, prompt content generator 326 and audio prompt generator 328 cooperate to provide an understanding of a user or patient to the point of providing meaningful incentives, for example including gamification encouragement, to drive successful behavior change. The ability to deliver some of the behavior change functionality via voice has been found to be more effective than only delivering via mobile communication.
[00110] According to an example embodiment, behavior change prompts are delivered over a plurality of modes or modalities, and can involve one or more of a clinician, a parent/guardian/mentor, or a patient population relevant to the patient. All of these improve the chances of impacting patient behavior. In an example embodiment, the behavior change prompt comprises a reminder, such as an appointment reminder to attend an appointment, or a testing reminder, for example a reminder to take a blood pressure measurement, or measure blood sugar levels. In another example embodiment, the behavior change prompt comprises a treatment instruction, such as an instruction to take a medication or group of medications at a certain time, or around a certain event, e.g. with an upcoming meal.
[00111] Patient engagement, adherence, and compliance are common challenges across the healthcare sector. In another embodiment, the system 100 of the present disclosure is integrated into a broader healthcare system to drive new opportunities for patient engagement and compliance, providing enhanced behavior change functionality. In an example embodiment, the system 100 of an embodiment of the present disclosure is in communication with a loyalty rewards system such that stored health data for a user, based on data collected from the disease management database and/or from the one or more connected health devices 134, is compared to a target; loyalty rewards are awarded in response to meeting or exceeding a target. In an example implementation, if 90% of a user's readings are on target, then the patient is awarded loyalty points. The loyalty points can be health-related, for example providing points that can be used to offset costs of purchasing medication or other health-related products or services. In another implementation, the system is configured to award one or more types of loyalty points, and to perform any necessary point value conversion, to best encourage patient engagement and compliance; for example, for youth trying to meet diabetes-related targets, the loyalty points can relate to music purchases or mobile device app purchases, or can be direct accumulation of dollar amounts towards gift cards for general or specific purchases.
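The threshold-based award described above reduces to a simple comparison. A minimal sketch, assuming a 90% on-target threshold and an arbitrary point value (both illustrative, not values specified by the disclosure):

```python
# Hedged sketch of the loyalty-rewards integration: compare the fraction of
# a user's readings that met their target against a threshold and award
# points when the threshold is met or exceeded.
ON_TARGET_THRESHOLD = 0.90  # assumed: 90% of readings on target
POINTS_PER_PERIOD = 100     # assumed point value for the period

def award_loyalty_points(readings_on_target, total_readings):
    """Return loyalty points earned for the period (0 if below threshold)."""
    if total_readings == 0:
        return 0  # no readings recorded: nothing to reward
    fraction = readings_on_target / total_readings
    return POINTS_PER_PERIOD if fraction >= ON_TARGET_THRESHOLD else 0
```

The point value conversion mentioned in the paragraph (e.g. converting health points to music or gift card credit) would sit downstream of this check.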
[00112] In a further embodiment, the system of the present disclosure is integrated with a prescription fulfilment system and configured to request a prescription refill, for example through integration with a health-related loyalty rewards system or another type of direct integration. Many known health-related apps connected to a glucometer are siloed; the glucometer manufacturer has access to data, but there are complaints around not being able to share data with a clinician. Embodiments of the present disclosure integrate with a clinician or provider electronic medical records (EMR), for example using the EMR interface 128 as shown in Figure 1, which can integrate securely with payers and provider systems, pharmacy systems, etc. to deliver this benefit. Such a solution according to an embodiment of the present disclosure provides one or more of the following value enhancements: the ability to measure and track compliance on the system, in a better way than other systems; compliance tracking can be tied to behavior modification and is a type of incentive; incentives can be health-related, but other incentives are possible as well depending on partnerships; and if a provider or payer, such as a health insurer, can be confident that a user is doing what a clinician has instructed, and knows this will ultimately save costs for the insurer, savings can be given back to the user in the form of incentives, as many insurance companies have enough statistics to know how much non-compliance will cost them in the end.
[00113] Figure 5 is a block diagram of components of a system for voice-enabled disease management according to another example embodiment of the present disclosure. In contrast to the embodiment of Figure 3, which obtains and takes action related to gathered patient-specific health data from connected health devices, the embodiment in Figure 5 receives specific health queries/commands from a patient via a voice command, and provides answers. The embodiment of Figure 5 leverages health content that is available to digital health services via APIs, but which health content is almost exclusively designed to be presented through a desktop experience, with only some considering formatting for mobile, and none having considered how to make the content available via voice interface. An example embodiment associated with Figure 5 comprises real-time formatting or translation of a question into a query properly formatted for a third party portal, the result of which is then formatted into a suitable voice response, and optionally additionally delivered via other modalities, such as via text/mobile feedback.
[00114] The embodiment of Figure 5 describes fuzzy logic/pattern matching between free form questions and content available via one or more health information portals. In this example embodiment, the network disease management module 120 comprises: a question intent converter 522, a pre-processor 524, and a post-processor 526. In an example embodiment, for example when communicating via the voice service platform 140, one or more of the question intent converter 522 and the post-processor 526 is in communication with the voice service application 122. In another example embodiment, each of the question intent converter 522 and the post-processor 526 is in communication with the voice service application 122. In a further embodiment, the voice service application 122 comprises one or more of the question intent converter 522 and the post-processor 526. In an embodiment, the pre-processor 524 and the post-processor 526 interface directly with the health portal data 150 stored in one or more health information portals. Figure 6 is a flowchart illustrating steps in a method related to Figure 5 for voice-enabled disease management according to an embodiment of the present disclosure.
[00115] In an embodiment, the question intent converter 522 determines, or selects, the health information portal to be used as the source of data to answer a question in the received voice command, and the pre-processor 524 manages the interface to, and handles communication with, the selected health information portal. The question intent converter 522, as shown in Figure 5 and as illustrated in Figure 6 at 602, is configured to convert the patient's voice command to a patient inquiry including additional patient context data in a format compatible with the health portal data in the particular health information portal to be accessed. Additionally, the question intent converter 522 in Figure 5 is configured to identify that a patient's voice command is a general health query best answered using health data from a health information portal, rather than a patient-specific query as discussed in relation to Figure 3. Examples of patient-specific queries associated with Figure 3 include: "What are my targets?", "I just took a glucose reading: how am I doing?" Examples of general health queries associated with Figure 5 include: "What is the relationship between diabetes and exercise?" or "Can type 2 diabetes be cured?"
[00116] In an embodiment, the voice command comprises an explicit question, such as would normally conclude with a question mark when written, such as: "What can I eat?", and which is most often characterized by the speaker's voice rising in intonation at the end of the voice command. In another embodiment, the voice command comprises an implicit question embedded in a statement such as "I want to know what I can eat", and the question intent converter 522 comprises semantic processing means configured to identify components of a query within a voice command that presents as a statement. In an example embodiment, the question intent converter 522 comprises semantic processing means, or semantic processing functions, and is configured to determine, based on an analysis of the patient's voice command, that the voice command is a general health query best answered using health data from a health information portal.
[00117] In an example embodiment, the question intent converter 522 adds patient information known from a patient record, such as the patient's age and disease status, or other information such as current weight and other health data gathered from the one or more connected health devices. For example, the question intent converter 522 is configured to convert a patient query, or received voice command, of "I'm sick; what can I eat?" to the following reformatted question that is suitable for obtaining health data from a health information portal: "Youth aged 12 with type 2 diabetes and a temperature of 102 degrees Fahrenheit looking for what they can eat."
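The context-enrichment step above can be sketched as a simple string transformation. The record fields, helper name and output phrasing below are assumptions modeled on the single example in this paragraph, not a definitive implementation:

```python
# Hypothetical sketch of the question intent converter's reformatting step:
# enrich a spoken query with known patient-record fields before sending it
# to a health information portal.
def reformat_query(voice_command, patient_record):
    """Convert a raw voice command into a portal-ready, context-rich query."""
    parts = []
    age = patient_record.get("age")
    condition = patient_record.get("condition")
    temp_f = patient_record.get("temperature_f")
    if age is not None:
        parts.append("Youth aged %d" % age if age < 18 else "Adult aged %d" % age)
    if condition:
        parts.append("with %s" % condition)
    if temp_f is not None:
        parts.append("and a temperature of %d degrees Fahrenheit" % temp_f)
    # Strip the symptom statement ("I'm sick;") and keep the actual question.
    question = voice_command.split(";")[-1].strip().rstrip("?")
    parts.append("looking for %s." % question.lower().replace("what can i", "what they can"))
    return " ".join(parts)
```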
[00118] A pre-processor 524 is, as shown in Figure 5 and as illustrated in Figure 6 at 604, in communication with the question intent converter 522 and with the health portal data 150. The pre-processor considers the content of the question/intent, and matches the question to one of a plurality of health information portals based on the content of the question/intent. In an embodiment, the pre-processor 524 comprises different logic for portal communication based on which portal is to be communicated with. In an embodiment, the pre-processor 524 specifies a data format, or schema, for health portal data. The health portal data format can be as simple as Article Title and Article Content, where the content can include an abstract, full article content, and additional media. In more sophisticated portals, the health portal data format can include more specific content blocks such as Symptoms and Diagnosis. For example, the pre-processor 524 can provide the Health Navigator™ portal with 2 symptoms, and the pre-processor 524 will handle the expected dialogue/interaction with the Health Navigator™ health portal.
[00119] In a first example, consider if the question is symptom based, e.g. "Alexa, tell MyCoach I'm feeling dizzy and nauseous. What should I do?" In that scenario, the pre-processor 524 directs the question to the Health Navigator™ database/portal based on determination of a symptom-based query and handles the interaction/communication through to the recommendation of whether to treat at home, call a physician, or visit an emergency room.
[00120] In a second example, consider when the patient asks: "Alexa, ask MyCoach: Can diabetes be cured?" In this case, the pre-processor 524 directs the question to the Healthwise™ database based on analysis of the voice command content and obtains the required data from that database. The pre-processor 524 is configured to convert data based on a patient's voice command to a health portal data-formatted query to facilitate health portal data content lookup in a specific health information portal having a health portal data format. In an example embodiment, the pre-processor 524 crawls through the health portal data 150 to develop a schema for the supported content portal(s) and formats a lookup table in a way that facilitates accurate content lookup. In another embodiment which ensures use of up-to-date data, the pre-processor 524 uses pattern matching and machine learning to perform pre-processing and health portal data format matching/mapping in real time, for example by analyzing titles of articles and parsing the question to match the titles of articles.
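The title-matching routing just described can be illustrated with simple word overlap; this sketch stands in for the pattern matching/machine learning the embodiment contemplates, and the scoring rule and portal title lists are illustrative assumptions:

```python
# Hypothetical sketch of the pre-processor's portal-routing step: match a
# question against article titles from each supported portal and pick the
# portal whose titles best overlap the question's words.
def route_to_portal(question, portal_titles):
    """Return the portal whose article titles best overlap the question words."""
    q_words = set(question.lower().split())
    best_portal, best_score = None, 0
    for portal, titles in portal_titles.items():
        # Score each portal by its best-matching article title.
        score = max(len(q_words & set(t.lower().split())) for t in titles)
        if score > best_score:
            best_portal, best_score = portal, score
    return best_portal
```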
[00121] A post-processor 526 as shown in Figure 5 is in communication with the health portal data 150 and with the voice service application 122. The post-processor 526, as illustrated in Figure 6 at 606, is configured to reformat or modify the obtained health portal data prior to providing the context-sensitive voice feedback to the patient. In an example embodiment, if there are 3 pages of content available, the post-processor 526 determines only to provide or read a synopsis of the article as the context-sensitive voice feedback, and/or to ask whether to deliver the whole article to the patient, such as via the mobile disease management module. The post-processor 526 is configured to make determinations on how to deliver content, for example based on a comparison of the length of the content, the format of the content, or both, to stored thresholds for voice delivery to patients in general, or to a particular patient. For example, if the content includes multimedia content, the post-processor 526 can determine to provide or speak a subset of the content through the virtual assistant, and/or to provide some or all of the content to the patient in rich text/images/video on their mobile device via the mobile disease management module, for example dividing the content into smaller content portions when appropriate.
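The length-threshold decision above can be sketched as follows; the word-count limit and the returned structure are assumptions for illustration, not values specified by the disclosure:

```python
# Hedged sketch of the post-processor's delivery decision: read short content
# aloud in full, but for long content speak only a synopsis and offer the
# full article on the patient's mobile device.
VOICE_WORD_LIMIT = 150  # assumed maximum words comfortably spoken aloud

def plan_delivery(article_words, synopsis):
    """Decide what to speak and whether to offer the full article on mobile."""
    if len(article_words) <= VOICE_WORD_LIMIT:
        return {"speak": " ".join(article_words), "offer_mobile": False}
    return {
        "speak": synopsis + " Would you like the full article sent to your phone?",
        "offer_mobile": True,
    }
```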
[00122] Example: Diabetes Management
[00123] An example embodiment will now be described with respect to an example implementation for diabetes management. It is to be understood that other example embodiments provide similar functionality tailored to management of a particular disease, for example a lung disease such as chronic obstructive pulmonary disease (COPD), asthma, pulmonary fibrosis or cystic fibrosis.
[00124] Most currently available diabetes solutions are mobile blood glucose diaries, where the data is siloed and not readily available to clinicians. Such approaches also do not integrate with other patient data that could provide valuable insights when patients struggle to manage their care. An example embodiment of the present disclosure seeks to provide a better way to bring this all together that considers the needs of the patients first and delivers it to the platform of choice.
[00125] This example embodiment comprises a connected health solution for patients with Type 2 diabetes that connects a patient to their circle of care. A natural language interface enables a patient to inquire about their targets to increase compliance and comfort. Insights from data collected allow the system to encourage the patient to be successful and demonstrate empathy when the patient is struggling.

[00126] Some example interactions are provided below, with reference to elements shown in Figure 1, in which a patient 102 issues a voice command via a smart speaker 110, which is received via a voice service platform 140, which in this example is Amazon Alexa. The network disease management system 120 receives the voice command, or voice data associated with the voice command, from the patient and obtains stored health data from a disease management database 124 and/or a health information portal 150, to provide context-sensitive voice feedback to the patient based on the received voice command and on the stored health data. In this embodiment, the one or more connected health devices 134 comprise a blood glucometer.
[00127] - Patient Command 1: "Alexa, ask MyCoach what my blood glucose targets are."
[00128] - Answer 1 via smart speaker: "Your blood glucose target before breakfast is between 4 and 7 millimoles per liter and before dinner between 4 and 10 millimoles per liter. 90% of your readings are within target. Keep up the good work!"
[00129] The above answer demonstrates how the example embodiment parses the query, identifies stored blood glucose targets for the patient for two different scenarios, and can proactively provide feedback on recent performance with respect to the targets. The answer is supplemented by an encouragement associated with the feedback on the patient's recent performance with respect to the targets.
[00130] This example embodiment provides a holistic approach to patient care. In an example implementation, vital sign data is obtained from one or more connected health devices 134 and the patient is encouraged towards achieving their goals.
[00131] - Patient Command 2: "Alexa, tell MyCoach my weight is 188.8 pounds."
[00132] - Answer 2 via smart speaker: "Congratulations Sam! You are now within your target weight range. Keep up the good work!"
[00133] This simple interaction provides an example embodiment in which the answer provides an implicit confirmation that the weight reading has been recorded and implicitly provides high-level data on meeting the target, in this example without having to specifically recite the target itself.
[00134] - Patient Command 3: "Alexa, tell MyCoach that my blood glucose reading is 90 mg/dL after eating."
[00135] - Answer 3 via smart speaker: "Your blood glucose reading of 90 mg/dL has been recorded. This is within your target range, keep up the good work!"
[00136] The above answer demonstrates how patient data can easily and reliably be entered into the system. Immediate feedback and encouragement can also be provided regarding the spoken data itself, and a relationship between the spoken data and stored targets for the patient. This example provides a more explicit acknowledgment that the reading has been entered, compared to the previous example which provided a more implicit acknowledgment.
[00137] - Patient Command 4: "Alexa, ask MyCoach how I am doing."
[00138] - Answer 4 via smart speaker: "Since last month, you have lost 4 lbs and are halfway to your goal weight. Your blood glucose is within target 70% of the time. Talk to your clinician on how you can improve this."
[00139] The answer above shows how the system can be provided with sophisticated stored queries that can be initiated in response to an apparently simple command. In this case, asking "how am I doing?" resulted in the system querying the disease management database 124, and possibly the health portal data 150, and obtaining and providing information on weight loss/gain and its relationship to target weight, and on blood glucose levels relative to targets and how often they are within target. For different patient profiles, and for different diseases or stages of the disease, the system can store different disease-specific queries, or sets of queries, to be associated with the same general command.
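The mapping from a general command to disease-specific stored queries can be sketched as a dispatch table. The query names and diseases below are illustrative assumptions showing the mechanism, not queries defined by the disclosure:

```python
# Hedged sketch of the stored-query mechanism: a simple general command
# ("how am I doing?") expands into a set of disease-specific queries to run
# against the disease management database.
STORED_QUERIES = {
    ("diabetes", "how am i doing"): [
        "weight_change_since_last_month",
        "relationship_to_target_weight",
        "glucose_within_target_percentage",
    ],
    ("copd", "how am i doing"): [
        "spirometry_trend",
        "rescue_inhaler_usage",
    ],
}

def expand_command(disease, command):
    """Return the disease-specific query set stored for a general command."""
    return STORED_QUERIES.get((disease, command.lower().rstrip("?")), [])
```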
[00140] The example embodiment leverages a wealth of patient information from a health information portal, such as Healthwise™, that is transformed through conversational exchange.
[00141] - Patient Command 5: "Alexa, tell MyCoach I am sick today. What can I eat?"
[00142] - Answer 5 via smart speaker: "I am sorry to hear that Sam, you can eat or drink 30-50 grams of carbohydrates every 3-4 hours. That will keep your body nourished and prevent your blood sugar from dropping too low. If you are having trouble eating, try bland foods. Would you like some suggestions?"
[00143] - Patient Command 6 (in response to Answer 5): "Yes please."
[00144] - Answer 6 via smart speaker: "Try one of the following; each equals one carbohydrate choice: one cup of clear soup, half a cup of regular soft drink, half a popsicle, half a cup of unsweetened apple sauce."
[00145] The interactions above demonstrate the conversational yet thorough and useful interactions enabled by the example embodiment. For example, Answer 5 begins with an empathetic response, which encourages further interaction with the system and increases the likelihood of compliance with instructions to meet targets. The system transparently obtains health data from a selected health information portal based on the received question/command, and also based on patient-specific data such as knowledge of one or more of the patient's age, weight, blood glucose targets, etc. from the disease management database 124. The system also proactively prompts for follow-up, which again encourages interaction and increases the likelihood of the patient being able to take the required action to address the question/command.
[00146] - Patient Command 7: "Alexa, tell MyCoach my blood glucose reading is 15 millimoles per liter."
[00147] - Answer 7 via smart speaker: "Good morning Sam. Your blood glucose of 15 millimoles per liter is above your target. You didn't sleep well last night and you have not been able to exercise as much this week. Sleep and exercise are both important to help you maintain proper sugar levels. Would you like to share anything else about this?"
[00148] - Patient Command 8 (in response to Answer 7): "Yes. I am in the middle of exams."
[00149] - Answer 8 via smart speaker: "Thank you. I have noted it and will provide this feedback to your clinician. Have a great day Sam!"
[00150] The interactions above in commands/answers 7 and 8 demonstrate some of the power provided by the example embodiment. While command 7 is similar to command 3, the system according to the example embodiment not only recognizes that the reading is above target, but is also able to make use of exercise and sleep data gathered from connected health devices 134, and not entered via voice command or the current command, to provide additional insight into why the blood glucose level is above target. Integration with wearable devices provides deeper insights when a patient is struggling, and provides context when vital sign readings are outside of desired targets. Additionally, the system is configured to request additional information, to provide additional data points to the system, which can then be incorporated by a clinician into modified goals, or can simply help to provide an explanation for a vital sign reading, such as a blood glucose reading, that may otherwise have been worrisome.
[00151] The above example embodiment addresses one or more of the challenges associated with patient engagement, adherence and compliance within the context of diabetes care management. Other example embodiments of the present disclosure are tailored to support patients with other diseases to drive improved outcomes. For instance, in an example embodiment the one or more connected health devices 134 comprise a Bluetooth-enabled spirometer. In such an example embodiment, the present disclosure provides a method and system for voice-enabled management of a lung disease such as chronic obstructive pulmonary disease (COPD), asthma, pulmonary fibrosis and cystic fibrosis. Such example embodiments can encourage children with cystic fibrosis to exercise and complete specific breathing exercises on their spirometer, which would have a positive impact in slowing the rate of decline in lung function, helping to clear mucus from the lungs, allowing for easier breathing, and creating more reserve for the whole body to rely on during periods of lung infection.
[00152] Figure 7 is a block diagram of a system for voice-enabled disease management according to another embodiment of the present disclosure. In Figure 7, a portion of the system 100 of Figure 1 is shown with a different embodiment of a smart speaker 710. In the embodiment of Figure 7, the smart speaker 710 is operable in two modes: a wake word detection mode; and a non-verbal wake condition detection mode. The smart speaker 710 comprises a wake word detector 712, or wake word detection module, configured to detect, via a microphone 714, a wake word (such as "Alexa", "Hey Siri", "OK, Google", or a user-defined wake word) and to begin processing commands following the wake word in a wake word detection mode.
[00153] In the embodiment of Figure 7, the smart speaker 710 further comprises a non-verbal wake condition detector 716, or non-verbal wake word detection module, and a motion sensor or camera 718. The non-verbal wake condition detector 716, which can be a presence/user detection module, is configured in an embodiment to detect, via the motion sensor or camera 718, a non-verbal wake condition, such as an action or gesture, and to activate operation of the smart speaker in response to the detected non-verbal wake condition and in the absence of a verbal wake word. In an example embodiment, the detected non-verbal wake condition comprises detection of presence of a user, for example within a detection distance from the smart speaker. In another example embodiment, the detected non-verbal wake condition comprises detection of a non-authenticated wake action or gesture, such as a hand wave or the presence of a face.
[00154] In a further example embodiment, the detected non-verbal wake condition comprises detection of an authenticated wake action, such as face detection to authorize a specific user. Optionally, in association with the detection of an authenticated wake action, the system can advantageously provide customized feedback, such as based on stored data relating to the authenticated user. For example, in an embodiment, the system provides customized feedback based on a detected condition (e.g. a detected emotional condition, a detected physical or physiological condition) of the authenticated user, such as based on a detected facial expression. In an example embodiment, the system adjusts the content of the feedback, the tone of the feedback, or both, in response to the detected condition.
[00155] In an example embodiment, the system of Figure 7 is configured to proactively provide a voice prompt to the user in response to a detected non-verbal wake condition and in the absence of a verbal wake word. In an example embodiment, the system is configured to provide a voice prompt such as a pre-recorded audio file (e.g. an MP3 that says "Good morning, [user name]"). In an example embodiment, in response to the detected non-verbal wake condition, the system is configured to provide prompts such as asking a user about appointment or goal reminders, reminding about taking medication before an appointment, and proactively asking a user to confirm whether a medication has been taken so that the system can check it off a user's medication list.
[00156] Embodiments of the present disclosure associated with Figure 7 provide not just the functionality of a non-verbal wake condition, but also a mechanism to enable a third party voice services-enabled voice device (e.g. smart speaker) to interact with a user on its own schedule, without a user having to do anything other than be present. In an example implementation, the system determines whether to provide a proactive verbal prompt in response to both detection of a non-verbal wake condition and an appropriate time of day. For example, if an Alexa skill is triggered at 3am and a user goes for a midnight snack, the system is configured not to send a voice prompt right away, and to wait until a combination of both user presence and an appropriate time of day. In an example embodiment, the user presence and time of day functionality is provided at the smart speaker itself, or in the network/cloud. Existing approaches have no way for a smart speaker to initiate interaction with the user; it is always the user who initiates interaction with the known device. Embodiments of the present disclosure provide an improvement over known approaches by providing a smart speaker 710 with a motion sensor or camera 718 configured to enable the voice service application to initiate interaction, for example in conjunction with the smart speaker 710, with a user in response to detection of a non-verbal wake condition, for example using the non-verbal wake condition detector 716.
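The presence-plus-time-of-day gating described above reduces to a conjunction of two conditions. A minimal sketch, in which the quiet-hours window is an illustrative assumption rather than a value specified by the disclosure:

```python
# Hypothetical sketch of the proactive-prompt gating: a prompt is sent only
# when BOTH a non-verbal wake condition (e.g. presence detected) and an
# appropriate time of day hold, so a 3 a.m. trip to the kitchen does not
# trigger a voice prompt.
QUIET_START, QUIET_END = 22, 7  # assumed quiet hours: 10 pm to 7 am

def should_prompt(presence_detected, hour_of_day):
    """Return True only for presence during appropriate (non-quiet) hours."""
    if not presence_detected:
        return False
    in_quiet_hours = hour_of_day >= QUIET_START or hour_of_day < QUIET_END
    return not in_quiet_hours
```

As the paragraph notes, this check could run either on the smart speaker itself or in the network/cloud.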
[00157] Embodiments of the present disclosure provide a solution that integrates a plurality of components that are aware of each other and contribute to enhanced disease management. Known wearable devices currently show some information on an app, but embodiments of the present disclosure provide an improvement that supports disease management for a chronic illness that incorporates being able to perform one, many or all of the following: pull in data from an existing health portal (e.g. Healthwise™); integrate with wearables; be set up by a clinician who sets up targets and a care plan; provide portal access; and be voice-first. Embodiments of the present disclosure provide a voice-first cloud-based disease management system including a voice service application which is integrated with a mobile application to facilitate easy patient interaction and data entry, and can provide an empathetic "coach" via context-sensitive voice feedback.
[00158] An embodiment of the present disclosure is described as follows. A method and system are provided for voice-enabled disease management. The system includes a network disease management module having a voice service application configured to run on a network device to provide voice-based disease management services to a patient in a voice interaction mode. A mobile disease management module includes a mobile service application configured to run on a mobile device to provide graphical or text-based disease management services to the patient in a mobile interaction mode. A disease management database is configured to provide a common set of data accessible by the voice service application and the mobile service application such that the voice-based disease management services provided in the voice interaction mode are integrated with the visual or text-based disease management services provided in the mobile interaction mode. The system allows a patient to inquire about health targets and increase compliance and comfort.
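The common-data-set arrangement described above can be illustrated with an in-memory stand-in for the disease management database, written to by the voice interaction mode and read by the mobile interaction mode. All class, function, and field names here are hypothetical:

```python
class DiseaseManagementDB:
    """Stand-in for the shared disease management database: a single
    store read and written by both interaction modes."""

    def __init__(self):
        self._data = {}

    def record(self, patient_id: str, key: str, value):
        self._data.setdefault(patient_id, {})[key] = value

    def fetch(self, patient_id: str, key: str):
        return self._data.get(patient_id, {}).get(key)


db = DiseaseManagementDB()

def voice_log_reading(patient_id: str, value: float):
    """Voice interaction mode: a spoken reading is written to the common store."""
    db.record(patient_id, "blood_glucose", value)

def mobile_display_reading(patient_id: str):
    """Mobile interaction mode: the app reads the same common data set."""
    return db.fetch(patient_id, "blood_glucose")
```

Because both applications go through the same store, a value entered by voice is immediately available to the mobile app, which is the integration the embodiment describes.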
[00159] As described in sufficient detail above, embodiments of the present disclosure provide a non-abstract improvement in the functioning of a computer or to computer technology or a related technical field, including related systems and methods. Embodiments of the present disclosure provide specific details on how the system accomplishes a result that realizes an improvement in computer functionality, including providing integrated voice and mobile applications that both provide services based on a common set of data. This represents an improvement in the way a system with integrated components stores and retrieves data in a common memory in communication with a voice services application and with a mobile services application, and is a specific implementation of a solution to a problem in the computer and software arts. Embodiments of the present disclosure provide a non-conventional and non-generic arrangement of computer components to achieve a non-abstract improvement in computer technology.
[00160] In the preceding description, for purposes of explanation, numerous details are set forth in order to provide a thorough understanding of the embodiments. However, it will be apparent to one skilled in the art that these specific details are not required. In other instances, well-known electrical structures and circuits are shown in block diagram form in order not to obscure the understanding. For example, specific details are not provided as to whether the embodiments described herein are implemented as a software routine, hardware circuit, firmware, or a combination thereof.
[00161] Embodiments of the disclosure can be represented as a computer program product stored in a machine-readable medium (also referred to as a computer-readable medium, a processor-readable medium, or a computer usable medium having a computer-readable program code embodied therein). The machine-readable medium can be any suitable tangible, non-transitory medium, including magnetic, optical, or electrical storage medium including a diskette, compact disk read only memory (CD-ROM), memory device (volatile or non-volatile), or similar storage mechanism. The machine-readable medium can contain various sets of instructions, code sequences, configuration information, or other data, which, when executed, cause a processor to perform steps in a method according to an embodiment of the disclosure. Those of ordinary skill in the art will appreciate that other instructions and operations necessary to implement the described implementations can also be stored on the machine-readable medium. The instructions stored on the machine-readable medium can be executed by a processor or other suitable processing device, and can interface with circuitry to perform the described tasks.
[00162] The above-described embodiments are intended to be examples only. Alterations, modifications and variations can be effected to the particular embodiments by those of skill in the art without departing from the scope, which is defined solely by the claims appended hereto.

Claims

WHAT IS CLAIMED IS:
1. A system for voice-enabled disease management comprising:
a network disease management module comprising a voice service application configured to run on a network device to provide voice-based disease management services to a patient in a voice interaction mode; a mobile disease management module comprising a mobile service application configured to run on a mobile device to provide graphical or text-based disease management services to the patient in a mobile interaction mode, the mobile disease management module being in communication with the network disease management module; and
a disease management database in communication with the voice service application and the mobile service application, the disease management database configured to provide a common set of data accessible by the voice service application and the mobile service application such that the voice-based disease management services provided by the voice service application in the voice interaction mode are integrated with the visual or text-based disease management services provided by the mobile disease management application in the mobile interaction mode.
2. The system of claim 1 wherein the voice-based disease management services provided by the voice service application and the graphical or text-based disease management services provided by the mobile service application are both based on the common set of data provided by the disease management database.
3. The system of claim 1 wherein:
the voice service application is configured by a first non-transitory memory storing statements and instructions for execution by a first processor to: receive voice data associated with a voice command generated by a patient;
identify spoken patient health data in the received voice command; obtain stored health data related to the spoken patient health data; and generate voice feedback data for providing context-sensitive voice feedback to the patient based on the obtained stored health data and on the identified spoken patient health data, the context-sensitive voice feedback suggesting a course of action or requesting additional information based on the received voice command.
4. The system of claim 1 wherein:
the mobile service application is configured by a second non-transitory memory storing statements and instructions for execution by a second processor to: receive gathered health data from one or more connected health devices; and
provide the gathered health data to the network disease management module.
5. The system of claim 1 wherein:
the mobile service application is configured by a second non-transitory memory storing statements and instructions for execution by a second processor to: receive, from the network disease management module, disease-related patient health data associated with the context-sensitive voice feedback; and
cause display of the disease-related patient health data to the patient.
6. The system of claim 4 or claim 5 wherein the network disease management module is configured to:
obtain a first data set associated with the voice command, and provide the first data set to the disease management database, the voice command received at the network disease management module; and
obtain a second data set associated with the gathered health data, and provide the second data set to the disease management database, the gathered health data received at the mobile disease management module.
7. The system of claim 1 wherein the disease management database, the network disease management module, and the mobile disease management module cooperate to:
provide a first disease management function using the voice-based disease management service in the voice interaction mode; and
provide the same first disease management function using the mobile disease management service in the mobile interaction mode.
8. The system of claim 1 wherein the disease management database, the network disease management module, and the mobile disease management module cooperate to:
provide a second disease management function using the mobile disease management service in the mobile interaction mode to supplement a first disease management function provided using the voice-based disease management service in the voice interaction mode.
9. The system of claim 1 wherein the disease management database, the network disease management module, and the mobile disease management module cooperate to:
provide a notification via both the voice interaction mode and the mobile interaction mode; and
clear the notification from one mode of communication in response to an indication that the notification has been acknowledged via the other mode of communication.
10. The system of claim 1 wherein the voice service application is configured to: perform natural language processing on voice data associated with a received voice command to identify spoken patient health data in the received voice command; and
create recorded patient health portal data, based on the received spoken patient health data, in a first format similar to a second format of the stored health data in the disease management database.
11. The system of claim 1 wherein the network disease management module comprises:
an audio prompt generator configured to create an actionable voice prompt for delivery to the patient from the voice service application via a voice service platform, the actionable voice prompt being created based on the gathered health data received from the one or more connected health devices.
12. The system of claim 1 wherein the network disease management module comprises:
a disease parameter tracker configured to communicate with one or more connected health devices to receive gathered health data;
a pattern tracker, in communication with the disease parameter tracker, configured to generate one or more scores based on how far a patient's measured disease parameter is from a clinician-set target disease parameter; a prompt content generator, in communication with the pattern tracker, configured to generate patient-specific content identifying one or more targeted disease parameters that require attention to improve the patient's health condition; and
an audio prompt generator, in communication with the prompt content generator, configured to convert the generated patient-specific content from the prompt content generator into actionable voice prompts for delivery to the patient via a voice service platform.
13. The system of claim 12 wherein:
the voice service application is configured to perform natural language processing on a received voice command to identify spoken patient health data in the received voice command, and to format the spoken patient health data similar to stored health data to facilitate obtaining the stored health data related to the voice command, and
wherein the audio prompt generator is in communication with the voice service application.
14. The system of claim 11 or claim 12 wherein the actionable voice prompt comprises a conversational-style prompt including: data for improvement of the patient's health condition; and
a behavior change component associated with the data for improvement of the patient's health condition and customized to the patient.
15. The system of claim 14 wherein the behavior change component is created based on the relationship of the data for improvement of the patient's health condition with an associated target.
16. The system of claim 12 wherein the disease parameter tracker comprises an activity/blood sugar level tracker, and wherein the one or more connected health devices are configured to receive the gathered health data selected from the group consisting of: blood sugar level; exercise data; weight; amount of sleep; food intake; geolocation data; and time of day.
17. The system of claim 1 wherein the network disease management module is in communication with a health information portal storing health portal data, and the network disease management module comprises:
a question intent converter configured to convert voice data associated with a patient's voice command to a patient inquiry including additional patient context data in a format compatible with the health portal data;
a pre-processor, in communication with the question intent converter and with the health portal data, configured to convert, using a lookup table, data in the patient inquiry to a health portal query to facilitate health portal data content lookup; and
a post-processor, in communication with the health portal data and the voice service platform, configured to modify the obtained health portal data prior to providing the context-sensitive voice feedback to the patient.
18. The system of claim 1 wherein:
the voice service application is configured to perform natural language processing on voice data associated with the received voice command to identify the spoken patient health data in the received voice data, and to format the spoken patient health data similar to stored health data in the disease management database to facilitate obtaining the stored health data related to the voice command, and
wherein one or more of the question intent converter and the post-processor is in communication with the voice service application.
19. The system of claim 1 further comprising:
a secure patient portal, in communication with the network disease management module, configured to provide authorization-based access to patient data to a relative or clinician.
20. The system of claim 19 further comprising:
a secure messaging module, configured to enable the patient to securely interact with the clinician without either the patient or the clinician having to share personal contact information.
21. The system of claim 20 wherein the secure messaging module is configured to enable sending a voice message between the patient and the clinician.
22. The system of claim 19 further comprising:
a social connection module configured to pair the patient with a mentor for secure interaction via the secure patient portal.
23. The system of claim 4 wherein the network disease management module and the disease management database are configured to:
compare, to a target, selected patient health data collected from the one or more connected health devices; and
send a notification to a loyalty rewards system to award loyalty rewards to the patient in response to the selected patient health data meeting or exceeding the target.
24. The system of claim 4 wherein the one or more connected health devices are selected from the group consisting of: a fitness tracker, a weight scale, a glucometer, a spirometer, and a disease-specific data collection apparatus.
25. The system of claim 1 wherein the network disease management module comprises the disease management database.
26. The system of claim 1 wherein the network disease management module is configured to handle activity from both the mobile disease management module and from a smart speaker configured to receive a voice command from the patient associated with the voice-based disease management services.
27. The system of claim 1 wherein the network disease management module comprises a content management system configured to provide an omni-channel patient questionnaire to the patient via the voice interaction mode using the voice service application and/or the mobile interaction mode using the mobile disease management application.
28. The system of claim 19 wherein the network disease management module and the secure patient portal cooperate to generate and cause the display of a list display selector configured to alternate between displaying a list of all patients and a list of higher-risk patients.
29. The system of claim 1 wherein the network disease management module comprises an electronic medical record (EMR) interface configured to interoperate with one or more EMR systems to access or update EMR data based on the spoken health data, the obtained stored health data and/or content of the context-sensitive voice feedback.
30. The system of claim 1 further comprising:
a smart speaker including a non-verbal wake condition detector configured to detect a non-verbal wake condition in the absence of a verbal wake word.
31. The system of claim 30 wherein the smart speaker comprises:
a motion sensor or camera; and
the non-verbal wake condition detector is configured to receive and process an output of the motion sensor or camera to determine occurrence of the non-verbal wake condition.
32. The system of claim 31 wherein the non-verbal wake condition detector comprises a presence detection module and is configured to, in response to detection of a non-verbal wake condition by the motion sensor or camera:
detect a non-verbal wake condition; and
activate operation of the smart speaker in response to the detected non-verbal wake condition and in the absence of a verbal wake word.
33. The system of claim 32 wherein detection of the non-verbal wake condition comprises a step selected from the group consisting of: detection of presence of a user; detection of a non-authenticated wake action or gesture; and detection of an authenticated wake action to authorize a specific user.
34. The system of claim 32 wherein detection of the non-verbal wake condition comprises detection of an authenticated wake action to authorize a specific user, and wherein the non-verbal wake condition detector is configured to provide customized feedback to the patient based on a detected condition of the authenticated user.
35. The system of claim 34 wherein providing customized feedback is performed based on a detected facial expression, and wherein the system adjusts content of the feedback, a tone of the feedback, or both, in response to the detected condition.
36. A network disease management apparatus comprising:
a processor; and
one or more non-transitory machine readable memories, storing statements and
instructions for execution by the processor to:
receive voice data associated with a voice command generated by a patient; identify spoken patient health data in the received voice command;
obtain stored health data related to the spoken patient health data; and generate voice feedback data for providing context-sensitive voice feedback for the patient based on the obtained stored health data and on the identified spoken patient health data, the context-sensitive voice feedback suggesting a course of action or requesting additional information based on the received voice command.

37. A processor-implemented method of voice-enabled disease management comprising:
at a network device,
receiving voice data associated with a voice command generated by a patient; identifying spoken patient health data in the received voice command;
obtaining stored health data related to the spoken patient health data; and generating voice feedback data for providing context-sensitive voice feedback to the patient based on the stored health data and on the identified spoken patient health data, the context-sensitive voice feedback suggesting a course of action or requesting additional information based on the received voice command.
38. A non-transitory machine readable medium having stored thereon statements and instructions for execution by a processor to perform the method of claim 37.
39. An apparatus comprising:
at least one processor; and
memory storing computer-readable instructions that, when executed by the at least one processor, cause the apparatus to perform the method of claim 37.
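Several of the claims above (e.g. claim 12's pattern tracker) describe generating scores based on how far a patient's measured disease parameter is from a clinician-set target. The claims fix no formula, so the linear scoring rule below is purely an illustrative assumption:

```python
def parameter_score(measured: float, target: float, tolerance: float) -> float:
    """Score in [0, 1]: 1.0 when on target, falling linearly to 0.0 once the
    deviation reaches the clinician-set tolerance.

    The linear rule and the `tolerance` parameter are illustrative
    assumptions; the claims only require "one or more scores based on how
    far a patient's measured disease parameter is from a clinician-set
    target disease parameter".
    """
    deviation = abs(measured - target)
    return max(0.0, 1.0 - deviation / tolerance)
```

A prompt content generator could then, for example, flag any parameter whose score falls below a threshold as one "requiring attention" when composing actionable voice prompts.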
PCT/CA2018/000180 2017-11-28 2018-09-27 System and method for voice-enabled disease management WO2019104411A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762591349P 2017-11-28 2017-11-28
US62/591,349 2017-11-28

Publications (1)

Publication Number Publication Date
WO2019104411A1 true WO2019104411A1 (en) 2019-06-06

Family

ID=66663717

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2018/000180 WO2019104411A1 (en) 2017-11-28 2018-09-27 System and method for voice-enabled disease management

Country Status (1)

Country Link
WO (1) WO2019104411A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112185590A (en) * 2020-10-28 2021-01-05 华中科技大学同济医学院附属协和医院 Patient intelligent warehousing return visit system and working method thereof
US11665118B2 (en) 2020-06-25 2023-05-30 Kpn Innovations, Llc. Methods and systems for generating a virtual assistant in a messaging user interface

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090088607A1 (en) * 2007-09-28 2009-04-02 Visual Telecommunications Network, Inc. Cell phone remote disease management
US20090270690A1 (en) * 2008-04-29 2009-10-29 University Of Miami System and method for using interactive voice-recognition to automate a patient-centered best practice approach to disease evaluation and management
US20100185063A1 (en) * 1999-06-03 2010-07-22 Cardiac Pacemakers, Inc. System and Method for Providing Voice Feedback for Automated Remote Patient Care
CN103605892A (en) * 2013-11-25 2014-02-26 方正国际软件有限公司 Health care system based on voice driving
WO2015021208A1 (en) * 2013-08-06 2015-02-12 Gamgee, Inc. Apparatus and methods for assisting and informing patients
US20160324464A1 (en) * 2015-05-08 2016-11-10 Pops! Diabetes Care, Inc. Blood glucose management system



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18882319

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18882319

Country of ref document: EP

Kind code of ref document: A1