WO2019200158A1 - Systems and methods for improved communication with patients - Google Patents


Info

Publication number
WO2019200158A1
Authority
WO
WIPO (PCT)
Prior art keywords
patient
processor
multiple choice
question
questions
Prior art date
Application number
PCT/US2019/027071
Other languages
French (fr)
Inventor
Ori BELSON
Original Assignee
Belson Ori
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Belson Ori filed Critical Belson Ori
Publication of WO2019200158A1 publication Critical patent/WO2019200158A1/en

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H80/00 ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0002 Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network
    • A61B5/0015 Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network characterised by features of the telemetry system
    • A61B5/0022 Monitoring a patient using a global network, e.g. telephone networks, internet
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/74 Details of notification to user or communication with user or patient; user input means
    • A61B5/7465 Arrangements for interactive communication between patient and care services, e.g. by using a telephone network
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/016 Input arrangements with force or tactile feedback as computer generated output to the user
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02 Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/0202 Constructional details or processes of manufacture of the input device
    • G06F3/0219 Special purpose keyboards
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02 Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023 Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233 Character input methods
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00 ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/20 ICT specially adapted for the handling or processing of patient-related medical or healthcare data for electronic clinical trials or questionnaires

Definitions

  • The field of the present disclosure relates to patient communication, and more particularly to communication systems and methods designed for use with patients having impaired motor, visual, and/or cognitive functions.
  • ICU: intensive care unit
  • a plastic tube that is inserted into a patient’s throat via the mouth, or a tube passed directly through the neck (i.e., a tracheotomy).
  • AAC: Augmentative and Alternative Communication systems
  • the system further comprises a searchable database containing a plurality of multiple choice questions.
  • the processor includes a conditional decision tree algorithm, application or program designed to choose one of the multiple choice questions from the database based on each response from the patient.
  • the processor may contain an artificial intelligence or machine-learning algorithm that chooses the questions from the database and/or creates new questions based on previous learning.
  • the conditional decision tree algorithm causes the processor to choose a specific series of multiple choice questions based on responses from the patient until the patient communicates his or her desires.
  • the series of multiple choice questions preferably starts with a broad or generic question (e.g., “Are you feeling pain?”) and progressively gets more narrow based on the patient’s responses, such that the patient can effectively communicate their wants or needs to a third party, such as a nurse, doctor or family member.
  • the multiple choice questions are binary and therefore only have two possible responses (i.e., Yes/No or True/False). Having a limited number of choices makes it easier for the impaired or disabled patient to respond to the questions.
  • the binary questions allow the processor to move more quickly through the decision tree algorithm to help the patient communicate more effectively.
  • the multiple choice questions may be ternary and thus have three possible responses (i.e., Yes, No, and “Maybe” or “I don’t know”). If the patient responds with the third choice, the processor is configured to transmit a follow-up ternary question with different wording and/or move to a different branch of the tree in an attempt to continue the communication process with the patient.
  • the processor may be configured to store information related to the patient, such as the patient’s overall condition, previous or recurrent complaints from the patient, time of day, surgery performed, medication dosage or schedule and the like.
  • the processor is configured to choose the multiple choice questions based on this information. For example, in certain embodiments, the processor will choose the first multiple choice question based on previous responses of the patient or recurrent issues that the patient has experienced during the hospital stay (e.g., pain in the left side). This allows the patient to move through the decision tree of questions and arrive at his or her communication goal more quickly and efficiently.
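As an illustrative sketch (not part of the disclosure), the history-based selection of a first question described above might look like the following; the record fields and question texts are hypothetical:

```python
# Hypothetical question database keyed by complaint topic. A recurrent
# issue on file lets the system skip the generic opening question.
QUESTION_DB = {
    "generic":   "Are you feeling pain?",
    "left_pain": "Is the pain in your left side again?",
    "thirst":    "Are you thirsty?",
}

def first_question(patient_record):
    """Start from a recurrent issue when one is on file, else ask broadly."""
    complaint = patient_record.get("recurrent_complaint")
    if complaint in QUESTION_DB:
        return QUESTION_DB[complaint]
    return QUESTION_DB["generic"]
```

A real system would draw on richer stored context (surgery performed, medication schedule, time of day); the lookup above only illustrates the shortcut through the decision tree.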
  • the communication system of the present disclosure may further comprise a portable device, such as a computer, tablet, mobile phone or the like, that houses the processor and is coupled to the input and output devices.
  • the output device comprises a speaker, which may optionally include a headset, configured to transmit an audible signal to the patient. This allows the patient to hear the questions from the processor without having to move or even open their eyes.
  • the portable device may further include a user interface directly or remotely (e.g., wirelessly) connected to the processor and configured to interact with the decision tree algorithm.
  • the user interface will allow a third party, such as the caregiver, to manually choose the first question or one or more of the succeeding questions (e.g., disrupt the automatic decision tree process and jump to another series of questions).
  • the user interface may also allow the caregiver or a family member to transmit a message directly to the patient.
  • the patient input device preferably comprises at least one contact surface removably engageable with a surface of the patient’s skin.
  • the input device comprises first and second sensors, each having a contact surface for contacting a different part of the patient’s body.
  • the first sensor is preferably associated with a positive response to a binary question and the second sensor is preferably associated with a negative response to the binary question (or vice versa).
  • the sensors are coupled to the portable device and configured to transfer the information to the processor such that the processor receives a yes/no response to each question transmitted to the patient.
  • one of the sensors will include a vibrating element to allow the patient to differentiate between the sensors (e.g., vibration means a positive or yes response).
  • the sensor on the input device may be configured to respond to movement, pressure or other stimuli.
  • the sensor comprises a finger sensor removably engageable with a finger of the patient and configured to sense pressure applied to its contact surface by the patient’s finger.
  • the finger sensor may have multiple controls for use with one finger, or it may include two sensors removably engageable with two of the patient’s fingers. This allows the patient to simply move one or more of his/her fingers to respond to the binary questions received through the input device.
  • the sensor comprises an eyelid or pupil sensor removably engageable with a portion of the patient’s eye and configured to sense movement of the patient’s eyelid and/or pupil.
  • the eye sensor may be configured such that eye movement in response to a question (e.g., blinking) signals Yes and no eye movement or blinking signals No.
  • the sensor may include two contact surfaces, each attached to one of the patient’s eyes, with movement of each eyelid being associated with either a negative or positive response.
  • the sensor is also configured to detect one or more physiological parameters of the patient, such as heart rate, body temperature, blood pressure or blood flow.
  • the sensor is preferably configured to automatically transmit the physiological parameter(s) to the processor to provide such information to the caregiver.
  • the processor is configured to choose one or more of the multiple choice questions based on the physiological parameters that have been measured or detected. For example, if the sensor detects an abnormally high heart rate, the processor may first ask the patient a question related to the heart rate (e.g., “Are you feeling stressed or anxious?”).
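A minimal sketch of this vitals-driven selection, assuming a normal resting heart-rate range of 60-100 bpm; the low-heart-rate follow-up question is an invented example, not taken from the disclosure:

```python
def first_question_from_vitals(heart_rate_bpm, resting_range=(60, 100)):
    """Pick the opening question from a measured physiological parameter."""
    low, high = resting_range
    if heart_rate_bpm > high:
        return "Are you feeling stressed or anxious?"
    if heart_rate_bpm < low:
        return "Are you feeling dizzy or faint?"   # assumed follow-up
    return "Are you feeling pain?"                 # default broad opener
```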
  • the system further comprises a controller coupled to the processor and configured to transmit an electronic signal to control a device within the patient’s room based on the patient’s response to one of the binary questions.
  • the processor may be wirelessly coupled to a nurse station such that the patient can call a nurse or other caregiver directly through the patient communication system.
  • a method for communicating with a patient comprises transmitting a first multiple choice question to a patient, detecting a response from the patient, processing the response and transmitting a second multiple choice question to the patient based on the response.
  • the second multiple choice question is preferably generated by retrieving it from a database of potential questions.
  • the method includes executing a conditional decision tree program or algorithm to choose a specific series of multiple choice questions from the database based on responses from the patient until the patient communicates an outcome.
  • the method may further comprise storing information related to the patient and choosing the first question or succeeding questions based on this information.
  • the information may comprise a specific condition of the patient, recurring issues with the patient, the time of day or any other information that may be relevant.
  • the information will allow the patient to move through the conditional decision tree more quickly and efficiently.
  • the detecting step comprises contacting an outer skin surface of the patient with a contact surface of a sensor and detecting pressure or movement by the patient. The detected movement or pressure is then relayed to the processor.
  • a first portion of the patient’s skin is contacted with a first sensor and a second portion of the patient’s skin is contacted with a second sensor.
  • the first sensor is associated with either a positive or negative response to a binary question by the patient and the second sensor is associated with the opposite response.
  • the method may further comprise vibrating one of the sensors so that the patient can differentiate between the two.
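The method steps above (transmit a question, detect the response, process it, transmit the next question) can be sketched as a simple loop; `speak` and `sense` are hypothetical stand-ins for the output device and the skin-contact sensors, and the dict-based tree is an assumed representation:

```python
def question_loop(tree, speak, sense):
    """Walk a yes/no question tree until the patient communicates an outcome.

    Each internal node is {"question": str, "yes": node, "no": node};
    each leaf is {"outcome": str}.
    """
    node = tree
    while "outcome" not in node:
        speak(node["question"])       # step 1: transmit the question audibly
        answer = sense()              # step 2: detect "yes" or "no"
        node = node[answer]           # steps 3-4: process, pick next question
    return node["outcome"]
```

In a deployed system `speak` would drive the speaker or headset and `sense` would block on the contact sensors; here they are injected so the loop can be exercised with scripted responses.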
  • a system for communicating with a patient comprises a processor, an output device and a patient input device both coupled to the processor.
  • the output device is configured to transmit an audible signal to the patient from the processor.
  • the patient input device has a contact surface for contacting an outer skin surface of the patient and is configured to sense pressure or movement from the skin surface as a response to the audible signal.
  • FIG. 1 is a representative drawing of a system for communicating with a patient according to the present disclosure
  • FIG. 2 is a representative drawing of an alternative embodiment of a system for communicating with a patient according to the present disclosure
  • FIG. 3 illustrates an exemplary patient input device for the communication system of FIG. 2;
  • FIG. 4 is a flowchart of a conditional decision tree according to systems and methods of the present invention.
  • FIG. 5 is a flowchart of another conditional decision tree according to systems and methods of the present invention.
  • FIG. 1 illustrates an exemplary system 100 for advanced patient communications according to one embodiment of the present disclosure.
  • System 100 may be used in a variety of different healthcare settings, such as acute care or general hospitals, specialty care hospitals, nursing homes or long-term care facilities, ambulatory care centers, physicians’ offices, rehabilitation centers or as part of home healthcare systems.
  • System 100 is particularly useful in inpatient acute care or general hospital settings wherein patients have been admitted for acute care for a severe injury or illness or for longer-term care for several days or weeks at a time. In certain cases, these patients may be disabled, acutely ill or have learning or communication disabilities. In other cases, the patients may have been intubated, medicated and/or sedated and thus have severe difficulty speaking, moving or understanding the events around them.
  • system 100 comprises a patient communication device 102 that includes a central processing device or processor, such as a computer attached to a display monitor 104, an output device 106, a patient input device 108 and a second, optional input device 110.
  • communication device 102 may be any suitable mobile device capable of running a program and processing information, such as a mobile phone, tablet, laptop, or even a simple processing device that does not include a display monitor.
  • the patient communication device 102 comprises a Raspberry Pi computer connected to a Raspberry Pi display.
  • the Raspberry Pi is a single-board computer containing a system-on-chip (SoC) with a multicore processor, GPU, ROM and I/O peripherals, along with DDR RAM, an Ethernet port, a USB host and a micro HDMI port.
  • any general-purpose computer having a suitable computer program, application or algorithm (as described below) that is stored in the computer and configured for use with the present disclosure.
  • Such a computer program may also be stored in a computer readable storage medium, such as, but not limited to, USB drives, internal or external hard disk drives, floppy disks, optical disks, CD-ROMs, magnetic-optical disks, read-only memories, random access memories, EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions.
  • Display monitor 104 may be used to visually display information to the patient, family members, friends, nurses, doctors or other hospital staff.
  • patient communication device 102 allows display monitor 104 to sit directly on a flat surface, such as a bedside table.
  • the monitor is attached to a swiveling arm extending from the wall so that it can be extended to various distances from the wall and heights from the floor, thus adapting to the patient's position.
  • display monitor 104 is entirely optional given that the patient will preferably receive information via audio signals (discussed below) and the processor is designed to transmit information received from the patient to another remote wireless provider, processor or server, such as a computer system or server in the hospital (e.g., a nurse station) or a remote processor/server in another building, such as a computer in a physician’s office or family member’s home.
  • Output device 106 preferably includes a speaker (not shown) for transmitting audio signals to the patient.
  • the speaker may be built-in to the processor/computer or part of a separate standalone unit that is coupled to communication device 102.
  • the speaker is preferably connected to a pair of earphones 112, earbuds, canal phones, headsets or the like so that the patient can hear the audio signals without interference from the operating room environment.
  • Earphones 112 are shown connected to communication device 102 with wires, but it will be recognized that they may instead use Bluetooth or any other suitable technology standard for wirelessly exchanging data between fixed and mobile devices over short distances.
  • communications system 100 may include a variety of suitable output devices for transmitting information to the patient.
  • information may be displayed on the display monitor as an alternative to, or in addition to, the speaker and earphones. This would allow the patient to read questions or messages if, for example, he or she is capable of reading the monitor and would prefer this mode of communication, or if the patient has difficulty hearing.
  • the system 100 may further comprise hearing aids (not shown) coupled to the speaker or headset to augment the audible signals.
  • patient input device 108 comprises a pair of patient contact devices 120, 122 attached to communication device 102 with wires or other suitable connection expedients (e.g., wireless, Bluetooth or the like).
  • patient contact devices 120, 122 each include bands or straps sized for wrapping around one or more of the patient’s fingers.
  • the straps preferably comprise a fastening element, such as Velcro or the like, to attach contact devices 120, 122 to the patient’s fingers.
  • Each of the contact devices 120, 122 further comprises tactile sensors (not shown) on the surfaces of the straps. In the preferred embodiment, the tactile sensors are located on the outside surface of the straps, but they could also be located on the inside surfaces or both.
  • the tactile sensors are preferably designed to detect a threshold level of movement or pressure from the patient’s fingers and convert this movement or pressure into an electrical signal that is transmitted back to communication device 102.
  • Suitable movement or pressure sensors for use with the present invention include digital or analog pressure sensors that utilize piezoresistive, electromagnetic, capacitive, strain-gauge, or similar force or pressure measurements.
  • Contact devices 120, 122 are configured such that pressure or movement signifies a response by the patient.
  • one of the sensors will be designated by the processor as an affirmative or yes response and the other sensor designated as a negative or no response.
  • Contact devices 120, 122 relay this information through electrical signals to the processor in communication device 102.
  • the patient merely has to move his or her finger to communicate with device 102. This allows the patient to communicate without moving his or her hands, legs or arms (or any other body part) and without even opening his/her eyes.
  • the patient is capable of communicating with device 102 simply by listening to questions or messages from the speaker and moving his or her fingers in response.
  • contact devices 120, 122 will have other functionality designed to provide additional communication options for the patient.
  • one of the sensors may include a feature wherein if the patient moves his/her fingers twice (e.g., up and down, or just twice in a row), this signals the processor to “go back” to the previous question, or to start over.
  • the processor may ask the patient directly if he/she wants to go back to a previous question or start the process over and the patient can answer with an affirmative response (i.e., by moving the finger associated with a yes response).
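A hypothetical sketch of the tactile-sensor logic above: a pressure reading is compared to a threshold, and each of the two contact devices maps to one binary response. The normalized threshold value and sensor identifiers are assumptions for illustration:

```python
PRESSURE_THRESHOLD = 0.5   # normalized sensor units (assumed value)

def interpret(sensor_id, pressure, yes_sensor="sensor_1"):
    """Return 'yes', 'no', or None if the press did not cross the threshold.

    One sensor is designated by the processor as the affirmative response;
    the other is treated as the negative response.
    """
    if pressure < PRESSURE_THRESHOLD:
        return None                   # ignore incidental contact
    return "yes" if sensor_id == yes_sensor else "no"
```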
  • patient input device 108 may include additional contact devices (not shown) that provide additional response options for the patient. For example, if the processor transmits a ternary question to the patient, patient input device 108 may include 3 contact devices, each associated with one of the response options. Alternatively, one of the contact devices 120, 122 in FIG. 1 may be designed to solicit more than one response (i.e., with two sensors, or with no movement signifying a third response option).
  • One of the patient contact devices 120, 122 may include a vibrating element (not shown) suitably coupled to communication device 102. The vibrating element may be configured to vibrate when one of devices 120, 122 is initially placed into contact with the patient’s skin.
  • the vibrating element may vibrate each time a question is posed to the patient from communication device 102 (or each time the patient responds).
  • the vibrating element may be associated with either the positive or the negative response. In either case, the vibrating element allows the patient to differentiate the contact devices from each other so that he or she knows which contact device is associated with a positive answer and which is associated with a negative answer.
  • Patient contact devices 120, 122 may also include sensors for detecting physiological parameters of the patient, such as heart rate, skin temperature, blood pressure and the like.
  • contact devices 120, 122 include sensors designed to detect the patient’s heart rate to determine if, for example, the patient is anxious or withdrawing from their sedation medication. This information is preferably relayed automatically to the caregiver through communication device 102.
  • the processor is configured to use this information to select one or more of the multiple choice or binary questions from the database (discussed in more detail below) to determine why the patient’s heart rate has changed (e.g., “Are you feeling anxious?”).
  • the preferred physiological sensors will include ones ordinarily used for ambulatory monitoring.
  • the sensors may comprise those used in conventional Holter and bedside monitoring applications, for monitoring heart rate and variability, ECG, respiration depth and rate, core temperature, hydration, blood pressure, brain function, oxygenation, skin impedance, and skin temperature.
  • the optional second input device 110 preferably comprises a standard keyboard for use with communication device 102. As with the other components of the system, the keyboard may be directly attached, or wirelessly coupled, to device 102. Input device 110 allows the caregiver (or the patient’s friends or family) to input questions or messages directly into communication device 102. These questions or messages are then relayed to the patient through output device 106, as discussed in more detail below.
  • FIGS 2 and 3 illustrate another embodiment of system 100 that includes a communication device 102, output device 106 and a patient input device 130.
  • patient input device 130 is a single finger contact device having a finger grip 140 with an opening to allow the user to slide his/her finger therethrough to hold input device 130.
  • Patient input device 130 also has a user interface including, but not limited to, a pair of user controls 132, 134.
  • Controls 132, 134 preferably each comprise a button that can be pressed with the patient’s thumb, although any suitable user control may be used, such as knobs, switches, dials, touch-screens and the like.
  • one of the controls 132, 134 is associated with a positive response and the other with a negative response.
  • Input device 130 may further include a vibrating element or some other mechanism to differentiate the two controls from each other for the patient.
  • patient input device 130 may include more than two controls to enable responses to multiple choice questions with more than two possible responses.
  • the present disclosure is not limited to the input devices described herein and may comprise other suitable skin contacting devices for detecting pressure or movement of the patient, such as toe sensors, pupil sensors, eyelid sensors and the like.
  • the input device may be attached to the patient’s toes and configured to sense toe movement to signify responses in a similar manner as described above with the finger sensors.
  • the input device may be attached to the patient’s eyelids and configured to sense movement of the eyelids (i.e., blinking).
  • the sensor(s) may be attached to both eyelids for positive/negative responses or the sensor(s) may be attached to only one eyelid.
  • the sensor may be designed to signify an affirmative response when the eyelid blinks once and a negative response with two blinks.
  • the positive response can be associated with one blink and the negative response with no movement for a particular period of time (e.g., 2-5 seconds).
  • a simple pressure sensor can be used wherein one tap is associated with Yes, two taps with No, and three taps with “go back” to return to a previous question or message.
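The tap scheme just described can be sketched as a small decoder that counts taps arriving within a short window; the 1.5-second grouping window is an assumed value, not specified in the disclosure:

```python
def decode_taps(tap_times, window=1.5):
    """Map 1/2/3 taps within `window` seconds of the first tap to a response.

    tap_times: sorted list of tap timestamps in seconds.
    """
    if not tap_times:
        return None
    start = tap_times[0]
    count = sum(1 for t in tap_times if t - start <= window)
    return {1: "yes", 2: "no", 3: "go back"}.get(count)
```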
  • the processor of the present disclosure is provided with software that includes a conditional “decision-tree” type algorithm, program or application, and a database of multiple choice questions.
  • the multiple choice questions are closed-ended and thus designed to solicit a limited number of responses.
  • the multiple choice questions are binary and specifically designed to solicit a yes/no or true/false response from the patient.
  • the processor is designed to move through the database of questions and develop a decision tree based on the patient’s responses by breaking down the data into smaller and smaller sets while incrementally developing the associated decision tree.
  • the processor will select an initial broad or generic question based on various factors, including, but not limited to, general information about the patient or their injury or illness.
  • the algorithm will then utilize an If-Then type program step to move to a second question.
  • a certain response will progress the processor along the same“branch” of the decision tree, thereby further narrowing down the questions for the patient.
  • a response will cause the processor to jump to a completely different branch that involves a completely different set of questions.
  • the questions may be designed to solicit more than two options or responses.
  • the response options may include yes, no, and “I don’t know”, “maybe”, “go back”, “start over” and the like.
  • the processor may be designed to interpret no response from the patient as “I don’t know”, “maybe” or “go back”.
  • the processor is preferably configured to transmit a follow-up question with different wording in the event that the patient was confused by the first question.
  • the processor may be configured to jump to a different branch of the decision tree to transmit another type of question until the patient answers either yes or no. If the third response is “go back”, then the processor will automatically go back to the immediately preceding question (and will continue to do so, back to the first question, if the patient continues to respond “go back”).
  • Figure 4 illustrates one example of the binary tree approach of the preferred embodiment of the present invention.
  • the first binary question to the patient may be “Are you in pain?”.
  • the question is transmitted to the patient through output device 106 and the patient can either answer yes or no by tapping his/her fingers, toes, eyelids, etc.
  • Patient input device 108 transmits this response back to the processor, which then moves to one limb of the decision tree based on whether the response was yes or no. For example, if the answer to the first question “Are you in pain?” was affirmative, the processor then selects a second binary question, such as“Is the pain under your waist?” and transmits this question to the patient.
  • the processor will switch to a different tree limb and ask a different question (e.g., “Are you thirsty?”).
  • the processor continues to follow these simple and strategically chosen yes/no questions until it eventually reaches a final response that communicates an outcome, which may include the patient’s wants or needs at that given time (e.g., “My throat hurts”, “I need a drink of water” or “Where is my nurse?”).
  • the processor may include a machine learning or other suitable program or application of artificial intelligence that provides the processor with the ability to automatically learn and improve from experience without being explicitly programmed.
  • the machine learning application will accumulate observations or data from a plurality of patients and combine this learning to look for patterns in the data and make better decisions on which questions to select for each patient. For example, the machine learning application may find specific patterns of complaints associated with certain injuries, diseases or disorders and then use that learning to move questions related to those complaints forward in the decision tree.
  • the processor is capable of “learning” that patients with a certain injury often feel pain in a particular area of the body at certain points in time after the initial surgery or hospital admission. The processor will then ask questions about pain in those areas of the body earlier in the decision tree process to make the communication process quicker and easier for the patient and hospital staff.
  • communication device 102 transmits this outcome to the appropriate caregiver using various methods, such as SMS, email, printing at the nurse print station, WhatsApp, playing the message on a speaker, displaying the message on display monitor 104 or another monitor in the hospital, or the like.
  • Device 102 may also communicate to the patient that his or her desires have been transmitted to the appropriate person.
  • the system may further allow the caregiver to directly send a message to the patient through communication device 102 (e.g., “Received your message that you are thirsty. A nurse will be in your room shortly to get you a drink”). In this manner, the patient immediately knows that someone has heard his or her complaint and is responding.
  • the processor is programmed to change the sequence of questions in the decision tree based on information that is either inputted into the processor by a caregiver, or based on past information stored in the database.
  • This information may include the patient’s injuries, disease or disorder, the current condition of the patient, time of day, medication schedule, previous or recurrent needs of the patient and the like. For example, if the patient has previously experienced abdominal pain, that option/question will be positioned higher in the decision tree so that this need is identified with fewer questions.
  • the processor may ask that question first in the morning.
  • the processor might ask about the particular pain relieved by that medication to determine if the patient is in need of the medication.
  • the processor is programmed to receive messages and transmit them to the patient through output device 106.
  • the messages may be from a caregiver, or from outside friends or family members.
  • the messages can be received wirelessly by communication device 102 or inputted directly through keyboard 110 or another suitable input device.
  • The processor will preferably include any suitable text-to-speech software to convert text messages to sound so that the messages can be transmitted to the patient through output device 106.
  • the processor may also be programmed to play music in the intervals between questions to ease the patient’s anxiety or it may be programmed to soothe the patient with, for example, repeated suggestions to calm down, to relax and/or to let the communication device 102 know what he or she needs.
  • the processor is preferably programmed to ask the patient if he/she wants to send a message to a caregiver, family or friends.
  • the database includes a plurality of potential messages that the patient can send.
  • the processor may, for example, ask the patient: “Do you want to send a message?”. If the answer is yes, then the processor will scroll through potential individuals to whom the patient may want to send that message. Once the patient has selected a recipient, the processor is then programmed to scroll through another set of possible messages until the patient has chosen the message that he or she wishes to send.
  • the processor may be configured to choose certain messages depending on certain information in the database, such as the time of day, condition of the patient, previous messages sent by the patient and the like. For example, when the patient wakes up, the processor could choose a message to family that says “Good morning. I just woke up and am feeling better.” or “Are you coming to visit me today?”.
  • the processor is programmed to automatically provide information to the patient at periodic times (or the information may be manually inputted by the caregiver or family members). For example, the processor may automatically inform the patient of the day or time, where he or she is, how many days he or she has been there, who his or her nurse is, etc. Alternatively, the processor may be programmed to inform the patient of certain medication times or other scheduled events for that day. The processor may also have functionality that allows the patient to shut down all communication so that the patient can sleep or otherwise relax if he/she does not wish to be disturbed.
  • the processor includes suitable language translation software to communicate with a patient in different languages.
  • the decision tree may include initial questions to determine the patient’s language (if unknown) or to provide the patient with a language preference (e.g., “Would you prefer Spanish or English?”).
  • the processor is configured to speak to the patient in the patient’s preferred language and then to communicate the final message to hospital staff or other caregivers in their own language.
  • communication device 102 may be wireless or directly connected to one or more external electronic devices that, for example, control the patient’s environment, such as room temperature controls, shades/curtains, bed position controls, lights, television, sound systems, nurse call buttons, communication devices for contacting other guests or family members or the like.
  • the processor will include a number of multiple choice decision tree questions related to the control of these electronic devices. For example, the processor may ask the patient: “Do you want to control entertainment?”. If the answer is Yes, then the processor will move through the decision tree until it arrives at the patient’s desire (e.g., turn on the television or change the channel).
  • the decision tree may include options for contacting third parties, such as other guests, the nurse contact station or visiting family members.
  • the processor will be configured to transmit a message to the third party (e.g., SMS, text, email or the like).
  • the decision tree may even include a number of possible messages/questions that the patient can transmit to the third-party.
  • the database will include a number of possible questions that are typically asked by patients in similar situations (e.g., “Where am I?” or “How much longer will I be here?”). Simply by moving through the decision tree, the patient can select the message and the third party by answering multiple choice or yes/no questions.
  • the systems and methods of the present disclosure may be customizable and configured to suit the needs of a particular patient or hospital.
  • the patient is able to move through simple yes or no responses to communicate with hospital staff and/or control environmental conditions without the assistance of another person.
  • the disabled or acutely ill patient is able to traverse the binary tree to communicate pain in a particular part of their body, ask questions of hospital staff, turn on the television, control the room’s temperature, or adjust the bed’s elevation, all through simple binary responses (e.g., by moving one or more of their fingers).
  • the patient is less dependent on external assistance and gains more control of their everyday life. This offers the patient a better experience while in the hospital, and focuses on the quality of care for such patients while interacting with the healthcare system.
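The branch-following and “go back” behavior outlined in the points above can be sketched as a small conditional decision tree. The tree contents below are illustrative stand-ins for Figure 4, and the dictionary layout, function name and outcome strings are assumptions for illustration only, not the patented implementation.

```python
# Illustrative decision tree: inner nodes map a question to the next node
# for each answer; a plain string is a leaf, i.e. the outcome sent to the
# caregiver. Questions/outcomes are hypothetical examples.
TREE = {
    "start":   {"q": "Are you in pain?",
                "yes": "waist", "no": "thirsty"},
    "waist":   {"q": "Is the pain under your waist?",
                "yes": "The pain is below my waist", "no": "My throat hurts"},
    "thirsty": {"q": "Are you thirsty?",
                "yes": "I need a drink of water", "no": "Where is my nurse?"},
}

def traverse(tree, answers):
    """Walk the tree with patient answers ('yes', 'no' or 'go back')."""
    history = ["start"]              # visited-node stack enabling "go back"
    for answer in answers:
        if answer == "go back":
            if len(history) > 1:     # never go back past the first question
                history.pop()
            continue
        nxt = tree[history[-1]][answer]
        if nxt in tree:              # inner node: keep asking questions
            history.append(nxt)
        else:                        # leaf reached: outcome for caregiver
            return nxt
    return None                      # ran out of answers before an outcome

# start -> waist, "go back" to start, then "no" -> thirsty, "yes" -> outcome
print(traverse(TREE, ["yes", "go back", "no", "yes"]))
# → I need a drink of water
```

In this sketch the processor simply replays patient responses against the tree; an “If-Then” step of the kind described above corresponds to one dictionary lookup per answer.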

Abstract

The present disclosure provides a system for communicating with a patient comprising an output device configured to transmit a first binary (i.e., yes or no) question to the patient, a sensor configured to receive a response from the patient to the binary question and a processor coupled to the output device and the sensor. The processor includes a conditional decision tree algorithm programmed to choose the binary questions from a searchable database based on each response from the patient. Responding to simple yes or no questions that are directly transmitted to a processor allows the patient to easily communicate without having to move their hands/arms or even open their eyes. The processor can then continue to ask follow-on questions based on the simple binary responses of the patient until the patient is able to communicate a specific want or need to the caregiver.

Description

SYSTEMS AND METHODS FOR IMPROVED COMMUNICATION WITH PATIENTS
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] The present application claims the benefit of US Provisional Application Serial No. 62/657,762 filed April 14, 2018, the complete disclosure of which is hereby incorporated by reference in its entirety for all purposes.
BACKGROUND OF THE INVENTION
[0002] The field of the present disclosure relates to patient communication and more particularly to communication systems and methods designed for use with patients having impaired motor, visual and/or cognitive functions.
[0003] Many patients cannot communicate with health care professionals or family members due to different clinical conditions. For example, patients in an intensive care unit (ICU) are typically disabled and acutely ill. The effects of the drug treatments, along with the acute nature of their medical condition, render the patient unable to breathe for themselves, and they may have a degree of physical weakness. Most patients in an ICU are intubated and, therefore, require mechanical ventilation to assist with breathing. Intubation typically involves either a plastic tube that is inserted into a patient’s throat via their mouth or a tube directly through the neck (i.e., a tracheotomy).
[0004] While patients require help with breathing, they are unable to communicate effectively using speech. This inability to speak often adds extra stress, which may require extra doses of sedation to calm the patients down. One of the reasons for this stress is that the patient feels that he or she has no control of their own situation and may be completely disoriented (i.e., they may have no idea of the date/time or where they are). This starts a vicious cycle of sedation and an inability to wean off ventilation. In addition, the patients are often too weak to move and/or they are tied to multiple tubes, drips, catheters and wires, making it very difficult, if not impossible, for them to write down their own wants and needs.
[0005] Without effective communication, the patient may not receive the standard of care he or she would otherwise receive if he or she were able to successfully communicate. The lack of communication also creates unnecessary levels of anxiety, which the patient must endure. Nurses and hospital staff ask the patient many questions pertaining to their prognosis and progress, which may never get fully or even adequately answered. A doctor or nurse is not able to treat a symptom about which they know little or nothing. In addition, other problems arise due to the insufficient communication from the patient. Localized areas of pain are often misdiagnosed, resulting in over-medication, or the medication of an area which is not the source of pain. Proper and essential treatment given in an adequate and timely manner will help resolve or prevent many post-operative complications and decrease the patient's length of stay in the hospital. This begins with providing the patient a clear and precise means of communication.
[0006] In the past, communication with critically ill patients primarily consisted of body language, such as lip reading, gestures and head nods. These methods of using body language to communicate, however, can be time consuming, inadequate to meet all communication needs and frustrating for both patients and nurses. Using an alternative method of communication can also be very difficult for an intubated ICU patient. Oral intubation, for example, makes it difficult for the patient to mouth words or for the health care practitioner to read the patient’s lips.
[0007] Over the years, there have been several approaches to improving communication with patients in the ICU. The most common methods used today are whiteboards and emoji boards, such as the EazyBoard. Using these simple systems, patients attempt to write down their wants and needs or they point to them when assisted by a nurse or doctor. One of the problems with these approaches is that they generally require patients to have reasonable motor function and some ability to see and understand the whiteboards or emoji boards. Patient visual and motor functions, however, are often impaired by the surgical procedure or by post-operative medication. Weakness brought on by their condition and/or medication makes it difficult for patients to focus for an extended duration of time. In addition, this weakness can affect the movement of their hands and arms and make writing or gesturing more difficult. Moreover, hospital staff may not fully utilize these communication systems because the whiteboards and emoji boards require someone to be in the room to understand what the patient is trying to communicate. Unfortunately, time constraints and costs often make this prohibitive.
[0008] More advanced approaches to solving these communication problems include Augmentative and Alternative Communication Systems (AAC). These AAC systems typically involve a display monitor with a user interface designed specifically for patients. While these AAC systems solve some of the communication problems, they are typically expensive and difficult for ICU patients to use. For example, many of these systems require patients to focus on a screen, which can be difficult due to partial sedation and/or general weakness and disorientation from the procedure or condition of the patient. Moreover, even these more advanced systems require some basic motor function to control the user interface.
[0009] Accordingly, it would be desirable to provide improved patient communication systems that are more user friendly for patients with impaired motor function or patients that may experience difficulties focusing on, and understanding, whiteboards, monitors or other visual interfaces.
SUMMARY OF THE INVENTION
[0010] The following presents a simplified summary of the claimed subject matter in order to provide a basic understanding of some aspects of the claimed subject matter. This summary is not an extensive overview of the claimed subject matter. It is intended to neither identify key or critical elements of the claimed subject matter nor delineate the scope of the claimed subject matter. Its sole purpose is to present some concepts of the claimed subject matter in a simplified form as a prelude to the more detailed description that is presented later.
[0011] The present disclosure provides a system for communicating with a patient comprising an output device configured to transmit a multiple choice or closed-ended question to the patient, a patient input device configured to receive a response from the patient to the multiple choice question and a processor coupled to the input and output devices. The processor is configured to receive the response from the patient input device and transmit a second multiple choice question to the output device based on the response to the first multiple choice question. The processor is programmed to continue to ask follow-on questions requiring only a simple multiple choice response from the patient until the patient is capable of communicating an outcome (i.e., his/her wants or needs). Responding to simple questions that are directly transmitted to a processor allows a patient with impaired motor, visual and/or cognitive functions to easily communicate with a caregiver, family member or others.
[0012] In a preferred embodiment, the system further comprises a searchable database containing a plurality of multiple choice questions. The processor includes a conditional decision tree algorithm, application or program designed to choose one of the multiple choice questions from the database based on each response from the patient. Alternatively, the processor may contain an artificial intelligence or machine-learning algorithm that chooses the questions from the database and/or creates new questions based on previous learning. The conditional decision tree algorithm causes the processor to choose a specific series of multiple choice questions based on responses from the patient until the patient communicates his or her desires. The series of multiple choice questions preferably starts with a broad or generic question (e.g., “Are you feeling pain?”) and progressively narrows based on the patient’s responses such that the patient can effectively communicate their wants or needs to a third-party, such as a nurse, doctor or family member.
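One minimal way to realize a searchable database of multiple choice questions, keyed on the previous question and the patient's response, might look like the following sketch. The table contents, key format and function name are illustrative assumptions, not the disclosed data model.

```python
# Hypothetical question database: each entry maps (previous question,
# response) to the next, narrower question. (None, None) selects the
# broad opening question. All strings are illustrative examples.
QUESTION_DB = {
    (None, None): "Are you feeling pain?",
    ("Are you feeling pain?", "yes"): "Is the pain under your waist?",
    ("Are you feeling pain?", "no"): "Are you thirsty?",
}

def next_question(db, current=None, response=None):
    """Look up the next question; None when no narrower question exists."""
    return db.get((current, response))

first = next_question(QUESTION_DB)                 # broad opening question
second = next_question(QUESTION_DB, first, "yes")  # narrower follow-up
```

A real system would likely key entries on stable question identifiers rather than the question text itself; the text keys here just keep the sketch compact.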
[0013] In a preferred embodiment, the multiple choice questions are binary and therefore only have two possible responses (i.e., Yes/No or True/False). Having a limited number of choices makes it easier for the impaired or disabled patient to respond to the questions. In addition, the binary questions allow the processor to move more quickly through the decision tree algorithm to help the patient communicate more effectively. Alternatively, the multiple choice questions may be ternary and thus have three possible responses (i.e., Yes/No and “Maybe” or “I don’t know”). If the patient responds with the third choice, the processor is configured to transmit a follow-up ternary question with different wording and/or move to a different branch of the tree in an attempt to continue the communication process with the patient.
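The ternary-response handling described above (reword the question first, then jump to a different branch) could be sketched as follows. The rewordings, the fallback branch and the function signature are assumptions for illustration.

```python
# Hypothetical rewordings for questions a patient did not understand.
REWORDINGS = {
    "Are you feeling pain?": "Does anything hurt right now?",
}
OTHER_BRANCH = "Are you thirsty?"   # assumed entry point of another branch

def handle_response(question, response, retries=0):
    """Return the next question for an unclear answer, else None.

    A first "maybe"/"I don't know" re-asks the question with different
    wording; a second unclear answer jumps to a different tree branch.
    """
    if response in ("yes", "no"):
        return None                  # normal traversal continues
    if retries == 0 and question in REWORDINGS:
        return REWORDINGS[question]  # same question, different wording
    return OTHER_BRANCH              # switch to a different branch
```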
[0014] In certain embodiments, the processor may be configured to store information related to the patient, such as the patient’s overall condition, previous or recurrent complaints from the patient, time of day, surgery performed, medication dosage or schedule and the like. The processor is configured to choose the multiple choice questions based on this information. For example, in certain embodiments, the processor will choose the first multiple choice question based on previous responses of the patient or recurrent issues that the patient has experienced during the hospital stay (e.g., pain in the left side). This allows the patient to move through the decision tree of questions and arrive at his or her communication goal more quickly and efficiently.
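A simple way to bias the opening question toward stored patient information, as this paragraph describes, is to score and sort candidate questions. The scoring weights, field names and candidate questions below are illustrative assumptions, not the disclosed scheme.

```python
def rank_first_questions(candidates, patient):
    """Order candidate opening questions using stored patient information."""
    def score(q):
        s = 0
        if q["topic"] in patient.get("recurrent_complaints", ()):
            s += 2          # recurring issues move higher in the tree
        if q["topic"] in patient.get("due_medication_topics", ()):
            s += 1          # ask about pain relieved by a due medication
        return -s           # highest-scoring question first
    return sorted(candidates, key=score)

CANDIDATES = [
    {"q": "Are you thirsty?", "topic": "thirst"},
    {"q": "Is your abdomen hurting?", "topic": "abdominal pain"},
]
PATIENT = {"recurrent_complaints": ["abdominal pain"]}
# For this patient, the abdominal-pain question would be asked first.
```

Because Python's `sorted` is stable, candidates with equal scores keep their database order, so the bias only reorders questions the stored information actually flags.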
[0015] The communication system of the present disclosure may further comprise a portable device, such as a computer, tablet, mobile phone or the like, that houses the processor and is coupled to the input and output devices. In an exemplary embodiment, the output device comprises a speaker, which may optionally include a headset, configured to transmit an audible signal to the patient. This allows the patient to hear the questions from the processor without having to move or even open their eyes. The portable device may further include a user interface directly or remotely (e.g., wirelessly) connected to the processor and configured to interact with the decision tree algorithm. In certain embodiments, the user interface will allow a third party, such as the caregiver, to manually choose the first question or one or more of the succeeding questions (e.g., disrupt the automatic decision tree process and jump to another series of questions). The user interface may also allow the caregiver or a family member to transmit a message directly to the patient.
[0016] The patient input device preferably comprises at least one contact surface removably engageable with a surface of the patient’s skin. In one embodiment, the input device comprises first and second sensors, each having a contact surface for contacting a different part of the patient’s body. The first sensor is preferably associated with a positive response to a binary question and the second sensor is preferably associated with a negative response to the binary question (or vice versa). The sensors are coupled to the portable device and configured to transfer the information to the processor such that the processor receives a yes/no response to each question transmitted to the patient. In an exemplary embodiment, one of the sensors will include a vibrating element to allow the patient to differentiate between the sensors (e.g., vibration means a positive or yes response).
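The two-sensor yes/no input scheme described above might be mapped to responses as in the following sketch. The sensor identifiers and event format are assumptions; treating a missing response as “I don't know” mirrors the processor behavior noted earlier in this disclosure.

```python
# Assumed mapping: the first (vibrating) sensor signals "yes", the
# second sensor signals "no". Identifiers are hypothetical.
SENSOR_RESPONSES = {"sensor_1": "yes", "sensor_2": "no"}

def read_response(events):
    """Map raw (sensor_id, pressed) events to an answer for the processor.

    No press within the answer window is reported as "I don't know",
    per the interpretation of silence described in the disclosure.
    """
    for sensor_id, pressed in events:
        if pressed and sensor_id in SENSOR_RESPONSES:
            return SENSOR_RESPONSES[sensor_id]
    return "I don't know"
```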
[0017] The sensor on the input device may be configured to respond to movement, pressure or other stimuli. In one embodiment, the sensor comprises a finger sensor removably engageable with a finger of the patient and configured to sense pressure applied to its contact surface by the patient’s finger. The finger sensor may have multiple controls for use with one finger, or it may include two sensors removably engageable with two of the patient’s fingers. This allows the patient to simply move one or more of his/her fingers to respond to the binary questions received through the input device.
[0018] In another embodiment, the sensor comprises an eyelid or pupil sensor removably engageable with a portion of the patient’s eye and configured to sense movement of the patient’s eyelid and/or pupil. The eye sensor may be configured such that eye movement in response to a question (e.g., blinking) signals Yes and no eye movement or blinking signals No. Alternatively, the sensor may include two contact surfaces, each attached to one of the patient’s eyes, with movement of each eyelid being associated with either a negative or positive response.
[0019] In yet another embodiment, the sensor is also configured to detect one or more physiological parameters of the patient, such as heart rate, body temperature, blood pressure or blood flow. The sensor is preferably configured to automatically transmit the physiological parameter(s) to the processor to provide such information to the caregiver. In certain embodiments, the processor is configured to choose one or more of the multiple choice questions based on the physiological parameters that have been measured or detected. For example, if the sensor detects an abnormally high heart rate, the processor may first ask the patient a question related to the heart rate (e.g., “Are you feeling stressed or anxious?”).
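The physiological-parameter-driven question selection in this paragraph could be sketched as below. The threshold values are illustrative placeholders, not clinical limits, and the field names are assumptions.

```python
def first_question(vitals):
    """Pick an opening question from detected physiological parameters.

    Thresholds are illustrative placeholders only.
    """
    if vitals.get("heart_rate_bpm", 0) > 110:
        return "Are you feeling stressed or anxious?"
    if vitals.get("temperature_c", 37.0) > 38.0:
        return "Are you feeling too warm?"
    return "Are you in pain?"        # default broad opening question
```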
[0020] In yet another embodiment, the system further comprises a controller coupled to the processor and configured to transmit an electronic signal to control a device within the patient’s room based on the patient’s response to one of the binary questions. This allows the patient to directly control his or her environmental conditions (e.g., lighting, room temperature, television, sound system and the like) through the communication system. Alternatively, the processor may be wirelessly coupled to a nurse station such that the patient can call a nurse or other caregiver directly through the patient communication system.
[0021] In another aspect of the invention, a method for communicating with a patient comprises transmitting a first multiple choice question to a patient, detecting a response from the patient, processing the response and transmitting a second multiple choice question to the patient based on the response. The second multiple choice question is preferably generated by retrieving it from a database of potential questions. In certain aspects, the method includes executing a conditional decision tree program or algorithm to choose a specific series of multiple choice questions from the database based on responses from the patient until the patient communicates an outcome.
[0022] In one embodiment, the method may further comprise storing information related to the patient and choosing the first question or succeeding questions based on this information. The information may comprise a specific condition of the patient, recurring issues with the patient, the time of day or any other information that may be relevant. Preferably, the information will allow the patient to move through the conditional decision tree more quickly and efficiently.
[0023] In an exemplary embodiment, the detecting step comprises contacting an outer skin surface of the patient with a contact surface of a sensor and detecting pressure or movement by the patient. The detected movement or pressure is then relayed to the processor. In an exemplary embodiment, a first portion of the patient’s skin is contacted with a first sensor and a second portion of the patient’s skin is contacted with a second sensor. The first sensor is associated with either a positive or negative response to a binary question by the patient and the second sensor is associated with the opposite response. The method may further comprise vibrating one of the sensors so that the patient can differentiate between the two.
[0024] In another aspect of the invention, a system for communicating with a patient comprises a processor, an output device and a patient input device both coupled to the processor. The output device is configured to transmit an audible signal to the patient from the processor. The patient input device has a contact surface for contacting an outer skin surface of the patient and is configured to sense pressure or movement from the skin surface as a response to the audible signal. This system allows a patient to communicate with the processor simply by hearing sounds and then moving one or more body parts, such as his/her finger(s), toe(s), eyelid(s) or the like.
[0025] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Additional features of the disclosure will be set forth in part in the description which follows or may be learned by practice of the disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
[0026] The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate several embodiments of the disclosure and together with the description serve to explain the principles of the disclosure.
[0027] FIG. 1 is a representative drawing of a system for communicating with a patient according to the present disclosure;
[0028] FIG. 2 is a representative drawing of an alternative embodiment of a system for communicating with a patient according to the present disclosure;
[0029] FIG. 3 illustrates an exemplary patient input device for the communication system of FIG. 2;
[0030] FIG. 4 is a flowchart of a conditional decision tree according to systems and methods of the present invention; and
[0031] FIG. 5 is a flowchart of another conditional decision tree according to systems and methods of the present invention.
DESCRIPTION OF THE EMBODIMENTS
[0032] This description and the accompanying drawings illustrate exemplary embodiments and should not be taken as limiting, with the claims defining the scope of the present disclosure, including equivalents. Various mechanical, compositional, structural, and operational changes may be made without departing from the scope of this description and the claims, including equivalents. In some instances, well-known structures and techniques have not been shown or described in detail so as not to obscure the disclosure. Like numbers in two or more figures represent the same or similar elements. Furthermore, elements and their associated aspects that are described in detail with reference to one embodiment may, whenever practical, be included in other embodiments in which they are not specifically shown or described. For example, if an element is described in detail with reference to one embodiment and is not described with reference to a second embodiment, the element may nevertheless be claimed as included in the second embodiment. Moreover, the depictions herein are for illustrative purposes only and do not necessarily reflect the actual shape, size, or dimensions of the system or illustrated components.
[0033] It is noted that, as used in this specification and the appended claims, the singular forms “a,” “an,” and “the,” and any singular use of any word, include plural referents unless expressly and unequivocally limited to one referent. As used herein, the term “include” and its grammatical variants are intended to be non-limiting, such that recitation of items in a list is not to the exclusion of other like items that can be substituted or added to the listed items.
[0034] The methods presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.
[0035] FIG. 1 illustrates an exemplary system 100 for advanced patient communications according to one embodiment of the present disclosure. System 100 may be used in a variety of different healthcare settings, such as acute care or general hospitals, specialty care hospitals, nursing homes or long-term care facilities, ambulatory care centers, physicians’ offices, rehabilitation centers or as part of home healthcare systems. System 100 is particularly useful in inpatient acute care or general hospital settings wherein patients have been admitted for acute care for a severe injury or illness or for longer-term care for several days or weeks at a time. In certain cases, these patients may be disabled, acutely ill or have learning or communication disabilities. In other cases, the patients may have been intubated, medicated and/or sedated and thus have severe difficulty speaking, moving or understanding the events around them.
[0036] As shown, system 100 comprises a patient communication device 102 that includes a central processing device or processor, such as a computer attached to a display monitor 104, an output device 106, a patient input device 108 and a second, optional input device 110. Although a computer is shown in Fig. 1, communication device 102 may be any suitable mobile device capable of running a program and processing information, such as a mobile phone, tablet, laptop, or even a simple processing device that does not include a display monitor.
[0037] In an exemplary embodiment, the patient communication device 102 comprises a Raspberry Pi computer connected to a Raspberry Pi display. The Raspberry Pi is a single-board computer containing a SOC (System on Chip) with a multicore processor, GPU, ROM and I/O peripherals, together with DDR RAM memory, an Ethernet port, a USB host and a micro HDMI port. However, it will be understood by those skilled in the art that a variety of other suitable computers or processors may be used with the present invention, for example, any general-purpose computer having a suitable computer program, application or algorithm (as described below) that is stored in the computer and configured for use with the present disclosure. Such a computer program may also be stored in a computer readable storage medium, such as, but not limited to, USB drives, internal or external hard disk drives, floppy disks, optical disks, CD-ROMs, magnetic-optical disks, read-only memories, random access memories, EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions.
[0038] Display monitor 104 may be used to visually display information to the patient, family members, friends, nurses, doctors or other hospital staff. One embodiment of patient communication device 102 allows display monitor 104 to sit directly on a flat surface, such as a bedside table. In another embodiment, the monitor is attached to a swiveling arm extending from the wall so that it can be extended to various distances from the wall and heights from the floor, thus adapting to the patient's position. In other embodiments, display monitor 104 is entirely optional given that the patient will preferably receive information via audio signals (discussed below) and the processor is designed to transmit information received from the patient to another remote wireless provider, processor or server, such as a computer system or server in the hospital (e.g., a nurse station) or a remote processor/server in another building, such as a computer in a physician’s office or family member’s home.
[0039] Output device 106 preferably includes a speaker (not shown) for transmitting audio signals to the patient. The speaker may be built into the processor/computer or part of a separate standalone unit that is coupled to communication device 102. As shown, the speaker is preferably connected to a pair of earphones 112, earbuds, canal phones, headsets or the like so that the patient can hear the audio signals without interference from the operating room environment. Earphones 112 are shown connected to communication device 102 with wires, but it will be recognized that they may instead use Bluetooth or any other suitable technology standard for wirelessly exchanging data between fixed and mobile devices over short distances.
[0040] It will also be recognized that communications system 100 may include a variety of suitable output devices for transmitting information to the patient. For example, information may be displayed on the display monitor as an alternative to, or in addition to, the speaker and earphones. This would allow the patient to read questions or messages if, for example, he or she is capable of reading the monitor and would prefer this mode of communication, or if the patient has difficulty hearing. The system 100 may further comprise hearing aids (not shown) coupled to the speaker or headset to augment the audible signals.
[0041] In the preferred embodiment, patient input device 108 comprises a pair of patient contact devices 120, 122 attached to communication device 102 with wires or other suitable connection expedients (e.g., wireless, Bluetooth or the like). As shown, patient contact devices 120, 122 each include bands or straps sized for wrapping around one or more of the patient’s fingers. The straps preferably comprise a fastening element, such as Velcro or the like, to attach contact devices 120, 122 to the patient’s fingers. Each of the contact devices 120, 122 further comprise tactile sensors (not shown) on the surfaces of the straps. In the preferred embodiment, the tactile sensors are located on the outside surface of the straps, but they could also be located on the inside surfaces or both. The tactile sensors are preferably designed to detect a threshold level of movement or pressure from the patient’s fingers and convert this movement or pressure into an electrical signal that is transmitted back to communication device 102. Suitable movement or pressure sensors for use with the present invention include digital or analog pressure sensors that utilize piezoresistive, electromagnetic, capacitive, strain-gauge, or similar force or pressure measurements.
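By way of illustration only (not part of the original disclosure), the threshold-based detection described above might be sketched as follows; the threshold value, normalized units and function name are assumptions made for the sketch.

```python
# Illustrative sketch: converting raw pressure samples from a finger-strap
# tactile sensor into a discrete press event via a simple threshold, as the
# paragraph above describes. Threshold and units are assumed values.

PRESS_THRESHOLD = 0.4  # normalized pressure units (assumption)

def detect_press(samples, threshold=PRESS_THRESHOLD):
    """Return True on the first rising edge that crosses the threshold."""
    previous = 0.0
    for value in samples:
        if previous < threshold <= value:
            return True
        previous = value
    return False
```

A real device would sample the sensor continuously and debounce the signal rather than scanning a fixed list of readings.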
[0042] Contact devices 120, 122 are configured such that pressure or movement signifies a response by the patient. In an exemplary embodiment, one of the sensors will be designated by the processor as an affirmative or yes response and the other sensor designated as a negative or no response. Contact devices 120, 122 relay this information through electrical signals to the processor in communication device 102. In this manner, the patient merely has to move his or her finger to communicate with device 102. This allows the patient to communicate without moving his or her hands, legs or arms (or any other body part) and without even opening his/her eyes. Thus, the patient is capable of communicating with device 102 simply by listening to questions or messages from the speaker and moving his or her fingers in response.
[0043] In certain embodiments, contact devices 120, 122 will have other functionality designed to provide additional communication options for the patient. For example, one of the sensors may include a feature wherein, if the patient moves his/her fingers twice (e.g., up and down or just twice in a row), this signals the processor to “go back” to the previous question, or to start all over again. Alternatively, the processor may ask the patient directly if he/she wants to go back to a previous question or start the process over, and the patient can answer with an affirmative response (i.e., by moving the finger associated with a yes response). This allows the patient to interrupt the conditional decision tree algorithm (discussed in more detail below) if, for example, he or she accidentally presses the wrong finger and answers a question incorrectly, or if the patient becomes confused or frustrated and simply wants to start the process over.
[0044] In other embodiments, patient input device 108 may include additional contact devices (not shown) that provide additional response options for the patient. For example, if the processor transmits a ternary question to the patient, patient input device 108 may include three contact devices, each associated with one of the response options. Alternatively, one of the contact devices 120, 122 in Fig. 1 may be designed to solicit more than one response (i.e., with two sensors, or with no movement signifying a third response option).

[0045] One of the patient contact devices 120, 122 may include a vibrating element (not shown) suitably coupled to communication device 102. The vibrating element may be configured to vibrate when one of devices 120, 122 is initially placed into contact with the patient’s skin. Alternatively, it may vibrate each time a question is posed to the patient from communication device 102 (or each time the patient responds). The vibrating element may be associated with either the positive or the negative response. In either case, the vibrating element allows the patient to differentiate the contact devices from each other so that he or she knows which contact device is associated with a positive answer and which is associated with a negative answer.
[0046] Patient contact devices 120, 122 may also include sensors for detecting physiological parameters of the patient, such as heart rate, skin temperature, blood pressure and the like. In the preferred embodiment, contact devices 120, 122 include sensors designed to detect the patient’s heart rate to determine if, for example, the patient is anxious or withdrawing from their sedation medication. This information is preferably relayed automatically to the caregiver through communication device 102. In addition, the processor is configured to use this information to select one or more of the multiple choice or binary questions from the database (discussed in more detail below) to determine why the patient’s heart rate has changed (e.g., “Are you feeling anxious?”). The preferred physiological sensors will include ones ordinarily used for ambulatory monitoring. For example, the sensors may comprise those used in conventional Holter and bedside monitoring applications, for monitoring heart rate and variability, ECG, respiration depth and rate, core temperature, hydration, blood pressure, brain function, oxygenation, skin impedance, and skin temperature.
[0047] The optional second input device 110 preferably comprises a standard keyboard for use with communication device 102. As with the other components of the system, the keyboard may be directly attached, or wirelessly coupled, to device 102. Input device 110 allows the caregiver (or the patient’s friends or family) to input questions or messages directly into communication device 102. These questions or messages are then relayed to the patient through output device 106, as discussed in more detail below.
[0048] Figures 2 and 3 illustrate another embodiment of system 100 that includes a communication device 102, output device 106 and a patient input device 130. In this embodiment, patient input device 130 is a single finger contact device having a finger grip 140 with an opening to allow the user to slide his/her finger therethrough to hold input device 130. Patient input device 130 also has a user interface including, but not limited to, a pair of user controls 132, 134. Controls 132, 134 preferably each comprise a button that can be pressed with the patient’s thumb, although any suitable user control may be used, such as knobs, switches, dials, touch-screens and the like. In this embodiment, one of the controls 132, 134 is associated with a positive response and the other with a negative response. Input device 130 may further include a vibrating element or some other mechanism to differentiate the two controls from each other for the patient. As in the previous embodiments, patient input device 130 may include more than two controls to enable responses to multiple choice questions with more than two possible responses.
[0049] The present disclosure is not limited to the input devices described herein and may comprise other suitable skin contacting devices for detecting pressure or movement of the patient, such as toe sensors, pupil sensors, eyelid sensors and the like. For example, the input device may be attached to the patient’s toes and configured to sense toe movement to signify responses in a similar manner as described above with the finger sensors. Alternatively, the input device may be attached to the patient’s eyelids and configured to sense movement of the eyelids (i.e., blinking). In this embodiment, the sensor(s) may be attached to both eyelids for positive/negative responses or the sensor(s) may be attached to only one eyelid. In this latter case, the sensor may be designed to signify an affirmative response when the eyelid blinks once and a negative response with two blinks. Alternatively, the positive response can be associated with one blink and the negative response with no movement for a particular period of time (e.g., 2-5 seconds). In yet another embodiment, a simple pressure sensor can be used wherein one tap is associated with Yes, two taps with No and three taps with “go back” to return to a previous question or message.
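For illustration, the single-sensor tap scheme just described (one tap for Yes, two for No, three for “go back”) reduces to a simple mapping; the function and label names below are hypothetical, not taken from the disclosure.

```python
def interpret_taps(tap_count):
    """Map a tap count from a single pressure sensor to a response,
    per the scheme described above: 1 = yes, 2 = no, 3 = go back."""
    responses = {1: "yes", 2: "no", 3: "go_back"}
    # Any unrecognized count is flagged so the system can re-ask the question.
    return responses.get(tap_count, "unknown")
```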
[0050] The processor of the present disclosure is provided with software that includes a conditional “decision-tree” type algorithm, program or application and a database of multiple choice questions. The multiple choice questions are closed-ended and thus designed to solicit a limited number of responses. In the preferred embodiment, the multiple choice questions are binary and specifically designed to solicit a yes/no or true/false response from the patient. The processor is designed to move through the database of questions and develop a decision tree based on the patient’s responses by breaking down the data into smaller and smaller sets while incrementally developing the associated decision tree. The processor will select an initial broad or generic question based on various factors, including, but not limited to, general information about the patient or their injury or illness. Based on the patient’s response to the initial question, the algorithm will then utilize an If-Then type program step to move to a second question. In some cases, a certain response will progress the processor along the same “branch” of the decision tree, thereby further narrowing down the questions for the patient. In other cases, a response will cause the processor to jump to a completely different branch that involves a completely different set of questions.
[0051] In an alternative embodiment, the questions may be designed to solicit more than two options or responses. For example, the response options may include yes, no and “I don’t know”, “maybe”, “go back”, “start over” and the like. Or the processor may be designed to interpret no response from the patient as “I don’t know”, “maybe” or “go back”. In this latter case or third response option, the processor is preferably configured to transmit a follow-up question with different wording in the event that the patient was confused by the first question. Alternatively, the processor may be configured to jump to a different branch of the decision tree to transmit another type of question until the patient answers either yes or no. If the third response is “go back”, then the processor will automatically go back to the immediately preceding question (and continue to do so until it reaches the first question if the patient continues to respond “go back”).
[0052] Figure 4 illustrates one example of the binary tree approach of the preferred embodiment of the present invention. As shown, the first binary question to the patient may be “Are you in pain?”. The question is transmitted to the patient through output device 106 and the patient can either answer yes or no by tapping his/her fingers, toes, eyelids, etc. Patient input device 108 transmits this response back to the processor, which then moves to one limb of the decision tree based on whether the response was yes or no. For example, if the answer to the first question “Are you in pain?” was affirmative, the processor then selects a second binary question, such as “Is the pain under your waist?”, and transmits this question to the patient. If the answer to the first question was no, then the processor will switch to a different tree limb and ask a different question (e.g., “Are you thirsty?”). As the patient answers each binary question, the processor continues to follow these simple and strategically chosen yes/no questions until it eventually reaches a final response that communicates an outcome, which may include the patient’s wants or needs at that given time (e.g., “My throat hurts”, “I need a drink of water” or “Where is my nurse?”).
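The binary-tree traversal of Figure 4 can be sketched minimally as follows. The node structure and helper names are illustrative assumptions; the example questions come from the description above, while the leaf outcome on the "pain below waist" branch is assumed for completeness.

```python
class Node:
    """A decision-tree node: a question with yes/no branches, or a leaf outcome."""
    def __init__(self, text, yes=None, no=None):
        self.text = text  # question text, or final outcome if a leaf
        self.yes = yes
        self.no = no

    def is_leaf(self):
        return self.yes is None and self.no is None

# Example tree following Figure 4 (one leaf outcome assumed, marked below).
tree = Node(
    "Are you in pain?",
    yes=Node("Is the pain under your waist?",
             yes=Node("The pain is below my waist"),   # assumed outcome
             no=Node("My throat hurts")),
    no=Node("Are you thirsty?",
            yes=Node("I need a drink of water"),
            no=Node("Where is my nurse?")),
)

def traverse(node, answer_fn):
    """Ask answer_fn(question) -> bool at each node until reaching a leaf outcome."""
    while not node.is_leaf():
        node = node.yes if answer_fn(node.text) else node.no
    return node.text
```

In practice `answer_fn` would play the question through the output device and wait for a sensor press rather than return immediately.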
[0053] In an alternative embodiment, the processor may include a machine learning or other suitable program or application of artificial intelligence that provides the processor with the ability to automatically learn and improve from experience without being explicitly programmed. In this embodiment, the machine learning application will accumulate observations or data from a plurality of patients and combine this learning to look for patterns in the data and make better decisions on which questions to select for each patient. For example, the machine learning application may find specific patterns of complaints associated with certain injuries, diseases or disorders and then use that learning to move questions related to those complaints forward in the decision tree. Thus, the processor is capable of “learning” that patients with a certain injury often feel pain in a particular area of the body at certain points in time after the initial surgery or hospital admission. The processor will then ask questions about pain in those areas of the body earlier in the decision tree process to make the communication process quicker and easier for the patient and hospital staff.
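A minimal sketch of the reordering step such a learning application might perform, assuming only that outcomes from past sessions are tallied and questions tied to frequent complaints are promoted; all names and data below are hypothetical.

```python
from collections import Counter

def reorder_questions(questions, outcome_history):
    """Promote questions whose associated outcomes occurred most often in
    past patient sessions, so common complaints are asked about earlier."""
    counts = Counter(outcome_history)
    return sorted(questions, key=lambda q: -counts[q["outcome"]])

# Hypothetical data: patients with this injury most often reported incision pain.
questions = [
    {"text": "Are you thirsty?", "outcome": "thirst"},
    {"text": "Is your incision painful?", "outcome": "incision_pain"},
]
history = ["incision_pain", "thirst", "incision_pain"]
```

A full system would condition the tally on injury type and time since admission, as the paragraph above suggests; simple frequency counting is the smallest version of that idea.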
[0054] Once the processor obtains an outcome from the communication (e.g., the needs of the patient), communication device 102 transmits this outcome to the appropriate caregiver using various methods, such as SMS, email, printing at the nurse print station, WhatsApp, playing the message on a speaker, displaying the message on display monitor 104 or another monitor in the hospital, or the like. Device 102 may also communicate to the patient that his or her desires have been transmitted to the appropriate person. The system may further allow the caregiver to directly send a message to the patient through communication device 102 (e.g., “Received your message that you are thirsty. A nurse will be in your room shortly to get you a drink”). In this manner, the patient immediately knows that someone has heard his or her complaint and is responding.
[0055] In an exemplary embodiment, the processor is programmed to change the sequence of questions in the decision tree based on information that is either inputted into the processor by a caregiver or based on past information stored in the database. This information may include the patient’s injuries, disease or disorder, the current condition of the patient, time of day, medication schedule, previous or recurrent needs of the patient and the like. For example, if the patient has previously experienced abdominal pain, that option/question will be positioned higher in the decision tree so that this need is identified with fewer questions. As another example, if the patient is typically thirsty in the morning, the processor may ask that question first in the morning. As yet another example, if the patient is scheduled for pain medication, the processor might ask about the particular pain relieved by that medication to determine if the patient is in need of the medication.
[0056] In another embodiment, the processor is programmed to receive and transmit messages to the patient through output device 106. The messages may be from a caregiver, or from outside friends or family members. The messages can be received wirelessly by communication device 102 or inputted directly through keyboard 110 or another suitable input device. The processor will preferably include any suitable text-to-speech software to convert text messages to sound so that the messages can be transmitted to the patient through output device 106. The processor may also be programmed to play music in the intervals between questions to ease the patient’s anxiety, or it may be programmed to soothe the patient with, for example, repeated suggestions to calm down, to relax and/or to let the communication device 102 know what he or she needs.
[0057] The processor is preferably programmed to ask the patient if he/she wants to send a message to a caregiver, family or friends. The database includes a plurality of potential messages that the patient can send. The processor may, for example, ask the patient: “Do you want to send a message?”. If the answer is yes, then the processor will scroll through potential individuals to whom the patient may want to send that message. Once the patient has selected a recipient, the processor is then programmed to scroll through another set of possible messages until the patient has chosen the message that he or she wishes to send. The processor may be configured to choose certain messages depending on certain information in the database, such as the time of day, condition of the patient, previous messages sent by the patient and the like. For example, when the patient wakes up, the processor could choose a message to family that says “Good morning. I just woke up and am feeling better.” or “Are you coming to visit me today?”.
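The recipient-and-message scrolling described above reduces to offering options one at a time until the patient accepts one. A hypothetical sketch, with all function names and prompt wording assumed:

```python
def scroll_select(options, accept_fn):
    """Offer each option in turn; accept_fn(prompt) -> bool models the
    patient's yes/no answer. Returns the accepted option, or None if the
    patient declines every option."""
    for option in options:
        if accept_fn(f"Do you want to select: {option}?"):
            return option
    return None

def compose_message(recipients, messages, accept_fn):
    """Two-stage selection as described above: first a recipient, then a message."""
    recipient = scroll_select(recipients, accept_fn)
    if recipient is None:
        return None
    message = scroll_select(messages, accept_fn)
    return (recipient, message) if message else None
```

In the actual device, `accept_fn` would speak the prompt through the output device and read the patient's sensor response.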
[0058] In another embodiment, the processor is programmed to automatically provide information to the patient at periodic times (or the information may be manually inputted from the caregiver or family members). For example, the processor may automatically inform the patient of the day or time, where he or she is, how many days he or she has been there, who his or her nurse is, etc. Alternatively, the processor may be programmed to inform the patient of certain medication times or other scheduled events for that day. The processor may also have functionality that allows the patient to shut down all communication so that the patient can sleep or otherwise relax if he/she does not wish to be disturbed.
[0059] In yet another embodiment, the processor includes suitable language translation software to communicate with a patient in different languages. The decision tree may include initial questions to determine the patient’s language (if unknown) or to provide the patient with a language preference (i.e., “Would you prefer Spanish or English?”). The processor is configured to speak to the patient in the patient’s preferred language and then to communicate the final message to hospital staff or other caregivers in their own language.
[0060] Referring now to Figure 5, communication device 102 may be wirelessly or directly connected to one or more external electronic devices that, for example, control the patient’s environment, such as room temperature controls, shades/curtains, bed position controls, lights, television, sound systems, nurse call buttons, communication devices for contacting other guests or family members, or the like. The processor will include a number of multiple choice decision tree questions related to the control of these electronic devices. For example, the processor may ask the patient: “Do you want to control entertainment?”. If the answer is yes, then the processor will move through the decision tree until it arrives at the patient’s desire (e.g., turn on the television or change the channel). The decision tree may include options for contacting third parties, such as other guests, the nurse contact station or visiting family members. In this embodiment, the processor will be configured to transmit a message to the third party (e.g., SMS, text, email or the like). The decision tree may even include a number of possible messages/questions that the patient can transmit to the third party. In this embodiment, the database will include a number of possible questions that are typically asked by patients in similar situations (e.g., “Where am I?” or “How much longer will I be here?”). Simply by moving through the decision tree, the patient can select the message and the third party by answering multiple choice or yes/no questions.
[0061] The systems and methods of the present disclosure may be customizable and configured to suit the needs of a particular patient or hospital. Through fully extendable external connections, the patient is able to move through simple yes or no responses to communicate with hospital staff and/or control environmental conditions without the assistance of another person. For example, the disabled or acutely ill patient is able to traverse the binary tree to communicate pain in a particular part of their body, ask questions of hospital staff, turn on the television, control the room’s temperature, or adjust the bed’s elevation, all through simple binary responses (i.e., by moving one or more of their fingers). As a result, the patient is less dependent on external assistance and gains more control of their everyday life. This offers the patient a better experience while in the hospital, and focuses on the quality of care for such patients while interacting with the healthcare system.
[0062] Hereby, all issued patents, published patent applications, and non-patent publications that are mentioned in this specification are herein incorporated by reference in their entirety for all purposes, to the same extent as if each individual issued patent, published patent application, or non-patent publication were specifically and individually indicated to be incorporated by reference.
[0063] While several embodiments of the disclosure have been shown in the drawings, it is not intended that the disclosure be limited thereto, as it is intended that the disclosure be as broad in scope as the art will allow and that the specification be read likewise. Therefore, the above description should not be construed as limiting, but merely as exemplifications of presently disclosed embodiments. Thus, the scope of the embodiments should be determined by the appended claims and their legal equivalents, rather than by the examples given.

[0064] Persons skilled in the art will understand that the devices and methods specifically described herein and illustrated in the accompanying drawings are non-limiting exemplary embodiments. The features illustrated or described in connection with one exemplary embodiment may be combined with the features of other embodiments. Various alternatives and modifications can be devised by those skilled in the art without departing from the disclosure. Accordingly, the present disclosure is intended to embrace all such alternatives, modifications and variances. As well, one skilled in the art will appreciate further features and advantages of the present disclosure based on the above-described embodiments. Accordingly, the present disclosure is not to be limited by what has been particularly shown and described, except as indicated by the appended claims.

Claims

What is claimed is:
1. A system for communicating to a patient, the system comprising:
a processor;
an output device coupled to the processor and configured to transmit a first multiple choice question to the patient;
a patient input device coupled to the processor and configured to receive a response from the patient to the first multiple choice question; and
wherein the processor is configured to determine a second multiple choice question for the patient based on the response to the first multiple choice question.
2. The system of claim 1 wherein the first and second multiple choice questions are binary questions.
3. The system of claim 1 wherein the first and second multiple choice questions are ternary questions.
4. The system of claim 1 further comprising a searchable database comprising a plurality of multiple choice questions, wherein the processor comprises a conditional program configured to choose one of the multiple choice questions from the database based on each response from the patient.
5. The system of claim 1 wherein the processor is configured to determine a specific series of multiple choice questions based on a series of responses from the patient until the patient communicates an outcome.
6. The system of claim 1 wherein the processor is configured to store information related to the patient and to determine the first multiple choice question based on the information.
7. The system of claim 6 wherein the information comprises a condition of the patient.
8. The system of claim 6 wherein the processor comprises a machine-learning application configured to process the information and determine the first multiple choice question.
9. The system of claim 1 wherein the output device comprises a speaker configured to transmit an audible signal to the patient.
10. The system of claim 1 further comprising a portable device connected wirelessly to a remote server, wherein the processor is housed within the portable device.
11. The system of claim 1 wherein the patient input device comprises a tactile sensor that responds to pressure or movement.
12. The system of claim 1 wherein the patient input device comprises first and
second sensors each having a contact surface configured for contacting a skin surface of the patient, wherein the first sensor corresponds to a positive response to a binary question and the second sensor corresponds to a negative response to the binary question.
13. The system of claim 12 wherein one of the first and second sensors comprises a vibrating element.
14. The system of claim 1 wherein the patient input device is configured to detect a physiological parameter of the patient, wherein the physiological parameter is one of heart rate, temperature, blood pressure or blood flow.
15. The system of claim 1 wherein the patient input device comprises a contact surface removably engageable with a finger of the patient and configured to sense pressure or movement applied to the contact surface by the patient’s finger.
16. The system of claim 1 further comprising a controller coupled to the processor and configured to transmit an instruction to an electronic device based on the patient’s response, the electronic device comprising one of a temperature controller, a light, a bed positioning device, a television or a sound system.
17. A method for communicating with a patient, the method comprising:
transmitting a first multiple choice question to a patient;
detecting a response from the patient to the first multiple choice question;
processing the response; and
transmitting a second multiple choice question to the patient based on the response.
18. The method of claim 17 wherein the first and second multiple choice questions are binary questions.
19. The method of claim 17 wherein the first and second multiple choice questions are ternary questions.
20. The method of claim 17 further comprising retrieving the second multiple choice question from a database of multiple choice questions.
21. The method of claim 17 further comprising determining a specific series of multiple choice questions based on a series of responses from the patient until the patient communicates an outcome.
22. The method of claim 17 further comprising storing information related to the patient and determining the first multiple choice question based on said information, wherein the information comprises a condition of the patient.
23. The method of claim 17 wherein the first and second multiple choice questions are transmitted to the patient in an audible signal.
24. The method of claim 17 wherein the detecting comprises contacting an outer skin surface of the patient with a contact surface of a sensor and detecting pressure or movement applied to the contact surface by the patient.
25. The method of claim 17 wherein the detecting comprises contacting a first portion of an outer skin surface of the patient with a first contact surface of a first sensor and contacting a second portion of the outer skin surface of the patient with a second contact surface of a second sensor.
26. The method of claim 25 wherein the first sensor is associated with a positive response to a binary question and the second sensor is associated with a negative response to the binary question.
27. The method of claim 25 further comprising vibrating one of the first and second sensors.
28. The method of claim 17 further comprising controlling an environmental condition based on the patient’s response.
29. The method of claim 17 further comprising detecting a physiological parameter of the patient, wherein the physiological parameter is one of heart rate, temperature, blood pressure or blood flow.
30. A system for communicating with a patient, the system comprising:
a processor;
an output device coupled to the processor and configured to transmit an audible signal to the patient;
a patient input device having a contact surface configured to contact a skin surface of the patient, the patient input device comprising a sensor for detecting movement or pressure of the skin surface; and
wherein the processor is configured to transmit data to the patient through the output device and to receive data from the patient through the patient input device.
31. The system of claim 30 wherein the output device is configured to transmit a multiple choice question to the patient via the audible signal and wherein the patient input device is configured to transmit a response to the multiple choice question to the processor via the sensor.
32. The system of claim 31 wherein the processor is configured to determine a second multiple choice question for the patient based on the response to the first multiple choice question.
33. The system of claim 31 wherein the multiple choice question is a binary question.
PCT/US2019/027071 2018-04-14 2019-04-11 Systems and methods for improved communication with patients WO2019200158A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862657762P 2018-04-14 2018-04-14
US62/657,762 2018-04-14

Publications (1)

Publication Number Publication Date
WO2019200158A1 true WO2019200158A1 (en) 2019-10-17

Family

ID=68162991

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/027071 WO2019200158A1 (en) 2018-04-14 2019-04-11 Systems and methods for improved communication with patients

Country Status (1)

Country Link
WO (1) WO2019200158A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111210906A (en) * 2020-02-25 2020-05-29 四川大学华西医院 Non-language communication system for ICU patients

Citations (6)

Publication number Priority date Publication date Assignee Title
US20020035486A1 (en) * 2000-07-21 2002-03-21 Huyn Nam Q. Computerized clinical questionnaire with dynamically presented questions
US20040207542A1 (en) * 2003-04-16 2004-10-21 Massachusetts Institute Of Technology Methods and apparatus for vibrotactile communication
US20140242554A1 (en) * 2013-02-28 2014-08-28 King Saud University System for enabling a visually impaired or blind person to use an input device having at least one key
US20140253324A1 (en) * 2013-03-07 2014-09-11 Cellco Partnership (D/B/A Verizon Wireless) Movement monitoring
US20160125705A1 (en) * 2013-07-12 2016-05-05 University Of Iowa Research Foundation Methods And Systems For Augmentative And Alternative Communication
US20160293036A1 (en) * 2015-04-03 2016-10-06 Kaplan, Inc. System and method for adaptive assessment and training

Similar Documents

Publication Publication Date Title
US11721183B2 (en) Methods and apparatus regarding electronic eyewear applicable for seniors
CN110891638B (en) Virtual reality device
Samarel The dying process
Grossbach et al. Promoting effective communication for patients receiving mechanical ventilation
US9286442B2 (en) Telecare and/or telehealth communication method and system
US9721450B2 (en) Wearable repetitive behavior awareness device and method
Mindell Coma: A Healing Journey: a Guide for Family, Friends, and Helpers
US20220113799A1 (en) Multiple switching electromyography (emg) assistive communications device
Cekaite Touch as embodied compassion in responses to pain and distress
Shoemark et al. Music therapy in the medical care of infants
WO2019200158A1 (en) Systems and methods for improved communication with patients
Cameron The nursing ‘How are you?’
EP3992984A1 (en) Interactive reminder companion
CN111191483B (en) Nursing method, device and storage medium
Bivens A neonatal intensive care unit (NICU) soundscape: Physiological monitors, rhetorical ventriloquism, and earwitnessing
Shi et al. Perception Research of Artificial Intelligence in Environmental Public Health Physiotherapy Nursing for the Elderly
CN110278489B (en) Method for controlling playing content according to patient expression and playing control system
JP7284325B2 (en) support system
Godfrey et al. Person-centred care: meaning and practice
Helin Mother-infant relationships in the Nicu: A multiple case study approach
Gerth Supporting children with intellectual and developmental disabilities in health care settings
Miller et al. Active listening and attending: communication skills and the healthcare environment
Miri Using Technology to Regulate Affect: A Multidisciplinary Perspective
Zimmermann et al. Caring for the patient with Alzheimer’s Disease
Shah Zero UI To Help The Elderly

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19785210; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19785210; Country of ref document: EP; Kind code of ref document: A1)