WO2023057232A1 - System and method for supporting a patient's health control - Google Patents


Info

Publication number
WO2023057232A1
Authority
WO
WIPO (PCT)
Prior art keywords
image data
module
remote computer
patient
specific patient
Prior art date
Application number
PCT/EP2022/076460
Other languages
French (fr)
Inventor
Thomas Doerr
Jens Mueller
Matthias Gratz
R. Hollis Whittington
Original Assignee
Biotronik Se & Co. Kg
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Biotronik Se & Co. Kg filed Critical Biotronik Se & Co. Kg
Publication of WO2023057232A1 publication Critical patent/WO2023057232A1/en

Classifications

    • GPHYSICS
    • G16 — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H — HEALTHCARE INFORMATICS, i.e. ICT SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/40 — ICT specially adapted for the handling or processing of medical images, for processing medical images, e.g. editing
    • G16H 40/67 — ICT specially adapted for the management or operation of medical equipment or devices, for remote operation
    • G16H 50/30 — ICT specially adapted for medical diagnosis, for calculating health indices; for individual health risk assessment
    • G16H 50/70 — ICT specially adapted for medical data mining, for mining of medical data, e.g. analysing previous cases of other patients

Definitions

  • the invention is directed to a system comprising a device and a remote computer facility for supporting a patient's health control, to a corresponding method, as well as to a computer program product and to a computer-readable data carrier.
  • the computer program product may be a software routine, e.g. related to hardware support means within the remote computer facility.
  • Examples of such medical devices are a pacemaker (with leads), an implantable loop recorder, an Implantable Leadless Pacer (ILP), an Implantable Leadless Pressure Sensor (ILPS), an Implantable Cardiac Defibrillator (ICD) or an implantable non-transvenous defibrillator, which contain sensors that collect physiological signals and transmit them as data to a physician device or to a remote server where they can be viewed. Further, long-term trends of each of these data are visualized to help guide patient care.
  • the signals are generally processed and presented independently from the other numerous signals or health data.
  • a system for supporting a patient's health control comprising a device and a remote computer facility with the features of claim 1, by a method for supporting a patient's health control with the features of claim 7, by a computer program product with the features of claim 14, and by a computer readable data carrier with the features of claim 15.
  • a system for supporting a patient's health control comprising a device and a remote computer facility, wherein the remote computer facility comprises an artificial intelligence module (AI module), wherein the device comprises at least one camera configured to record image data of a specific patient's face and a communication module configured to transmit the recorded image data to the remote computer facility, wherein the remote computer facility is configured to receive the recorded image data and to automatically conduct an analysis of the image data using the AI module, wherein the remote computer facility is further configured to determine one health assessment information of a group of pre-defined health assessment information based on this analysis and to provide the determined health assessment information to a pre-defined recipient.
  • AI module: artificial intelligence module
  • the system is directed to the support of the health control of a patient, wherein the patient is a human or animal patient.
  • the specific patient is a single pre-defined person/animal who may have a chronic disease.
  • the system comprises a device and a remote computer facility, wherein the device comprises a communication module in order to transmit recorded image data to the remote computer facility.
  • the remote computer facility comprises a respective receiver configured to receive these image data.
  • the device comprises at least one camera which is configured to record image data of the specific patient's face.
  • the captured image is converted to respective image data according to known methods.
  • a picture of the full face or a picture of a part of the specific patient's face is taken, wherein the face may include the surface of the face and/or the interior of the face's openings, as far as it is accessible from the outside without using further equipment, e.g. the interior of the mouth, e.g. the tongue or the teeth.
  • the camera may be, for example, a CCD camera, a camera of a mobile device, a notebook camera, a web cam or a dedicated medical camera.
  • the image data may comprise a single image, a series of images and/or a video sequence.
  • the device may be a mobile device such as a mobile phone, a smartphone, or similar, or may be a stationary device such as a dedicated image acquisition system, a webcam or a mirror with integrated camera.
  • the image or images may be captured by an app that remains open in the background and may take a facial image whenever the camera detects a human face or the specific patient's face.
  • the device uses its at least one camera to record image data of the specific patient's face regularly (e.g. once a day) or on request by the patient, by an HCP or by another system member such as a medical device (e.g. pacemaker, implantable loop recorder, . . .).
  • the image data of the patient's face may be captured each time the device is activated.
  • the recorded image data are then transmitted by the communication module to the remote computer facility.
  • the communication module of the device provides (one-directional) data transmission to the remote computer facility, for example of image data.
  • data exchange may be bi-directional.
  • Such communication may comprise communication over the air (i.e. wireless, without wire) and/or by wired media.
  • the communication may use inductive magnetic means, acoustic methods (e.g. ultrasound), and/or acoustic, optical and/or electromagnetic waves, for example Bluetooth, WLAN, ZigBee, NFC, Wibree or WiMAX in the radio frequency region, Ethernet, or IrDA or free-space optical communication (FSO) in the infrared or optical frequency region.
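As an illustration of the transmission step described above, the following sketch packages recorded image data into a text-safe payload such as the communication module might send to the remote computer facility. The JSON field names are assumptions for illustration, not part of the disclosure.

```python
import base64
import json
import time

def build_image_payload(image_bytes: bytes, patient_id: str) -> str:
    """Package recorded facial image data as a JSON payload for
    transmission to the remote computer facility (hypothetical format)."""
    return json.dumps({
        "patient_id": patient_id,
        "timestamp": time.time(),
        # Base64 keeps the binary image intact inside a text-based payload.
        "image": base64.b64encode(image_bytes).decode("ascii"),
    })

def decode_image_payload(payload: str) -> bytes:
    """Recover the raw image bytes on the remote computer facility side."""
    return base64.b64decode(json.loads(payload)["image"])
```

The same payload could then be carried over any of the transport media listed above (e.g. WLAN or Bluetooth).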
  • the remote computer facility comprises at least one processor which is regarded as a functional unit of the remote computer facility that interprets and executes instructions comprising an instruction control unit and an arithmetic and logic unit.
  • the remote computer facility is a functional unit that can perform substantial computations, including numerous arithmetic operations and logic operations without human intervention, such as, for example, a personal mobile device (PMD), a desktop computer, a server computer, clusters/warehouse scale computer or embedded system.
  • the at least one processor is connected with the receiver so that the received data are transmitted to the at least one processor for data analysis. At least a part of the at least one processor is used for the algorithms forming the AI module's data analysis.
  • the remote computer facility further may comprise a memory which may include any volatile, non-volatile, magnetic, or electrical media, such as a random access memory (RAM), read-only memory (ROM), non-volatile RAM (NVRAM), electrically-erasable programmable ROM (EEPROM), flash memory, or any other memory device.
  • the remote computer facility receives the transmitted image data from the device by its corresponding receiver. These image data form the input data.
  • the AI module is used to analyse the input image data.
  • the AI module realizes an AI algorithm, wherein an algorithm is a finite set of well-defined rules for the solution of the above problem in a finite number of steps, or a sequence of operations for performing the above and below specific task.
  • the AI algorithm comprises at least one so-called machine learning algorithm, where computer programs (algorithms) learn associations of predictive power from examples in data. Machine learning is most simply the application of statistical models to data using computers. Machine learning uses a broader set of statistical techniques than those typically used in medicine.
  • AI algorithms further comprise so-called deep learning algorithms that are based on models with fewer assumptions about the underlying data and are therefore able to handle more complex data. Deep learning algorithms allow a computer to be fed with large quantities of raw data and to discover the representations necessary for detection or classification. Deep learning algorithms rely on multiple layers of representation of the data with successive transformations that amplify aspects of the input that are important for discrimination and suppress irrelevant variations. Deep learning may be supervised or unsupervised. AI algorithms further comprise supervised learning, which trains computer algorithms to learn associations between inputs and outputs in data through analysis of outputs of interest defined by a (typically human) supervisor. Once associations have been learned based on existing data, they can be used to predict future examples. AI algorithms further comprise unsupervised learning, in which computer algorithms learn associations in data without external definition of associations of interest. Unsupervised learning is able to identify previously undiscovered predictors, as opposed to simply relying on known associations. AI algorithms further comprise reinforcement learning, in which computer algorithms learn actions based on their ability to maximize a defined reward.
  • the AI module comprises a neural network with deep learning and/or a generative adversarial network and/or a self-organizing map. This means that embodiments of machine learning / AI approaches used by the AI module, provided for preparation/training of the AI module prior to the analysis of image data, are:
  • Neural networks with deep learning, i.e. neural networks with more than one hidden layer;
  • GANs: Generative Adversarial Networks;
  • Self-organizing maps such as the Kohonen feature map.
  • the AI module comprises implemented/trained artificial intelligence, e.g. in the form of the above network or map and a respective analysis/assignment algorithm, wherein this artificial intelligence is provided/established prior to starting the analysis of the image data.
  • This training state may be frozen for future image data analysis.
  • the image data may be used for further improvement and/or training of the artificial intelligence, e.g. the network or map and algorithm.
  • This may be realized, for example, by regularly also performing the described analysis using the image data of a training data set and automatically determining the output health assessment information.
  • the AI module is reset to a version referring to the directly preceding training state or any other preceding training state of the AI module.
  • the training of the AI module is provided using image data, health assessment information corresponding to the image data and respective personal and health data (including the pre-defined group of health assessment information) of a representative group of patients, and/or image data and respective personal and health data (including the pre-defined group of health assessment information) of the specific patient for which the above system is to be used later for supporting his/her health control.
  • the AI module is a neural network and the training of the AI module is performed as supervised learning, whereby the AI module is stimulated by input vectors (image data) from the patients and the people in a control group and the AI module has access to the known diagnosis (teaching vector).
  • the training of the AI module is stopped if the output of the AI module meets a predefined quality criterion (e.g. ratio of correct vs. incorrect classification; sensitivity; specificity; false positives; false negatives; etc.).
  • once the network weights are frozen, the AI module is tested on a verification dataset independent of the training data set and released for diagnosis.
  • Weights or network weights are parameters within a neural network that transform input data within the network's hidden layers.
  • the neural network may additionally be trained using image data determined by scientific studies or models (disease-specific face-related visual biomarkers). Such training image data are generated according to patho-physiognomic features of the disease under observation.
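The stop criterion described above can be sketched as follows; the metric thresholds and the class labels are hypothetical, chosen only to illustrate checking sensitivity and specificity against predefined quality criteria before the network weights are frozen.

```python
def quality_metrics(predicted, actual, positive="worse health status"):
    """Sensitivity and specificity of the AI module's classifications
    against the known diagnoses (teaching vectors)."""
    tp = sum(p == positive and a == positive for p, a in zip(predicted, actual))
    fn = sum(p != positive and a == positive for p, a in zip(predicted, actual))
    tn = sum(p != positive and a != positive for p, a in zip(predicted, actual))
    fp = sum(p == positive and a != positive for p, a in zip(predicted, actual))
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return sensitivity, specificity

def training_may_stop(predicted, actual, min_sens=0.9, min_spec=0.9):
    """Stop training (and freeze the network weights) once the output
    meets the predefined quality criteria (assumed thresholds)."""
    sens, spec = quality_metrics(predicted, actual)
    return sens >= min_sens and spec >= min_spec
```

In the same way, further criteria from the list above (false-positive or false-negative rates) could be added to the check.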
  • After provision of machine learning by the AI module, the AI module is used for an analysis of the image data provided by the device. During the analysis, the AI module assesses the received image data and, as a result or output of the analysis, determines one health assessment information of a group of pre-defined health assessment information.
  • the assessment using the AI module may comprise, for example, assignment of the image data to a neural net (e.g. a self-organizing map or Kohonen feature map) provided by the AI module.
  • Each node of the neural net is also assigned to one information of a group of predefined health assessment information, so that if the image data are assigned to one appropriate node of the neural net, the health assessment information assigned to the found node is determined as the one health assessment information being the output of the analysis.
  • the health assessment information group may comprise, for example, the information "better health status", "constant health status", "worse health status", "critical health status, immediate action required", "health status cannot be determined" or similar information, wherein the determined information mirrors the actual health status of the patient as it can be derived from the image data, or an error-like information.
  • Alternative and additional health assessment information are possible.
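A minimal sketch of the node-assignment step described above: image-derived features are matched to the nearest node of a (here hypothetical, hand-filled) self-organizing map, and that node's pre-assigned health assessment information becomes the output of the analysis.

```python
import math

# Hypothetical trained map: each node couples a weight vector (a learned
# facial-feature prototype) with one pre-defined health assessment information.
SOM_NODES = [
    ([0.1, 0.2, 0.1], "better health status"),
    ([0.5, 0.5, 0.5], "constant health status"),
    ([0.9, 0.8, 0.9], "worse health status"),
]

def assess(feature_vector):
    """Assign the image-derived feature vector to the best-matching node
    (smallest Euclidean distance) and return that node's label."""
    _, label = min(SOM_NODES, key=lambda node: math.dist(node[0], feature_vector))
    return label
```

In a real system the node weights would come from the training described above, and the feature vector from the facial attribute extraction.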
  • the determined health information is then provided by the remote computer facility to the pre-defined recipient, for example, the HCP, the patient, a relative and/or representative of the patient. Therefore, the determined information may be transmitted to a computer, a mobile phone, a smartphone or similar device of the HCP, the patient, his/her relative and/or representative.
  • the system provides an easy-to-use way of monitoring the progress of a chronic disease of the specific patient as a remote monitoring system because no additional, complicated medical device is needed. Further, taking facial pictures of the specific patient does not cause any harm to the patient and may be provided even several times a day. Additionally, these image data are easily and fast transmittable to the remote computer facility. In particular, the system may be used for such patients whose chronic disease is affecting the face.
  • the chronic disease may be a cardiac disease (heart failure, coronary heart disease, hypertension), a respiratory disease (e.g. COPD), a mental disease (e.g. depression, eating disorders, alcohol and drug addiction, stress and strain syndrome), a disease of inner organs (liver, biliary tract, pancreas, kidney, diabetes, other metabolic disease . . .) or other diseases.
  • the system allows an easy, compliance promoting and cost-effective follow-up of a chronic disease by remote monitoring, wherein the system has the clinical and health economic advantage of being able to detect deterioration of the specific patient's health status at an early stage thereby providing the possibility to adjust the patient's therapy in good time. This has been shown to improve the prognosis of certain patient populations and save considerable costs for the healthcare system.
  • the AI module may comprise automatic facial attribute extraction from the input image data without the need to know the individual attributes of the face in advance.
  • the intelligence of the AI module, e.g. the neural network (for example, a self-organizing map), is trained using the facial image data of the specific patient over a longer period of time and using additional information about the respective disease status of the specific patient.
  • the input data are e.g. (static) facial image data of the specific patient with a chronic disease at the beginning of a hospital stay (due to a deterioration of his/her health status) as well as facial image data captured during the hospital stay (representing an improvement of his/her health status) and at the end of the hospital stay (representing a further health status of the specific patient).
  • the AI module is stimulated by the output of commercially available face recognition and face attribute extraction algorithms (e.g. the DeepFace library).
  • At least one of the following facial attributes may, for example, be analysed by the remote computer facility using the AI module:
  • input image data (signal) conditioning: filtering, scaling, normalization, white balance, transformation;
  • health assessment information post-processing: clustering, weighting, filtering, plausibility checking. These steps may be performed partly in the device and/or partly or fully by the processor of the remote computer facility, wherein in one embodiment the AI module may perform at least one of the above steps.
  • automated model verification / model health monitoring / model drift monitoring may be implemented for the underlying AI intelligence model that is carried along / continuously learned, in order to ensure the quality of the analysis result.
  • the automated model verification is performed automatically after every training iteration of the AI module, using a validation framework which compares the output of the retrained model with a current validation data set.
  • the retrained model is only released if the validation result fulfils one or more predefined quality criteria (e.g. ratio of correct vs. incorrect classification; sensitivity; specificity; false positives; false negatives; etc.).
  • the automatic model health monitoring is a verification step to ensure the technical robustness and performance of the retrained AI module and includes steps to check the necessary computational power for model execution, memory load, structural analysis and comparable automated steps to evaluate the technical state of the AI module.
  • For automated model drift monitoring, the retrained AI module is stimulated with a historical data set that is always the same, and the output is statistically compared with the results of the previous model iterations and evaluated (trending). If this trend deviates from a predefined range of expectations, the AI model cannot be used for clinical diagnostics.
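The drift-monitoring trend check might look like the following sketch, assuming the scores of previous model iterations on the fixed historical data set are available; the three-sigma band stands in for the predefined range of expectations.

```python
from statistics import mean, stdev

def drift_check(previous_scores, current_score, max_sigma=3.0):
    """Compare the retrained model's score on the fixed historical data
    set with the trend of previous iterations; return False (model may
    not be used for clinical diagnostics) if it deviates from the
    expected range (assumed here: mean +/- max_sigma standard deviations)."""
    mu = mean(previous_scores)
    sigma = stdev(previous_scores)
    deviates = abs(current_score - mu) > max_sigma * sigma
    return not deviates  # True: the retrained model may be released
```

A degenerate history (all identical scores) would make the band zero-width, so a production check would also enforce a minimum tolerance.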
  • the processor of the remote computer facility implements a data / model governance layer to make the medical application traceable and audit-proof (documentation).
  • all training and validation processes of the AI model are fully documented over the entire product life cycle in an automated software layer. This includes all information required to reconstruct the AI model in any state used for clinical diagnostics at any time (AI architecture; training vectors; verification results; usage data; ).
  • the analysis of the image data may comprise face recognition in order to provide identification of the specific patient for assignment of the input image data to the specific patient.
  • For face recognition, known methods may be used, for example Face ID (Apple Inc.) or DeepFace (Facebook).
  • the determined health assessment information may be displayed on a display unit of the remote computer facility or on a display unit connected to the remote computer facility in order to show the HCP the determined health assessment information.
  • the display unit may be formed by a computer monitor or screen having, for example, an electroluminescent (EL) display, a liquid crystal (LC) display, a light emitting diode (LED) display, an organic light emitting diode (OLED) display, an active matrix organic light emitting diode (AMOLED) display, a plasma (P) display or a quantum dot (QD) display.
  • the processor of the remote computer facility transmits the health assessment information to be displayed to the respective display unit. Accordingly, the display unit shows an accurate picture of the health condition of the patient.
  • the remote computer facility is configured to additionally consider sensor data provided by at least one sensor measuring at least one bodily parameter (physiological parameter) of the specific patient in the image data analysis using the AI module, wherein the measurement of the at least one bodily parameter may be time correlated with the corresponding recordal of the image data of the specific patient.
  • the measurement of the at least one bodily parameter may be provided within a pre-defined time interval around the corresponding image data recordal, e.g. within a time interval of 1 hour or 10 minutes.
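The time-correlation requirement above can be expressed as a simple window check; the one-hour default mirrors the interval mentioned in the text, and the function names are illustrative only.

```python
from datetime import datetime, timedelta

def is_time_correlated(image_time: datetime, sensor_time: datetime,
                       window: timedelta = timedelta(hours=1)) -> bool:
    """A sensor measurement is only considered in the analysis if it
    falls within the pre-defined interval around the image data recordal."""
    return abs(image_time - sensor_time) <= window
```

The same check applies to answers to anamnestic questions, which are likewise required to be closely time correlated with the image recordal.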
  • the device and the medical device, the latter being different from the device, may communicate with each other.
  • the device or a medical device forming an additional element of the system may provide the at least one sensor measuring at least one bodily parameter of the specific patient, for example, ECG, activity, blood glucose value, blood pressure value, body temperature, respiratory rate.
  • the sensor data are transmitted to the remote computer facility using the communication module of the device or a similar communication module of the medical device.
  • the sensor data are then received by a corresponding receiver of the remote computer facility and transmitted to its processor for analysis.
  • the sensor data may additionally be used in the analysis of the AI module, for example, the sensor data being an additional parameter that helps to assign the image data to a corresponding node in the neural network. Accordingly, the usage of sensor data of at least one bodily parameter increases the accuracy of the analysis and therefore the reliability of the output health assessment information.
  • sensor data of the at least one bodily parameter are included in the preparation / training of the AI intelligence of the AI module prior to and/or during usage of the system.
  • the device further comprises a microphone and/or input means configured to record at least one answer of the specific patient with regard to at least one pre-defined anamnestic question queried by the device in connection with and closely time correlated with the corresponding recordal of the image data of the specific patient, wherein the device is further configured to convert the specific patient's at least one answer into self-assessment data of the specific patient, wherein the communication module is configured to transmit the self-assessment data of the specific patient to the remote computer facility, wherein the remote computer facility is further configured to additionally consider the self-assessment data in the image data analysis using the AI module.
  • the recordal of the at least one answer is provided within a pre-defined time interval around the corresponding image data recordal, e.g.
  • Such pre-defined anamnestic questions may comprise the following: What is your somatic medical history? What is your mental history? What symptoms are present? What medications are you taking?
  • the questions may be provided to the patient by the loudspeaker of the device or displayed to the patient on a screen/display of the device.
  • the answer of the patient may be speech data or text data inputted using a keypad/keyboard.
  • the speech data may comprise a system-guided voice dialog with the specific patient in the format of a brief medical history.
  • Converting the specific patient's at least one answer into self-assessment data of the specific patient may comprise A/D conversion of the sound data and/or of text data, extraction of the relevant information (e.g. from the sound data, for example comprising text conversion).
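A simplified sketch of the conversion step above, operating on an answer that has already been speech-to-text converted; the keyword vocabulary is hypothetical and stands in for the real extraction of relevant information.

```python
# Hypothetical symptom vocabulary; a real extraction would follow the
# anamnestic questionnaire actually used by the device.
SYMPTOM_KEYWORDS = {"breathless", "dizzy", "fatigue", "swelling", "pain"}

def to_self_assessment(answer_text: str) -> dict:
    """Convert the patient's transcribed answer into structured
    self-assessment data by extracting symptom mentions."""
    words = {w.strip(".,!?").lower() for w in answer_text.split()}
    return {
        "raw_answer": answer_text,
        "symptoms": sorted(words & SYMPTOM_KEYWORDS),
    }
```

The resulting structured data could then be transmitted alongside the image data and considered in the AI analysis.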
  • self-assessment data are included in the preparation / training of the AI intelligence of the AI module prior to and/or during usage of the system.
  • the recordal of image data and, if applicable, the measurement of the at least one bodily parameter by the at least one sensor and/or the recordal of at least one answer of the specific patient with regard to the at least one pre-defined anamnestic question is provided in regular intervals, e.g. every week or every day around the same time, and/or on demand by the specific patient and/or the HCP, for example in a pre-defined time interval if a heart attack occurred. This improves comparability of these data.
  • the device is a mobile device, for example a smartphone, or a stationary device, for example a web cam or a mirror with at least one camera.
  • the remote computer facility comprises an artificial intelligence module (AI module), wherein the device comprises at least one camera recording image data of a specific patient's face and a communication module transmitting the recorded image data to the remote computer facility, wherein the remote computer facility receives the recorded image data and automatically conducts an analysis of the image data using the AI module, wherein the remote computer facility determines one health assessment information of a group of pre-defined health assessment information based on this analysis and provides the determined health assessment information to a pre-defined recipient.
  • the above method is, for example, realized as a computer program which is a combination of above and below specified computer instructions and data definitions that enable computer hardware to perform computational or control functions, or which is a syntactic unit that conforms to the rules of a particular programming language and that is composed of declarations and statements or instructions needed for an above and below specified function, task, or problem solution.
  • additionally, sensor data provided by at least one sensor measuring at least one bodily parameter of the specific patient are considered in the image data analysis using the AI module, wherein, in one embodiment, the measurement of the at least one bodily parameter may be time correlated with the corresponding recordal of the image data of the specific patient.
  • the device further comprises a microphone and/or input means recording at least one answer of the specific patient with regard to at least one predefined anamnestic question queried by the device in connection with and closely time correlated with the corresponding recordal of the image data of the specific patient, wherein the device converts the at least one answer into self-assessment data of the specific patient, wherein the communication module transmits the self-assessment data of the specific patient to the remote computer facility, wherein the remote computer facility additionally considers the self-assessment data in the image data analysis using the AI module.
  • the recordal of image data and, if applicable, the measurement of the at least one bodily parameter by the at least one sensor and/or the recordal of at least one answer of the specific patient with regard to the at least one predefined anamnestic question is provided in regular intervals and/or on demand by the specific patient and/or the HCP.
  • the device is a mobile device, for example a smartphone, or a stationary device, for example a web cam or a mirror with at least one camera.
  • the AI module comprises a neural network with deep learning and/or a generative adversarial network and/or a self-organizing map.
  • the above method is, for example, realized as a computer program (to be executed at or within the remote computer facility and/or the medical device and/or the device, in particular utilizing their processors) which is a combination of above and below specified (computer) instructions and data definitions that enable computer hardware or a communication system to perform computational or control functions and/or operations, or which is a syntactic unit that conforms to the rules of a particular programming language and that is composed of declarations and statements or instructions needed for an above and below specified function, task, or problem solution.
  • Fig. 1 shows one embodiment of a system for supporting a patient's health control.
  • Fig. 2 depicts one embodiment of the image and sensor data analysis using the AI module of the embodiment of Fig. 1.
  • Fig. 1 shows one embodiment of a system 100 for supporting a patient's 110 health control comprising a device, for example a smartphone 130, and a remote computer facility 120 comprising an AI module 121.
  • the smartphone 130 comprises a camera 131 and a communication module 133.
  • the system 100 further comprises a medical device, for example a pacemaker 140, implanted within the patient 110.
  • the patient 110 may have a chronic disease such as heart failure.
  • the pacemaker 140 regularly monitors a bodily parameter, for example by providing an ECG recording 160 at regular intervals, for example twice a day.
  • the pacemaker 140 comprises a sensor (electrodes and the respective hardware and software) measuring the electrical ECG signals of the patient's heart.
  • the patient's face is captured by camera 131 of the smartphone.
  • an app of the smartphone 130 requests the patient 110 to take a picture of his/her face.
  • the recording of the patient's face is done using another smartphone app, which takes the picture of the patient's face whenever he/she activates the smartphone 130.
  • Image data 150 of the patient's face resulting therefrom are then transmitted (see arrow 135) to the remote computer facility 120.
  • the pacemaker 140 transmits the ECG data 160 to the remote computer facility 120 as well (see arrow 145).
  • the image data 150 and the ECG data 160 are transmitted to the AI module 121 where they are automatically analysed, AI-based, with regard to the progression of the chronic disease.
  • Figure 2 depicts an example of operation of the remote computer facility 120 comprising the AI module 121 with regard to the embodiment of the system 100 shown in Fig. 1.
  • the image data 150 and the ECG data 160 of the patient 110 are received by the remote computer facility 120.
  • the image data 150 are processed by a pre-processing unit 220 which scales the image data into a uniform format (for example with regard to resolution, colour depth, white balance, ...). Additionally, the pre-processing unit 220 may identify the specific patient 110 using a common face recognition method.
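The scaling step described above can be sketched as follows. This is a minimal, hypothetical illustration (pure Python, grayscale images as nested lists, nearest-neighbour sampling); a real pre-processing unit 220 would use an image library and also handle colour depth and white balance:

```python
def preprocess(image, target_w=64, target_h=64, max_val=255):
    """Scale a grayscale image (list of pixel rows) to a uniform
    resolution via nearest-neighbour sampling and normalize pixel
    values to [0, 1], as the pre-processing unit 220 might do."""
    src_h, src_w = len(image), len(image[0])
    out = []
    for y in range(target_h):
        src_y = min(int(y * src_h / target_h), src_h - 1)
        row = []
        for x in range(target_w):
            src_x = min(int(x * src_w / target_w), src_w - 1)
            row.append(image[src_y][src_x] / max_val)
        out.append(row)
    return out
```

Every input image, whatever its original resolution, thus arrives at the AI module with the same dimensions and value range.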
  • the pre-processed image data and the ECG data 160 are provided to the AI module 121 comprising a trained neural network in the form of a Kohonen feature map.
  • the AI module 121 is trained as indicated above using image data of the specific patient 110 as well as ECG data, personal data, health data and health assessment information of the same patient.
  • the neural network consists of an input neuron layer 230, whose dimension is adapted to the dimension of the pre-processed image (the normalized output from the pre-processing unit 220).
  • the input data are forwarded and finally output to the output feature map 250 in clusters 251, 252, 253, 254.
  • the following health assessment information can be distinguished by a downstream classification unit 260 of the AI module 121 for the respective cluster: cluster 251: "better health status", cluster 252: "worse health status", cluster 253: "constant health status", and cluster 254: "health status cannot be determined", visualized by the coloration/pattern assigned to the respective cluster 251 to 254.
  • the AI module 121 may be a neural network and the training of the AI module may be performed as supervised learning, whereby the AI module 121 is stimulated by input vectors (image data 150) from the patients and people in a control group, and the AI module 121 has access to the known diagnosis (teaching vector).
  • the training of the AI module 121 is stopped once the output of the AI module 121 meets a predefined quality criterion (e.g. ratio of correct vs. incorrect classification; sensitivity; specificity; false positives; false negatives; etc.).
  • after training, the network weights are frozen and the AI module 121 is tested on a verification data set independent of the training data set and released for diagnosis.
  • the results of the classification unit 260 may be made available to an HCP, to the patient 110 or to (caring) relatives, for example by displaying them on a screen. Then, depending on the underlying disease to be monitored, necessary therapeutic steps are initiated for the patient 110 (e.g. reminder to take medication; contacting the patient; adjusting medication; calling in the family HCP; hospitalization; ...).
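The routing of a classification result to a follow-up step might, purely as an illustration, look like the following dispatch table; the action strings merely echo the examples given in the text and the mapping itself is a hypothetical sketch, not part of the patented system:

```python
# Hypothetical mapping from the classification unit's output to a
# follow-up action; labels follow clusters 251-254 in the text.
ACTIONS = {
    "better health status": "no action required",
    "constant health status": "no action required",
    "worse health status": "contact the patient and review medication",
    "health status cannot be determined": "request a new image recording",
}

def next_step(health_assessment: str) -> str:
    """Return the follow-up action for a determined health assessment;
    anything unknown is escalated to the HCP for manual review."""
    return ACTIONS.get(health_assessment, "forward to HCP for manual review")
```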
  • the above method of data analysis using the AI module 121 may be provided by using image data 150 of the patient 110 only or by using image data 150, ECG data 160 and additional sensor data of a bodily parameter. The training of the AI module 121 is then adapted to the respective data situation.
  • the method and system described above provide a non-invasive, automatable means of supporting health control, applicable to a large number of patients, for the follow-up of a chronic disease affecting the human face, in particular for the follow-up of chronic heart failure.
  • at least the image data of a specific patient's face are analyzed; the disease must therefore have a visual effect on the patient's face.


Abstract

A system (100) for supporting a patient's (110) health control is described comprising a device (130) and a remote computer facility (120). The non-invasive system allows easy and cost-efficient follow-up of a chronic disease of the patient (110). The remote computer facility comprises an artificial intelligence module (AI module, 121), wherein the device comprises at least one camera (131) configured to record image data (150) of a specific patient's face and a communication module (133) configured to transmit the recorded image data to the remote computer facility, wherein the remote computer facility (120) is configured to receive the recorded image data and to automatically conduct an analysis of the image data using the AI module (121), wherein the remote computer facility is further configured to determine one health assessment information of a group of pre-defined health assessment information related to diseases visually affecting the patient's face based on this analysis and to provide the determined health assessment information to a pre-defined recipient. A respective method is described, as well.

Description

System and method for supporting a patient's health control
The invention is directed to a system comprising a device and a remote computer facility for supporting a patient's health control and to a respective method as well as to a computer program product and to a computer readable data carrier. The computer program product may be a software routine, e.g. related to hardware support means within the remote computer facility.
It is known that data related to a patient's health are collected by implanted and/or extracorporeally located medical devices and analyzed in a remote monitoring system in order to adapt the patient's therapy. Often these data are produced by specific physiological sensors (e.g. ECG, impedance, activity, posture, heart sounds, pressure, respiration, ...) which are cost-intensive and contained in complicated medical devices. In order to obtain precise and locally identifiable results from such data, in many cases the medical device needs to be implanted into the patient's body. An implanted medical device may, however, impair the patient's organism. Examples of such medical devices are a pacemaker (with leads), an implantable loop recorder, an Implantable Leadless Pacer (ILP), an Implantable Leadless Pressure Sensor (ILPS), an Implantable Cardiac Defibrillator (ICD) or an implantable non-transvenous defibrillator, which contain sensors that collect physiological signals and transmit them as data to a physician device or to a remote server where they can be viewed. Further, long-term trends of each of these data are visualized to help guide patient care. The signals are generally processed and presented independently of the other numerous signals or health data.
Further, there are state-of-the-art solutions which generally treat each physiological data stream independently and present it to the end-user as such. Treating each physiological signal independently provides an inaccurate picture in certain cases. Further, presenting numerous signals independently can result in "information overload" for the patient or health care practitioner (HCP), and may not provide a holistic and clear picture of the patient's health condition. As an example, the collection of physiological signals may be displayed as trends to the end-user, e.g. one trend for heart sounds amplitudes, another for DC impedance, another for posture, etc. Interpreting and using the numerous trends from all the physiological data (signals) to create an actionable health care plan for the patient can be difficult.
It is therefore desirable to provide a more cost-efficient, less complicated and less impairing system which provides holistic and easily understandable information on the health condition of a specific patient.
The above problem is solved by a system for supporting a patient's health control comprising a device and a remote computer facility with the features of claim 1, by a method for supporting a patient's health control with the features of claim 7, by a computer program product with the features of claim 14, and by a computer readable data carrier with the features of claim 15.
In particular, the problem is solved by a system for supporting a patient's health control comprising a device and a remote computer facility, wherein the remote computer facility comprises an artificial intelligence module (AI module), wherein the device comprises at least one camera configured to record image data of a specific patient's face and a communication module configured to transmit the recorded image data to the remote computer facility, wherein the remote computer facility is configured to receive the recorded image data and to automatically conduct an analysis of the image data using the AI module, wherein the remote computer facility is further configured to determine one health assessment information of a group of pre-defined health assessment information based on this analysis and to provide the determined health assessment information to a pre-defined recipient.
The system is directed to the support of the health control of a patient, wherein the patient is a human or animal patient. The specific patient is a single pre-defined person/animal who may have a chronic disease. The system comprises a device and a remote computer facility, wherein the device comprises a communication module in order to transmit recorded image data to the remote computer facility. Accordingly, the remote computer facility comprises a respective receiver configured to receive these image data.
The device comprises at least one camera which is configured to record image data of the specific patient's face. The captured image is converted to respective image data according to known methods. In one embodiment a picture of the full face or a picture of a part of the specific patient's face is taken, wherein the face may include the surface of the face and/or the interior of the face's openings, as far as it is accessible from the outside without using further equipment, e.g. the interior of the mouth, e.g. the tongue or the teeth. The camera may be, for example, a CCD camera, a camera of a mobile device, a notebook camera, a webcam or a dedicated medical camera. The image data may comprise a single image, a series of images and/or a video sequence. The device may be a mobile device such as a mobile phone, a smartphone or similar, or may be a stationary device such as a dedicated image acquisition system, a webcam or a mirror with integrated camera. When using a mobile phone or a smartphone, the image or images may be captured by an app that remains open in the background and may take a facial image whenever the camera detects a human face or the specific patient's face. The device uses its at least one camera to record image data of the specific patient's face regularly (e.g. once a day) or on request by the patient, by an HCP or by another system member such as a medical device (e.g. pacemaker, implantable loop recorder, ...). Alternatively, the image data of the patient's face may be captured each time the device is activated. The recorded image data are then transmitted by the communication module to the remote computer facility.
The communication module of the device provides (one-directional) data transmission to the remote computer facility, for example of image data. In one embodiment data exchange may be bi-directional. Such communication may comprise communication over the air (i.e. wireless, without wires) and/or by wired media. The communication may use inductive magnetic means, acoustic methods (e.g. ultrasound), and/or optical and/or electromagnetic waves, for example Bluetooth, WLAN, ZigBee, NFC, Wibree or WiMAX in the radio frequency region, Ethernet, or IrDA or free-space optical communication (FSO) in the infrared or optical frequency region.
The remote computer facility comprises at least one processor which is regarded as a functional unit of the remote computer facility that interprets and executes instructions, comprising an instruction control unit and an arithmetic and logic unit. The remote computer facility is a functional unit that can perform substantial computations, including numerous arithmetic operations and logic operations without human intervention, such as, for example, a personal mobile device (PMD), a desktop computer, a server computer, a cluster/warehouse-scale computer or an embedded system. The at least one processor is connected to the receiver so that the received data are transmitted to the at least one processor for data analysis. At least a part of the at least one processor is used for the algorithms forming the AI module's data analysis.
The remote computer facility further may comprise a memory which may include any volatile, non-volatile, magnetic, or electrical media, such as a random access memory (RAM), read-only memory (ROM), non-volatile RAM (NVRAM), electrically-erasable programmable ROM (EEPROM), flash memory, or any other memory device.
According to the invention, the remote computer facility receives the transmitted image data from the device by its corresponding receiver. These image data form the input data. Then, the AI module is used to analyse the input image data. The AI module realizes an AI algorithm, wherein an algorithm is a finite set of well-defined rules for the solution of the above problem in a finite number of steps or a sequence of operations for performing the task specified above and below. The AI algorithm comprises at least one so-called machine learning algorithm where computer programs (algorithms) learn associations of predictive power from examples in data. Machine learning is most simply the application of statistical models to data using computers. Machine learning uses a broader set of statistical techniques than those typically used in medicine. AI algorithms further comprise so-called deep learning algorithms that are based on models with fewer assumptions about the underlying data and are therefore able to handle more complex data. Deep learning algorithms allow a computer to be fed with large quantities of raw data and to discover the representations necessary for detection or classification. Deep learning algorithms rely on multiple layers of representation of the data with successive transformations that amplify aspects of the input that are important for discrimination and suppress irrelevant variations. Deep learning may be supervised or unsupervised. AI algorithms further comprise supervised learning: training computer algorithms to learn associations between inputs and outputs in data through analysis of outputs of interest defined by a (typically human) supervisor. Once associations have been learned based on existing data, they can be used to predict future examples. AI algorithms further comprise unsupervised learning: computer algorithms that learn associations in data without external definition of associations of interest.
Unsupervised learning is able to identify previously undiscovered predictors, as opposed to simply relying on known associations. AI algorithms further comprise reinforcement learning: computer algorithms that learn actions based on their ability to maximize a defined reward.
In one embodiment, the AI module comprises a neural network with deep learning and/or a generative adversarial network and/or a self-organizing map. This means that embodiments of machine learning / AI approaches used by the AI module, provided for preparation/training of the AI module prior to the analysis of image data, are:
Neural networks with deep learning, i.e. neural networks with more than one hidden layer:
- Feedforward networks with multiple hidden layers (well suited, since relatively easy to implement and train),
- Convolutional neural networks, for example for applications in image, speech and biosignal recognition (i.e. multi-dimensional input vectors, also quite easy to handle), or
- Recurrent neural networks (more challenging to train).
Alternatively, the AI module may use:
- GANs (Generative Adversarial Networks), which are a good learning approach since they may be trained with fewer training data sets (a smaller patient population) and since the network can also be trained with "statistically" generated data, or
- self-organizing maps (such as the Kohonen feature map), which may be used as tools for the automated identification of facial features and are considered to be quite robust for this application.
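As an illustrative toy sketch of the self-organizing-map alternative (not the actual network of the described system), the following pure-Python code trains a tiny Kohonen map and locates the best matching unit for an input vector; the grid size, decay schedule and all names are assumptions made for the sketch:

```python
import math
import random

def best_matching_unit(grid, x):
    """Grid coordinates of the node whose weight vector is closest
    (squared Euclidean distance) to the input vector x."""
    best, best_d = (0, 0), float("inf")
    for gy, row in enumerate(grid):
        for gx, w in enumerate(row):
            d = sum((wi - xi) ** 2 for wi, xi in zip(w, x))
            if d < best_d:
                best, best_d = (gy, gx), d
    return best

def train_som(data, grid_w=4, grid_h=4, epochs=300, seed=0):
    """Train a tiny self-organizing (Kohonen) map on feature vectors;
    learning rate and neighbourhood radius decay linearly."""
    rng = random.Random(seed)
    dim = len(data[0])
    grid = [[[rng.random() for _ in range(dim)] for _ in range(grid_w)]
            for _ in range(grid_h)]
    for t in range(epochs):
        lr = 0.5 * (1.0 - t / epochs)
        radius = max(1.0, (max(grid_w, grid_h) / 2.0) * (1.0 - t / epochs))
        x = rng.choice(data)
        by, bx = best_matching_unit(grid, x)
        for gy in range(grid_h):
            for gx in range(grid_w):
                d = math.hypot(gy - by, gx - bx)
                if d <= radius:
                    # Gaussian neighbourhood: nodes near the winner move more
                    influence = math.exp(-(d * d) / (2.0 * radius * radius))
                    w = grid[gy][gx]
                    for i in range(dim):
                        w[i] += lr * influence * (x[i] - w[i])
    return grid
```

In the described system the input vectors would be pre-processed facial image data rather than the two-dimensional toy vectors used here.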
The AI module comprises implemented/trained artificial intelligence, e.g. in the form of the above network or map and a respective analysis/assignment algorithm, wherein this artificial intelligence is provided/established prior to starting the analysis of the image data. This training state may be frozen for future image data analysis. Alternatively, during analysis of the image data, the image data may be used for further improvement and/or training of the artificial intelligence, e.g. of the network or map and algorithm. In the latter case, however, it may be necessary to implement a continuous quality assurance process in parallel to avoid an unwanted deviation of the artificial intelligence in an incorrect direction. This may be realized, for example, by regularly also performing the described analysis using the image data of a training data set and automatically determining the output health assessment information. In case the output health assessment information using the improved AI module intelligence considerably deviates from the output health assessment information assigned to the training data set, the AI module is reset to a version referring to the directly preceding training state or any other preceding training state of the AI module.
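The continuous quality assurance step described above could be sketched as follows; `model_fn`, the agreement threshold and the shape of the reference set are hypothetical placeholders, not details taken from the description:

```python
def quality_check(model_fn, reference_set, min_agreement=0.9):
    """Re-run the analysis on a labelled reference (training) data set
    and report whether the continuously re-trained model still agrees
    with the assigned health assessment information. If the check
    fails, the caller would reset the AI module to a preceding
    training state.

    reference_set: iterable of (sample, expected_assessment) pairs.
    Returns (passed, agreement_ratio)."""
    correct = sum(1 for sample, expected in reference_set
                  if model_fn(sample) == expected)
    agreement = correct / len(reference_set)
    return agreement >= min_agreement, agreement
```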
In one embodiment, the training of the AI module is provided using image data, health assessment information corresponding to the image data and respective personal and health data (including the pre-defined group of health assessment information) of a representative group of patients and/or image data and respective personal and health data (including the pre-defined group of health assessment information) of the specific patient for which the above system is to be used later for supporting his/her health control.
In one embodiment, the AI module is a neural network and the training of the AI module is performed as supervised learning, whereby the AI module is stimulated by input vectors (image data) from the patients and the people in a control group and the AI module has access to the known diagnosis (teaching vector). The training of the AI module is stopped once the output of the AI module meets a predefined quality criterion (e.g. ratio of correct vs. incorrect classification; sensitivity; specificity; false positives; false negatives; etc.). After training is complete, the network weights are frozen, the AI module is tested on a verification data set independent of the training data set and released for diagnosis. Weights or network weights are parameters within a neural network that transform input data within the network's hidden layers.
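The quality criteria named above (sensitivity, specificity, ratio of correct vs. incorrect classification) can be computed from a confusion matrix; here is a minimal sketch with an illustrative stopping check, where the positive class label and the thresholds are assumptions made for the example:

```python
def quality_metrics(predictions, labels, positive="worse health status"):
    """Confusion-matrix based quality criteria for one positive class."""
    tp = sum(p == positive == l for p, l in zip(predictions, labels))
    tn = sum(p != positive and l != positive
             for p, l in zip(predictions, labels))
    fp = sum(p == positive != l for p, l in zip(predictions, labels))
    fn = sum(p != positive == l for p, l in zip(predictions, labels))
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    accuracy = (tp + tn) / len(labels)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "accuracy": accuracy}

def should_stop(predictions, labels, min_sens=0.9, min_spec=0.9):
    """Stop training once the predefined quality criterion is met."""
    m = quality_metrics(predictions, labels)
    return m["sensitivity"] >= min_sens and m["specificity"] >= min_spec
```

A training loop would call `should_stop` after each iteration and, once it returns true, freeze the weights and move on to the independent verification data set.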
In a special preparation/training method of the AI module, the neural network may additionally be trained using image data determined by scientific studies or models (disease-specific face-related visual biomarkers). Such training image data are generated according to patho-physiognomic features of the disease under observation.
After provision of machine learning by the AI module, the AI module is used for an analysis of the image data provided by the device. During the analysis the AI module assesses the received image data and, as a result or output of the analysis, determines one health assessment information of a group of pre-defined health assessment information. The assessment using the AI module may comprise, for example, assignment of the image data to a neural net (e.g. self-organizing map or Kohonen feature map) provided by the AI module. Each node of the neural net is also assigned to one information of a group of pre-defined health assessment information, so that if the image data are assigned to one appropriate node of the neural net, the health assessment information assigned to the found node is determined as the one health assessment information being the output of the analysis. The health assessment information group may comprise, for example, the information "better health status", "constant health status", "worse health status", "critical health status, immediate action required", "health status cannot be determined" or similar information, wherein the determined information mirrors the actual health status of the patient as it can be derived from the image data, or an error-like information. Alternative and additional health assessment information are possible. The determined health information is then provided by the remote computer facility to the pre-defined recipient, for example, the HCP, the patient, a relative and/or representative of the patient. Therefore, the determined information may be transmitted to a computer, a mobile phone, a smartphone or similar device of the HCP, the patient, his/her relative and/or representative. The system provides an easy-to-use way of monitoring the progress of a chronic disease of the specific patient as a remote monitoring system because no additional, complicated medical device is needed.
Further, taking facial pictures of the specific patient does not cause any harm to the patient and may be provided even several times a day. Additionally, these image data are easily and quickly transmittable to the remote computer facility. In particular, the system may be used for such patients whose chronic disease is affecting the face. Since at least image data of a specific patient's face are analyzed, it should be clear that the disease visually affects the patient's face. The system makes it possible to assess the progress of the chronic disease as well as to support diagnostics and therapy decisions with regard to the chronic disease and/or chronic pain syndrome. The chronic disease may be a cardiac disease (heart failure, coronary heart disease, hypertension), a respiratory disease (e.g. COPD), a mental disease (e.g. depression, eating disorders, alcohol and drug addiction, stress and strain syndrome), a disease of inner organs (liver, biliary tract, pancreas, kidney, diabetes, other metabolic diseases, ...) or other diseases.
The system allows an easy, compliance-promoting and cost-effective follow-up of a chronic disease by remote monitoring, wherein the system has the clinical and health economic advantage of being able to detect deterioration of the specific patient's health status at an early stage, thereby providing the possibility to adjust the patient's therapy in good time. This has been shown to improve the prognosis of certain patient populations and save considerable costs for the healthcare system.
In one embodiment, the AI module may comprise automatic facial attribute extraction from the input image data without the need to know the individual attributes of the face in advance. In this case, for example, the intelligence of the AI module, e.g. the neural network (for example, a self-organizing map), is trained using the facial image data of the specific patient over a longer period of time and using additional information about the respective disease status of the specific patient. The input data are e.g. (static) facial image data of the specific patient with a chronic disease at the beginning of a hospital stay (due to a deterioration of his/her health status) as well as facial image data captured during the hospital stay (representing an improvement of his/her health status) and at the end of the hospital stay (representing a further health status of the specific patient). During such training, the AI module is stimulated by the output of commercially available face recognition and face attribute extraction algorithms (e.g. the DeepFace library).
At least one of the following facial attributes may, for example, be analysed by the remote computer facility using the AI module:
- mouth shape and/or lip shape (open, closed, pressing lips, ...),
- facial symmetry and/or expression,
- opening width of the eyelids,
- skin colour, pigmentation, rash,
- colour of lips,
- occurrence of a swelling,
- shape of the nose,
- nose wing curvature,
- eye colour, size of the pupils,
- size of a lacrimal sac,
- eye movement, blinking, twitching of the eyelids,
- opening width of the eyelid,
- shape, colour and/or degree of coating of the tongue,
- movement patterns when breathing and/or speaking,
- forehead wrinkles, nasal and/or other facial wrinkles,
- facial movements, face twitching,
- sweat.
In one embodiment, input image data (signal) conditioning (filtering, scaling, normalization, white balance, transformation, ...) and health assessment information post-processing (clustering, weighting, filtering, plausibility checking, ...) may be performed partly in the device and/or partly or fully by the processor of the remote computer facility, wherein in one embodiment the AI module may perform at least one of the above steps.
In one embodiment, automated model verification / model health monitoring / model drift monitoring may be implemented for the underlying AI intelligence model that is carried along / continuously learned in order to ensure the quality of the analysis result. The automated model verification is performed automatically after every training iteration of the AI module, using a validation framework which compares the output of the retrained model with a current validation data set. The retrained model is only released if the validation result fulfils one or more predefined quality criteria (e.g. ratio of correct vs. incorrect classification; sensitivity; specificity; false positives; false negatives; ...). The automatic model health monitoring is a verification step to ensure the technical robustness and performance of the retrained AI module and includes steps to check the necessary computational power for model execution, memory load, structural analysis and comparable automated steps to evaluate the technical state of the AI module. For automated model drift monitoring, the retrained AI module is stimulated with a historical data set that is always the same and the output is statistically compared with the results of the previous model iterations and evaluated (trending). If this trend deviates from a predefined range of expectations, the AI model cannot be used for clinical diagnostics.
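A simple statistical drift check of the kind described could be sketched as follows, under the assumption that each model iteration produces one scalar score on the fixed historical data set (the z-score test and its threshold are illustrative choices, not taken from the description):

```python
from statistics import mean, pstdev

def drift_detected(historical_scores, new_score, z_threshold=3.0):
    """Compare the retrained model's score on the fixed historical data
    set with the scores of previous model iterations; flag drift when
    the new score leaves the expected range (here: a z-score test).
    A drifted model would be withheld from clinical diagnostics."""
    mu = mean(historical_scores)
    sigma = pstdev(historical_scores)
    if sigma == 0:
        return new_score != mu
    return abs(new_score - mu) / sigma > z_threshold
```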
In one embodiment, the processor of the remote data facility implements a data / model governance layer to make the medical application traceable and audit-proof (documentation). For this purpose, all training and validation processes of the AI model are fully documented over the entire product life cycle in an automated software layer. This includes all information required to reconstruct the AI model in any state used for clinical diagnostics at any time (AI architecture; training vectors; verification results; usage data; ...).
In one embodiment, the analysis of the image data may comprise face recognition in order to provide identification of the specific patient for assignment of the input image data to the specific patient. For face recognition, known methods may be used, for example Face ID (Apple Inc.) or DeepFace (Facebook).
In one embodiment the determined health assessment information may be displayed on a display unit of the remote computer facility or on a display unit connected to the remote computer facility in order to show the HCP the determined health assessment information. The display unit may be formed by a computer monitor or screen having, for example, an electroluminescent (EL) display, a liquid crystal (LC) display, a light emitting diode (LED) display, an organic light emitting diode (OLED) display, an active matrix organic light emitting diode (AMOLED) display, a plasma (P) display or a quantum dot (QD) display. The processor of the remote computer facility transmits the health assessment information to be displayed to the respective display unit. Accordingly, the display unit shows an accurate picture of the health condition of the patient.
In one embodiment, the computer facility is configured to additionally consider sensor data provided by at least one sensor measuring at least one bodily parameter (physiological parameter) of the specific patient in the image data analysis using the AI module, wherein the measurement of the at least one bodily parameter may be time correlated with the corresponding recordal of the image data of the specific patient. For example, the measurement of the at least one bodily parameter may be provided within a pre-defined time interval around the corresponding image data recordal, e.g. within a time interval of 1 hour or 10 minutes. In order to provide this correlation, the device and the medical device (being different from the device) may communicate with each other. In this embodiment, the device or a medical device forming an additional element of the system may provide the at least one sensor measuring at least one bodily parameter of the specific patient, for example, ECG, activity, blood glucose value, blood pressure value, body temperature or respiratory rate. After measurement, the sensor data are transmitted to the remote computer facility using the communication module of the device or a similar communication module of the medical device. The sensor data are then received by a corresponding receiver of the remote computer facility and transmitted to its processor for analysis. The sensor data may additionally be used in the analysis of the AI module, for example as an additional parameter that helps to assign the image data to a corresponding node in the neural network. Accordingly, the usage of sensor data of at least one bodily parameter increases the accuracy of the analysis and therefore the reliability of the output health assessment information. For this embodiment, sensor data of the at least one bodily parameter are included in the preparation / training of the AI intelligence of the AI module prior to and/or during usage of the system.
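The time correlation between a bodily-parameter measurement and the corresponding image recordal could be checked as in this small sketch; the 10-minute default window follows the example interval given in the text, and the function name is an assumption:

```python
from datetime import datetime, timedelta

def time_correlated(image_ts, sensor_ts, window=timedelta(minutes=10)):
    """True if the bodily-parameter measurement falls within the
    pre-defined interval around the image data recordal (the text
    mentions e.g. 1 hour or 10 minutes as example intervals)."""
    return abs(image_ts - sensor_ts) <= window
```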
In one embodiment, the device further comprises a microphone and/or input means configured to record at least one answer of the specific patient with regard to at least one pre-defined anamnestic question queried by the device in connection and closely time correlated with the corresponding recordal of the image data of the specific patient, wherein the device is further configured to convert the specific patient's at least one answer into self-assessment data of the specific patient, wherein the communication module is configured to transmit the self-assessment data of the specific patient to the remote computer facility, wherein the remote computer facility is further configured to additionally consider the self-assessment data in the image data analysis using the AI module. In this embodiment, the recordal of the at least one answer is provided within a pre-defined time interval around the corresponding image data recordal, e.g. within a time interval of 1 hour or 10 minutes. Such pre-defined anamnestic questions may comprise the following: What is your somatic medical history? What is your mental history? What symptoms are present? What medications are you taking? The questions may be provided to the patient by the loudspeaker of the device or displayed to the patient on a screen/display of the device. The answer of the patient may be speech data or text data inputted using a keypad/keyboard. The speech data may comprise a system-guided voice dialog with the specific patient in the format of a brief medical history. Converting the specific patient's at least one answer into self-assessment data of the specific patient may comprise A/D conversion of the sound data and/or of text data and extraction of the relevant information (e.g. from the sound data, for example comprising text conversion). For this embodiment, self-assessment data are included in the preparation / training of the artificial intelligence of the AI module prior to and/or during usage of the system.
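The extraction of relevant information from a (transcribed) patient answer can be illustrated with a minimal sketch. A real system would use speech recognition and natural-language processing; the keyword list and the structure of the returned record here are invented for this example and are not taken from the patent.

```python
# Illustrative only: convert a transcribed answer to a pre-defined
# anamnestic question into a structured self-assessment record.
# The symptom vocabulary below is a hypothetical assumption.
SYMPTOM_KEYWORDS = {"fatigue", "dizziness", "shortness of breath", "swelling"}

def to_self_assessment(question: str, answer_text: str) -> dict:
    text = answer_text.lower()
    # keep only symptoms that are actually mentioned in the answer
    symptoms = sorted(k for k in SYMPTOM_KEYWORDS if k in text)
    return {"question": question,
            "raw_answer": answer_text,
            "symptoms": symptoms}

record = to_self_assessment(
    "What symptoms are present?",
    "Some fatigue and shortness of breath since Monday.")
print(record["symptoms"])  # ['fatigue', 'shortness of breath']
```

The resulting record is what the communication module would transmit to the remote computer facility as self-assessment data.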
In one embodiment, the recordal of image data and, if applicable, the measurement of the at least one bodily parameter by the at least one sensor and/or the recordal of at least one answer of the specific patient with regard to the at least one pre-defined anamnestic question is provided at regular intervals, e.g. every week or every day around the same time, and/or on demand by the specific patient and/or the HCP, for example within a pre-defined time interval after a heart attack occurred. This improves the comparability of these data.
In one embodiment, the device is a mobile device, for example a smartphone, or a stationary device, for example a web cam or a mirror with at least one camera.

Additionally, the above problem is solved with the same advantages as explained above by a method for supporting a patient's health control provided by a system comprising a device and a remote computer facility, wherein the remote computer facility comprises an artificial intelligence module (AI module), wherein the device comprises at least one camera recording image data of a specific patient's face and a communication module transmitting the recorded image data to the remote computer facility, wherein the remote computer facility receives the recorded image data and automatically conducts an analysis of the image data using the AI module, wherein the remote computer facility determines one health assessment information of a group of pre-defined health assessment information based on this analysis and provides the determined health assessment information to a pre-defined recipient.
The above method is, for example, realized as a computer program which is a combination of above and below specified computer instructions and data definitions that enable computer hardware to perform computational or control functions, or which is a syntactic unit that conforms to the rules of a particular programming language and that is composed of the declarations and statements or instructions needed for an above and below specified function, task, or problem solution.
In one embodiment of the method, sensor data provided by at least one sensor measuring at least one bodily parameter of the specific patient are additionally considered in the image data analysis using the AI module, wherein, in one embodiment, the measurement of the at least one bodily parameter may be time correlated with the corresponding recordal of the image data of the specific patient.
In one embodiment of the method, the device further comprises a microphone and/or input means recording at least one answer of the specific patient with regard to at least one pre-defined anamnestic question queried by the device in connection and closely time correlated with the corresponding recordal of the image data of the specific patient, wherein the device converts the at least one answer into self-assessment data of the specific patient, wherein the communication module transmits the self-assessment data of the specific patient to the remote computer facility, wherein the remote computer facility additionally considers the self-assessment data in the image data analysis using the AI module.
In one embodiment of the method, the recordal of image data and, if applicable, the measurement of the at least one bodily parameter by the at least one sensor and/or the recordal of at least one answer of the specific patient with regard to the at least one pre-defined anamnestic question is provided at regular intervals and/or on demand by the specific patient and/or the HCP.
In one embodiment of the method, the device is a mobile device, for example a smartphone, or a stationary device, for example a web cam or a mirror with at least one camera.
In one embodiment of the method, the AI module comprises a neural network with deep learning and/or a generative adversarial network and/or a self-organizing map.
The embodiments described for the medical device and the system apply to the above methods as well, providing the analogous advantages. Reference is made to the above embodiments of the medical device and the system in this regard.
The above method is, for example, realized as a computer program (to be executed at or within the remote computer facility and/or the medical device and/or the device, in particular utilizing their processors) which is a combination of above and below specified (computer) instructions and data definitions that enable computer hardware or a communication system to perform computational or control functions and/or operations, or which is a syntactic unit that conforms to the rules of a particular programming language and that is composed of declarations and statements or instructions needed for an above and below specified function, task, or problem solution.
Furthermore, a computer program product is disclosed comprising instructions which, when executed by a processor, cause the processor to perform the steps of the above defined method. Accordingly, a computer readable data carrier storing such a computer program product is described.

The present invention will now be described in further detail with reference to the accompanying schematic drawings, wherein
Fig. 1 shows one embodiment of a system for supporting a patient's health control and
Fig. 2 depicts one embodiment of the image and sensor data analysis using the Al module of the embodiment of Fig. 1.
Fig. 1 shows one embodiment of a system 100 for supporting a patient's 110 health control comprising a device, for example a smartphone 130, and a remote computer facility 120 comprising an AI module 121. The smartphone 130 comprises a camera 131 and a communication module 133. The system 100 further comprises a medical device, for example a pacemaker 140, implanted within the patient 110. The patient 110 may have a chronic disease such as heart failure. The pacemaker 140 regularly monitors a bodily parameter, for example by providing an ECG recording 160 regularly, for example twice a day. For that, the pacemaker 140 comprises a sensor (electrodes and the respective hardware and software) measuring the electrical ECG signals of the patient's heart. At about the same time at which the ECG 160 of the patient 110 is measured, the patient's face is captured by the camera 131 of the smartphone. For example, an app of the smartphone 130 requests the patient 110 to take a picture of his/her face. Alternatively, the recording of the patient's face is done using another smartphone app, which takes the picture of the patient's face whenever he/she activates the smartphone 130. Image data 150 of the patient's face resulting therefrom are then transmitted (see arrow 135) to the remote computer facility 120. The pacemaker 140 transmits the ECG data 160 to the remote computer facility 120 as well (see arrow 145).
Within the remote computer facility 120, the image data 150 and the ECG data 160 are transmitted to the AI module 121, where they are automatically analysed in an AI-based manner with regard to the progression of the chronic disease.

Figure 2 depicts an example of the operation of the remote computer facility 120 comprising the AI module 121 with regard to the embodiment of the system 100 shown in Fig. 1. The image data 150 and the ECG data 160 of the patient 110 are received by the remote computer facility 120. There, in a first step, the image data 150 are processed by a pre-processing unit 220, which scales the image data into a uniform format (for example with regard to resolution, colour depth, white balance, ...). Additionally, the pre-processing unit 220 may identify the specific patient 110 using a common face recognition method.
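The scaling step of the pre-processing unit 220 can be sketched as follows. This is a minimal, dependency-free illustration, not the actual implementation: the target resolution of 64×64, the nearest-neighbour resampling and the [0, 1] value normalization are all assumptions for the example.

```python
import numpy as np

# Sketch of the pre-processing step: bring arbitrary camera images into
# a uniform format (resolution and value range) before they are fed to
# the input layer of the network. Target size 64x64 is an assumption.
def preprocess(image: np.ndarray, size: int = 64) -> np.ndarray:
    h, w = image.shape[:2]
    # nearest-neighbour resampling to a fixed resolution
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = image[rows][:, cols].astype(np.float32)
    # normalize values to [0, 1] so colour-depth differences do not matter
    span = resized.max() - resized.min()
    return (resized - resized.min()) / span if span else resized

# example: a simulated 480x640 grayscale camera frame
img = np.random.default_rng(0).integers(0, 256, (480, 640)).astype(np.uint8)
out = preprocess(img)
print(out.shape)  # (64, 64)
```

A production pre-processing unit would additionally handle white balance and colour channels, as the text indicates, and could run face recognition on the normalized frame.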
Then, the pre-processed image data and the ECG data 160, both being the input data, are provided to the AI module 121 comprising a trained neural network in the form of a Kohonen feature map (a self-organizing map). The AI module 121 is trained as indicated above using image data of the specific patient 110 as well as ECG data, personal data, health data and health assessment information of the same patient. The neural network consists of an input neuron layer 230, whose dimension is adapted to the dimension of the pre-processed image (the normalized output from the pre-processing unit 220). In one or more hidden layers 240 the input data are forwarded and finally output to the output feature map 250 in clusters 251, 252, 253, 254. Depending on the assignment/accumulation in one of these clusters, the following health assessment information can be distinguished by a downstream classification unit 260 of the AI module 121 for the respective cluster: cluster 251: "better health status", cluster 252: "worse health status", cluster 253: "constant health status", and cluster 254: "health status cannot be determined", visualized by the coloration/pattern assigned to the respective cluster 251 to 254.
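The cluster assignment on the output feature map can be sketched with a minimal self-organizing-map classification step: an input vector is assigned to its best matching unit (the node with the closest weight vector), and a node-to-cluster table maps that unit to one of the four health assessment outputs. The map size (4×4), input dimension (8) and the node-to-cluster table below are hypothetical; a trained map of this kind would have learned its weights from the patient's data as described above.

```python
import numpy as np

rng = np.random.default_rng(42)
weights = rng.random((4, 4, 8))  # 4x4 Kohonen map, 8-dim input vectors

# hypothetical assignment of map nodes to the four clusters 251..254
CLUSTERS = np.array([
    ["better",   "better",       "worse",        "worse"],
    ["better",   "constant",     "worse",        "worse"],
    ["constant", "constant",     "undetermined", "worse"],
    ["constant", "undetermined", "undetermined", "undetermined"],
])

def classify(x: np.ndarray) -> str:
    # best matching unit = node whose weight vector is closest to x
    dist = np.linalg.norm(weights - x, axis=2)
    i, j = np.unravel_index(np.argmin(dist), dist.shape)
    return str(CLUSTERS[i, j])

x = rng.random(8)  # stand-in for a pre-processed input vector
print(classify(x) in {"better", "worse", "constant", "undetermined"})  # True
```

In the system described above, the downstream classification unit 260 plays the role of the `CLUSTERS` lookup, turning the winning map region into one of the four pre-defined health assessment outputs.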
The AI module 121 may be a neural network, and its training may be performed as supervised learning, whereby the AI module 121 is stimulated by input vectors (image data 150) from the patients and people in a control group, and the AI module 121 has access to the known diagnosis (teaching vector). The training of the AI module 121 is stopped when the output of the AI module 121 meets a pre-defined quality criterion (e.g. ratio of correct vs. incorrect classifications; sensitivity; specificity; false positive rate; false negative rate; etc.). After training is complete, the network weights are frozen, and the AI module 121 is tested on a verification dataset independent of the training dataset and released for diagnosis.
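The stop criterion mentioned above can be made concrete with a small sketch that computes sensitivity and specificity from confusion counts and checks them against pre-defined thresholds. The threshold values (0.9 / 0.85) are illustrative assumptions, not values from the patent.

```python
# Sketch of a training stop criterion: halt once the classifier meets
# pre-defined quality thresholds on sensitivity and specificity.
def meets_quality_criterion(tp: int, fn: int, tn: int, fp: int,
                            min_sensitivity: float = 0.9,
                            min_specificity: float = 0.85) -> bool:
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0  # true positive rate
    specificity = tn / (tn + fp) if (tn + fp) else 0.0  # true negative rate
    return sensitivity >= min_sensitivity and specificity >= min_specificity

print(meets_quality_criterion(tp=95, fn=5, tn=90, fp=10))   # True  (0.95 / 0.90)
print(meets_quality_criterion(tp=80, fn=20, tn=90, fp=10))  # False (sens. 0.80)
```

In a training loop, this check would run after each evaluation on held-out data; once it returns `True`, the weights are frozen and the module is validated on the independent verification dataset.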
The results of the classification unit 260 may be made available to an HCP, the patient 110 or (caring) relatives, for example by displaying them on a screen. Then, depending on the underlying disease to be monitored, necessary therapeutic steps are initiated for the patient 110 (e.g., a reminder to take medication; contacting the patient; adjusting medication; calling in the family HCP; hospitalization; ...).
The above method of data analysis using the AI module 121 may be provided using image data 150 of the patient 110 only, or using image data 150, ECG data 160 and additional sensor data of a bodily parameter. The training of the AI module 121 is then provided considering the respective data situation.
The method and system described above provide a non-invasive, automatable means of supporting health control that is applicable to a large number of patients for the follow-up of a chronic disease affecting the human face, in particular for the follow-up of chronic heart failure. As already mentioned, at least the image data of a specific patient's face are analyzed. The disease must therefore have a visible effect on the patient's face.
Reference numbers
100 system
110 patient
120 remote computer facility
121 Al module
130 smartphone
131 camera
133 communication module
135 arrow
140 pacemaker
145 arrow
150 image data
160 ECG data
220 pre-processing unit
230 input neuron layer
240 hidden layer
250 output feature map
251 to 254 cluster
260 classification unit

Claims

1. A system (100) for supporting a patient's (110) health control comprising a device (130) and a remote computer facility (120), wherein the remote computer facility comprises an artificial intelligence module (AI module, 121), wherein the device comprises at least one camera (131) configured to record image data (150) of a specific patient's face and a communication module (133) configured to transmit the recorded image data to the remote computer facility, wherein the remote computer facility (120) is configured to receive the recorded image data and to automatically conduct an analysis of the image data using the AI module (121), wherein the remote computer facility is further configured to determine one health assessment information of a group of pre-defined health assessment information relating to diseases visually affecting the patient's face based on this analysis and to provide the determined health assessment information to a pre-defined recipient.
2. The system of claim 1, wherein the computer facility (120) is configured to additionally consider sensor data (160) provided by at least one sensor measuring at least one bodily parameter of the specific patient (110) in the image data analysis using the AI module (121).
3. The system of any of the previous claims, wherein the device further comprises a microphone and/or input means configured to record at least one answer of the specific patient (110) with regard to at least one pre-defined anamnestic question queried by the device in connection and closely time correlated with the corresponding recordal of the image data (150) of the specific patient, wherein the device is further configured to convert the specific patient's at least one answer into self-assessment data of the specific patient, wherein the communication module (133) is configured to transmit the self-assessment data of the specific patient to the remote computer facility, wherein the remote computer facility is further configured to additionally consider the self-assessment data in the image data analysis using the AI module (121).

4. The system of any of the previous claims, wherein the recordal of the image data (150) and, if applicable, the measurement of the at least one bodily parameter by the at least one sensor and/or the recordal of at least one answer of the specific patient (110) with regard to the at least one pre-defined anamnestic question is provided at regular intervals and/or on demand by the specific patient and/or the HCP.

5. The system of any of the previous claims, wherein the device is a mobile device, for example a smartphone (130), or a stationary device, for example a web cam or a mirror with at least one camera.

6. The system of any of the previous claims, wherein the AI module (121) comprises a neural network (230, 240, 250) with deep learning and/or a generative adversarial network and/or a self-organizing map.
7. A method for supporting a patient's (110) health control provided by a system (100) comprising a device (130) and a remote computer facility (120), wherein the remote computer facility comprises an artificial intelligence module (AI module, 121), wherein the device comprises at least one camera (131) recording image data (150) of a specific patient's face and a communication module (133) transmitting the recorded image data to the remote computer facility, wherein the remote computer facility (120) receives the recorded image data and automatically conducts an analysis of the image data (150) using the AI module (121), wherein the remote computer facility determines one health assessment information of a group of pre-defined health assessment information relating to diseases visually affecting the patient's face based on this analysis and provides the determined health assessment information to a pre-defined recipient.

8. The method of claim 7, wherein additionally sensor data (160) provided by at least one sensor measuring at least one bodily parameter of the specific patient (110) are considered in the image data analysis using the AI module (121).

9. The method of any of the claims 7 to 8, wherein the device further comprises a microphone and/or input means recording at least one answer of the specific patient (110) with regard to at least one pre-defined anamnestic question queried by the device in connection and closely time correlated with the corresponding recordal of the image data (150) of the specific patient, wherein the device converts the at least one answer into self-assessment data of the specific patient, wherein the communication module (133) transmits the self-assessment data of the specific patient to the remote computer facility (120), wherein the remote computer facility additionally considers the self-assessment data in the image data analysis using the AI module (121).
10. The method of any of the claims 7 to 9, wherein the recordal of the image data (150) and, if applicable, the measurement of the at least one bodily parameter by the at least one sensor and/or the recordal of at least one answer of the specific patient (110) with regard to the at least one pre-defined anamnestic question is provided at regular intervals and/or on demand by the specific patient and/or the HCP.

11. The method of any of the claims 7 to 10, wherein the device is a mobile device, for example a smartphone, or a stationary device, for example a web cam or a mirror with at least one camera.

12. The method of any of the claims 7 to 11, wherein the AI module (121) comprises a neural network with deep learning and/or a generative adversarial network and/or a self-organizing map.

13. The method of any of the claims 7 to 12, wherein the automatic conduction of the analysis of the image data (150) comprises face recognition.

14. A computer program product comprising instructions which, when executed by a processor, cause the processor to perform the steps of the method according to any of the claims 7 to 13.

15. A computer readable data carrier storing a computer program product according to claim 14.
PCT/EP2022/076460 2021-10-04 2022-09-23 System and method for supporting a patient's health control WO2023057232A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202163251802P 2021-10-04 2021-10-04
US63/251,802 2021-10-04
EP21202405.3 2021-10-13
EP21202405 2021-10-13

Publications (1)

Publication Number Publication Date
WO2023057232A1 true WO2023057232A1 (en) 2023-04-13

Family

ID=83691127

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2022/076460 WO2023057232A1 (en) 2021-10-04 2022-09-23 System and method for supporting a patient's health control

Country Status (1)

Country Link
WO (1) WO2023057232A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140243651A1 (en) * 2013-02-27 2014-08-28 Min Jun Kim Health diagnosis system using image information
US20210007606A1 (en) * 2019-07-10 2021-01-14 Compal Electronics, Inc. Method of and imaging system for clinical sign detection



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22789572

Country of ref document: EP

Kind code of ref document: A1