US20200342968A1 - Visualization of medical device event processing - Google Patents

Visualization of medical device event processing

Info

Publication number
US20200342968A1
Authority
US
United States
Prior art keywords
data
block
patient
respect
user interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/656,034
Inventor
Gopal B. Avinash
Qian Zhao
Zili Ma
Dibyajyoti PATI
Venkata Ratnam Saripalli
Ravi Soni
Jiahui Guan
Min Zhang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GE Precision Healthcare LLC
Original Assignee
GE Precision Healthcare LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GE Precision Healthcare LLC filed Critical GE Precision Healthcare LLC
Priority to US16/656,034
Assigned to GE Precision Healthcare LLC (ASSIGNMENT OF ASSIGNORS INTEREST; SEE DOCUMENT FOR DETAILS). Assignors: AVINASH, GOPAL B., GUAN, JIAHUI, MA, Zili, PATI, DIBYAJYOTI, SARIPALLI, VENKATA RATNAM, SONI, Ravi
Assigned to GE Precision Healthcare LLC (ASSIGNMENT OF ASSIGNORS INTEREST; SEE DOCUMENT FOR DETAILS). Assignors: ZHAO, QIAN, ZHANG, MIN, AVINASH, GOPAL B., GUAN, JIAHUI, MA, Zili, PATI, DIBYAJYOTI, SARIPALLI, VENKATA RATNAM, SONI, Ravi
Publication of US20200342968A1


Classifications

    • G16H 50/20: ICT specially adapted for medical diagnosis, medical simulation or medical data mining for computer-aided diagnosis, e.g. based on medical expert systems
    • G16H 50/50: ICT specially adapted for medical diagnosis, medical simulation or medical data mining for simulation or modelling of medical disorders
    • G16H 10/00: ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • A61B 5/7267: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems, involving training the classification device
    • G06F 9/451: Execution arrangements for user interfaces
    • G06N 20/00: Machine learning
    • G06N 20/20: Ensemble learning
    • G06N 3/044: Recurrent networks, e.g. Hopfield networks
    • G06N 3/045: Combinations of networks
    • G06N 3/047: Probabilistic or stochastic networks
    • G06N 3/049: Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent
    • G06N 3/088: Non-supervised learning, e.g. competitive learning
    • G16H 10/60: ICT specially adapted for the handling or processing of patient-specific data, e.g. for electronic patient records
    • G16H 15/00: ICT specially adapted for medical reports, e.g. generation or transmission thereof
    • G16H 30/00: ICT specially adapted for the handling or processing of medical images
    • G16H 40/67: ICT specially adapted for the management or operation of medical equipment or devices, for remote operation
    • G16H 50/30: ICT specially adapted for calculating health indices; for individual health risk assessment
    • G16H 70/20: ICT specially adapted for the handling or processing of medical references relating to practices or guidelines
    • A61B 5/7275: Determining trends in physiological measurement data; predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
    • G06T 2207/20081: Training; learning (indexing scheme for image analysis or image enhancement)
    • G06T 2207/20084: Artificial neural networks [ANN] (indexing scheme for image analysis or image enhancement)

Definitions

  • This disclosure relates generally to medical data visualization and, more particularly, to visualization of medical device event processing.
  • Healthcare environments include information systems such as hospital information systems (HIS), radiology information systems (RIS), clinical information systems (CIS), and cardiovascular information systems (CVIS), as well as storage systems such as picture archiving and communication systems (PACS), library information systems (LIS), and electronic medical records (EMR).
  • Information stored can include patient medication orders, medical histories, imaging data, test results, diagnosis information, management information, and/or scheduling information, for example.
  • a wealth of information is available, but the information can be siloed in various separate systems requiring separate access, search, and retrieval. Correlations between healthcare data remain elusive due to technological limitations on the associated systems.
  • Certain examples provide a time series data visualization apparatus including a data processor to process one-dimensional data captured over time with respect to one or more patients, the data processed to normalize the data with respect to a reference.
  • the example apparatus includes a visualization processor to transform the processed data into a plurality of graphical representations visually indicating a change over time in the data and to cluster the plurality of graphical representations into at least a first block and a second block arranged with respect to an indicator of a criterion to provide a visual comparison of the first block and the second block with respect to the criterion.
  • the example apparatus includes an interface builder to construct a graphical user interface to display the at least first and second blocks of graphical representations.
  • the example apparatus includes an interaction processor to facilitate interaction, via the graphical user interface, with the first and second blocks of graphical representations to extract a data set for processing from at least a subset of the first and second blocks.
  • Certain examples provide a tangible computer-readable storage medium including instructions that, when executed, cause at least one processor to at least: process one-dimensional data captured over time with respect to one or more patients, the data processed to normalize the data with respect to a reference; transform the processed data into a plurality of graphical representations visually indicating a change over time in the data; cluster the plurality of graphical representations into at least a first block and a second block arranged with respect to an indicator of a criterion to provide a visual comparison of the first block and the second block with respect to the criterion, the first block, the second block, and the indicator to be displayed via a graphical user interface; and facilitate interaction, via the graphical user interface, with the first and second blocks of graphical representations to extract a data set for processing from at least a subset of the first and second blocks.
  • Certain examples provide a computer-implemented method for medical machine time-series event data processing and visualization.
  • the example method includes processing one-dimensional data captured over time with respect to one or more patients, the data processed to normalize the data with respect to a reference.
  • the example method includes transforming the processed data into a plurality of graphical representations visually indicating a change over time in the data.
  • the example method includes clustering the plurality of graphical representations into at least a first block and a second block arranged with respect to an indicator of a criterion to provide a visual comparison of the first block and the second block with respect to the criterion.
  • the example method includes facilitating interaction, via the graphical user interface, with the first and second blocks of graphical representations to extract a data set for processing from at least a subset of the first and second blocks.
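  • For illustration, a minimal Python sketch of the normalization step above follows; the z-score against a reference baseline is an assumption (the disclosure does not fix a normalization method), and the names and data are hypothetical:

    import numpy as np

    def normalize_to_reference(signal, reference):
        """Normalize a 1D time series against a reference baseline (z-score)."""
        mu, sigma = reference.mean(), reference.std()
        if sigma == 0:
            return signal - mu  # degenerate reference: center only
        return (signal - mu) / sigma

    rng = np.random.default_rng(0)
    baseline = rng.normal(70, 5, 300)   # hypothetical resting heart-rate reference
    intraop = rng.normal(95, 12, 600)   # hypothetical intra-operative heart rate
    normalized = normalize_to_reference(intraop, baseline)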
  • FIG. 1 is a block diagram of an example system including medical devices and associated monitoring devices for a patient.
  • FIG. 2 is a block diagram of an example system to process machine and physiological data and apply one or more machine learning models to predict future events from the data.
  • FIG. 3 is a block diagram of an example system to process machine and physiological data and apply one or more machine learning models to detect events that have occurred.
  • FIGS. 4A-4D depict example artificial intelligence models.
  • FIG. 5 illustrates an example visualization of data provided from multiple sources.
  • FIGS. 6-10E illustrate example interfaces displaying one-dimensional patient data and associated analysis for interaction and processing.
  • FIG. 11 illustrates an example time series data visualization system.
  • FIGS. 12-14 illustrate flow diagrams of example methods to process one-dimensional time series data using the example system(s) of FIGS. 1-4 and/or 11.
  • FIG. 15 is a block diagram of an example processor platform capable of executing instructions to implement the example systems and methods disclosed and described herein.
  • the articles “a,” “an,” “the,” and “said” are intended to mean that there are one or more of the elements.
  • the terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements.
  • one object (e.g., a material, element, structure, member, etc.) can be connected to or coupled to another object regardless of whether the one object is directly connected or coupled to the other object or whether there are one or more intervening objects between the one object and the other object.
  • a module, unit, or system may include a computer processor, controller, and/or other logic-based device that performs operations based on instructions stored on a tangible and non-transitory computer readable storage medium, such as a computer memory.
  • a module, unit, engine, or system may include a hard-wired device that performs operations based on hard-wired logic of the device.
  • Various modules, units, engines, and/or systems shown in the attached figures may represent the hardware that operates based on software or hardwired instructions, the software that directs hardware to perform the operations, or a combination thereof.
  • A, B, and/or C refers to any combination or subset of A, B, C such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, and (7) A with B and with C.
  • the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B.
  • the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B.
  • references to “one embodiment” or “an embodiment” of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features.
  • Medical data can be obtained from imaging devices, sensors, laboratory tests, and/or other data sources. Alone or in combination, medical data can assist in diagnosing a patient, treating a patient, forming a profile for a patient population, influencing a clinical protocol, etc. However, to be useful, medical data must be organized properly for analysis and correlation beyond a human's ability to track and reason. Computers and associated software and data constructs can be implemented to transform disparate medical data into actionable results.
  • imaging devices (e.g., gamma camera, positron emission tomography (PET) scanner, computed tomography (CT) scanner, X-Ray machine, magnetic resonance (MR) imaging machine, ultrasound scanner, etc.) generate two-dimensional (2D) and/or three-dimensional (3D) medical images (e.g., native Digital Imaging and Communications in Medicine (DICOM) images) representative of the parts of the body (e.g., organs, tissues, etc.) to diagnose and/or treat diseases.
  • Other devices such as electrocardiogram (ECG) systems, echoencephalograph (EEG), pulse oximetry (SpO2) sensors, blood pressure measuring cuffs, etc., provide one-dimensional waveform and/or time series data regarding a patient.
  • Devices involved in the workflow can be configured, monitored, and updated throughout operation of the medical workflow.
  • Machine learning can be used to help configure, monitor, and update the medical workflow and devices.
  • Machine learning techniques, whether deep learning networks or other experiential/observational learning systems, can be used to characterize and otherwise interpret, extrapolate, conclude, and/or complete acquired medical data from a patient, for example.
  • Deep learning is a subset of machine learning that uses a set of algorithms to model high-level abstractions in data using a deep graph with multiple processing layers including linear and non-linear transformations. While many machine learning systems are seeded with initial features and/or network weights to be modified through learning and updating of the machine learning network, a deep learning network trains itself to identify “good” features for analysis.
  • machines employing deep learning techniques can process raw data better than machines using conventional machine learning techniques. Examining data for groups of highly correlated values or distinctive themes is facilitated using different layers of evaluation or abstraction.
  • Certain examples provide top-down systems and associated methods to capture and organize data (e.g., group, arrange with respect to an event, etc.), remove outliers, and/or otherwise align data with respect to a clinical event, trigger, other occurrence, etc., to form a ground truth for training, testing, etc., of a learning network model.
  • Certain examples provide automated processing and visualization of data for a group of patients and enable removal of outliers and drilling down into the data to determine patterns, trends, causation, individual patient data, etc.
  • Relevant data can be annotated quickly to form ground truth data for training of one or more artificial intelligence models.
  • a plurality of one-dimensional signal waveforms can be stacked and/or otherwise organized for a patient, and patients can be stacked and/or otherwise organized with respect to each other and with respect to one or more events, criterion, etc.
  • By organizing patients and their associated signals with respect to each other based on one or more events, criterion, etc., different outliers emerge from the group depending on the event, criterion, etc., used to organize the patients.
  • outliers eliminated from the data set can vary depending upon the event, criterion, etc.
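  • A minimal sketch of this organize-and-prune idea follows, assuming alignment on an event index and a simple z-score outlier rule (both assumptions; names and data are hypothetical):

    import numpy as np

    def align_to_event(signals, event_idx, window):
        """Stack per-patient 1D traces into rows aligned on a clinical event index."""
        rows = []
        for sig, t in zip(signals, event_idx):
            if t - window < 0 or t + window > len(sig):
                continue  # skip traces too short to cover the window
            rows.append(sig[t - window:t + window])
        return np.stack(rows)

    def drop_outliers(aligned, z_max=3.0):
        """Drop rows whose mean deviates more than z_max sigmas from the group."""
        m = aligned.mean(axis=1)
        z = (m - m.mean()) / (m.std() + 1e-9)
        return aligned[np.abs(z) < z_max]

    rng = np.random.default_rng(0)
    signals = [rng.normal(80, 5, 500) for _ in range(10)]   # ten synthetic patients
    events = [int(rng.integers(100, 400)) for _ in range(10)]
    clean = drop_outliers(align_to_event(signals, events, window=60))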
  • a deep learning network is also referred to as a deep neural network (DNN).
  • a deep learning network/deep neural network can be a deployed network (e.g., a deployed network model or device) that is generated from the training network and provides an output in response to an input.
  • supervised learning is a deep learning training method in which the machine is provided already classified data from human sources.
  • unsupervised learning is a deep learning training method in which the machine is not given already classified data, which can make the machine useful for tasks such as abnormality detection.
  • semi-supervised learning is a deep learning training method in which the machine is provided a small amount of classified data from human sources compared to a larger amount of unclassified data available to the machine.
  • CNNs are biologically inspired networks of interconnected data used in deep learning for detection, segmentation, and recognition of pertinent objects and regions in datasets. CNNs evaluate raw data in the form of multiple arrays, breaking the data down in a series of stages and examining the data for learned features.
  • Transfer learning is a process of a machine storing the information used in properly or improperly solving one problem to solve another problem of the same or similar nature as the first. Transfer learning may also be known as “inductive learning”. Transfer learning can make use of data from previous tasks, for example.
  • active learning is a process of machine learning in which the machine selects a set of examples for which to receive training data, rather than passively receiving examples chosen by an external entity. For example, as a machine learns, the machine can be allowed to select examples that the machine determines will be most helpful for learning, rather than relying only on an external human expert or external system to identify and provide examples.
  • the terms “computer aided detection” and “computer aided diagnosis” refer to computers that analyze medical data to suggest a possible diagnosis.
  • Deep learning is a class of machine learning techniques employing representation learning methods that allows a machine to be given raw data and determine the representations needed for data classification. Deep learning ascertains structure in data sets using backpropagation algorithms which are used to alter internal parameters (e.g., node weights) of the deep learning machine. Deep learning machines can utilize a variety of multilayer architectures and algorithms. While machine learning, for example, involves an identification of features to be used in training the network, deep learning processes raw data to identify features of interest without the external identification.
  • Deep learning in a neural network environment includes numerous interconnected nodes referred to as neurons.
  • Input neurons, activated from an outside source, activate other neurons based on connections to those other neurons, which are governed by the machine parameters.
  • a neural network behaves in a certain manner based on its own parameters. Learning refines the machine parameters, and, by extension, the connections between neurons in the network, such that the neural network behaves in a desired manner.
  • Deep learning that utilizes a convolutional neural network segments data using convolutional filters to locate and identify learned, observable features in the data.
  • Each filter or layer of the CNN architecture transforms the input data to increase the selectivity and invariance of the data. This abstraction of the data allows the machine to focus on the features in the data it is attempting to classify and ignore irrelevant background information.
  • Deep learning operates on the understanding that many datasets include high level features which include low level features. While examining an image, for example, rather than looking for an object, it is more efficient to look for edges which form motifs which form parts, which form the object being sought. These hierarchies of features can be found in many different forms of data such as speech and text, etc.
  • Learned observable features include objects and quantifiable regularities learned by the machine during supervised learning.
  • a machine provided with a large set of well classified data is better equipped to distinguish and extract the features pertinent to successful classification of new data.
  • a deep learning machine that utilizes transfer learning may properly connect data features to certain classifications affirmed by a human expert. Conversely, the same machine can, when informed of an incorrect classification by a human expert, update the parameters for classification.
  • Settings and/or other configuration information, for example, can be guided by learned use of settings and/or other configuration information, and, as a system is used more (e.g., repeatedly and/or by multiple users), a number of variations and/or other possibilities for settings and/or other configuration information can be reduced for a given situation.
  • An example deep learning neural network can be trained on a set of expert classified data, for example. This set of data builds the first parameters for the neural network, and this would be the stage of supervised learning. During the stage of supervised learning, the neural network can be tested to determine whether the desired behavior has been achieved.
  • Once a desired neural network behavior has been achieved (e.g., a machine has been trained to operate according to a specified threshold, etc.), the machine can be deployed for use (e.g., testing the machine with “real” data, etc.).
  • neural network classifications can be confirmed or denied (e.g., by an expert user, expert system, reference database, etc.) to continue to improve neural network behavior.
  • the example neural network is then in a state of transfer learning, as parameters for classification that determine neural network behavior are updated based on ongoing interactions.
  • the neural network can provide direct feedback to another process.
  • the neural network outputs data that is buffered (e.g., via the cloud, etc.) and validated before it is provided to another process.
  • Deep learning machines can utilize transfer learning when interacting with physicians to counteract the small dataset available in the supervised training. These deep learning machines can improve their computer aided diagnosis over time through training and transfer learning. However, a larger dataset results in a more accurate, more robust deployed deep neural network model that can be applied to transform disparate medical data into actionable results (e.g., system configuration/settings, computer-aided diagnosis results, image enhancement, etc.).
  • visualization of data can be driven by an artificial intelligence framework, and the artificial intelligence framework can provide data for visualization, evaluation, and action.
  • a framework including a) a computer executing one or more deep learning (DL) models and hybrid deep reinforcement learning (RL) models trained on aggregated machine timeseries data converted into the single standardized data structure format and in an ordered arrangement per patient to predict one or more future events and summarize pertinent past machine events related to the predicted one or more future machine events on a consistent input time series data of a patient having the standardized data structure format; and b) a healthcare provider-facing interface of an electronic device for use by a healthcare provider treating the patient configured to display the predicted one or more future machine events and the pertinent past machine events of the patient.
  • machine signals, patient physiological signals, and a combination of machine and patient physiological signals provide improved prediction, detection, and/or classification of events during a medical procedure.
  • the three data contexts are represented in Table 1 below, associated with example artificial intelligence models that can provide a prediction, detection, and/or classification using the respective data source.
  • Data-driven predictions of events related to a medical treatment/procedure help to lower healthcare costs and improve the quality of care.
  • Certain examples involve DL models, hybrid RL models, and DL+Hybrid RL combination models for prediction of such events.
  • data-driven detection and classification of events related to a patient and/or machine helps to lower healthcare costs and improve the quality of care.
  • Certain examples involve DL models, hybrid RL models, and DL+Hybrid RL combination models for detection and classification of such events.
  • machine data can be used with one or more artificial intelligence constructs to form one or more predictions, detections, and/or classifications, for example.
  • Training data is to match collected data, so if live data is being collected during surgery, for example, the model is to be trained on live surgical data also.
  • Training parameters can be mapped to deployed parameters for live, dynamic delivery to a patient scenario (e.g., in the operating room, emergency room, etc.).
  • one-dimensional (1D) time series event data (e.g., ECG, EEG, O2, etc.) can be aggregated and processed, for example.
  • one or more medical devices can be applied to extract time-series data with respect to a patient, and one or more monitoring devices can capture and process such data.
  • Benefits of one-dimensional, time-series data modeling include identification of more data-driven events to avoid false alarms (e.g., avoiding false alarm fatigue, etc.), improved quality of event detection, etc.
  • Other benefits include improved patient outcomes. Cost-savings can also be realized, such as reducing cost to better predict events such as when to reduce gas, when to take a patient off an oxygen ventilator, when to transfer a patient from operating room (OR) to other care, etc.
  • Other identification methods are threshold based rather than personalized.
  • Certain examples provide personalized modeling, based on a patient's own vitals, machine data from a healthcare procedure, etc. For example, for patient heart rate, a smaller person has a different rate than a heavier-built person. As such, alarms can differ based on the person rather than conforming to set global thresholds.
  • a model such as a DL model, etc., can determine or predict when to react to an alarm versus turn the alarm off, etc.
  • Certain examples can drive behavior, configuration, etc., of another machine (e.g., based on physiological conditions, a machine can send a notification to another machine to lower anesthesia, reduce ventilator, etc.; detect ventilator dystrophy and react to it, etc.).
  • In the example of FIG. 1, one or more medical devices 110 (e.g., ventilator, anesthesia machine, intravenous (IV) infusion drip, etc.) act on a patient, and one or more monitoring devices 130 (e.g., electrocardiogram (ECG) sensor, blood pressure sensor, respiratory monitor, etc.) capture data with respect to the patient.
  • Such data can be used to train an AI model, can be processed by a trained AI model, etc.
  • As shown in FIG. 2, machine data 210 and physiological (e.g., vitals, etc.) data 220 from one or more medical devices 230, mobile digital health monitors 240, one or more diagnostic cardiology (DCAR) devices 250, etc., is provided in a data stream 260 (e.g., continuous streaming, live streaming, periodic streaming, etc.) to a preprocessor 270 to pre-process the data and apply one or more machine learning models to detect events in the data stream 260, for example.
  • the pre-processed data is provided from the preprocessor 270 to an event predictor 280, which applies one or more AI models, such as a DL model, a hybrid RL model, a DL+hybrid RL model, etc., to predict future events from the preprocessed data.
  • the event predictor 280 forms an output 290 including one or more insights, alerts, actions, etc., for a system, machine, user, etc.
  • the event predictor 280 can predict, based on model(s) applied to the streaming 1D data, occurrence of event(s) such as heart attack, stroke, high blood pressure, accelerated heart rate, etc., and an actionable alert can be provided by the output 290 to adjust an IV drip, activate a sensor and/or other monitor, change a medication dosage, obtain an image, send data to another machine to adjust its settings/configuration, etc.
  • FIG. 3 illustrates an example system 300 in which the machine data 210 and the physiological (e.g., vitals, etc.) data 220 from the one or more medical devices 230, mobile digital health monitors 240, one or more diagnostic cardiology (DCAR) devices 250, etc., is provided offline 310 (e.g., once a study and/or other exam has been completed, periodically at a certain time/interval or based on a current size of data collection, etc.) to the preprocessor 270 to pre-process the data and apply one or more machine learning models to detect events in the data set 310, for example.
  • the pre-processed data is provided from the preprocessor 270 to an event detector 320, which applies one or more AI models, such as a DL model, a hybrid RL model, a DL+hybrid RL model, etc., to detect and classify events from the preprocessed data.
  • the event detector 320 forms an annotation output 330 including labeled events, etc.
  • the event detector 320 can detect and classify, based on model(s) applied to the streaming 1D data, occurrence of event(s) such as heart attack, stroke, high blood pressure, accelerated heart rate, etc., and the event(s) can then be labeled to be used as ground truth 330 for training of an AI model, verification by a healthcare professional, adjustment of machine settings/configuration, etc.
  • Example artificial intelligence models include a convolutional neural network (CNN), a recurrent neural network (RNN), etc.
  • Other machine learning/deep learning/other artificial intelligence networks can be used alone or in combination.
  • Convolutional neural networks are deep artificial neural networks that are used to classify images (e.g., associate a name or label with what object(s) are identified in the image, etc.), cluster images by similarity (e.g., photo search, etc.), and/or perform object recognition within scenes, for example.
  • CNNs can be used to instantiate algorithms that can identify faces, individuals, street signs, tumors, platypuses, and/or many other aspects of visual data, for example.
  • FIG. 4A illustrates an example CNN 400 including layers 402, 404, 406, and 408.
  • the layers 402 and 404 are connected with neural connections 403.
  • the layers 404 and 406 are connected with neural connections 405.
  • the layers 406 and 408 are connected with neural connections 407.
  • the layer 402 is an input layer that, in the example of FIG. 4A, includes a plurality of nodes.
  • the layers 404 and 406 are hidden layers and include, in the example of FIG. 4A, a plurality of nodes.
  • the neural network 400 may include more or fewer hidden layers 404, 406 than shown.
  • the layer 408 is an output layer and includes, in the example of FIG. 4A, a node with an output 409.
  • Each input 401 corresponds to a node of the input layer 402, and each node of the input layer 402 has a connection 403 to each node of the hidden layer 404.
  • Each node of the hidden layer 404 has a connection 405 to each node of the hidden layer 406.
  • Each node of the hidden layer 406 has a connection 407 to the output layer 408.
  • the output layer 408 has an output 409 to provide an output from the example neural network 400.
  • Of the connections 403, 405, and 407, certain example connections may be given added weight while other example connections may be given less weight in the neural network 400.
  • Input nodes are activated through receipt of input data via inputs, for example.
  • Nodes of hidden layers 404 and 406 are activated through the forward flow of data through the network 400 via the connections 403 and 405, respectively.
  • the node of the output layer 408 is activated after data processed in hidden layers 404 and 406 is sent via connections 407.
  • When the output node of the output layer 408 is activated, the node outputs an appropriate value based on processing accomplished in hidden layers 404 and 406 of the neural network 400.
  • Recurrent networks are a powerful set of artificial neural network algorithms especially useful for processing sequential data such as sound, time series (e.g., sensor) data or written natural language, etc.
  • a recurrent neural network can be implemented similarly to a CNN but including one or more connections 412 back to a prior layer, such as shown in the example RNN 410 of FIG. 4B.
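  • A corresponding sketch of the recurrent variant of FIG. 4B, where a hidden state is carried back to the prior layer at each time step (dimensions are illustrative assumptions):

    import torch
    import torch.nn as nn

    rnn = nn.RNN(input_size=1, hidden_size=16, batch_first=True)
    series = torch.randn(8, 200, 1)  # 8 patients x 200 time steps x 1 channel
    outputs, h_n = rnn(series)       # h_n: final hidden state (the fed-back connection)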
  • a reinforcement learning (RL) model is an artificial intelligence model in which an agent takes an action in an environment to maximize a cumulative reward.
  • FIG. 4C depicts an example RL network 420 in which an agent 422 operates with respect to an environment 424.
  • An action 421 of the agent 422 results in a change in a state 423 of the environment 424.
  • Reinforcement 425 is provided to the agent 422 from the environment 424 to provide a reward and/or other feedback to the agent 422.
  • the state 423 and reinforcement 425 are incorporated into the agent 422 and influence its next action, for example.
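  • A toy sketch of the FIG. 4C loop (the environment dynamics, target, and policy here are hypothetical stand-ins, not the patent's models):

    import random

    class Environment:
        def __init__(self):
            self.state = 0.5                 # e.g., a normalized vital sign

        def step(self, action):
            self.state = min(max(self.state + action, 0.0), 1.0)
            reward = -abs(self.state - 0.5)  # reinforcement for staying near target
            return self.state, reward

    class Agent:
        def act(self, state):
            return random.choice([-0.1, 0.0, 0.1])  # placeholder policy

    env, agent = Environment(), Agent()
    state = env.state
    for _ in range(10):
        action = agent.act(state)                # action 421
        state, reinforcement = env.step(action)  # new state 423, reinforcement 425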
  • Hybrid reinforcement models include a deep hybrid RL model, for example.
  • Reinforcement learning refers to goal-oriented algorithms, which learn how to attain a complex objective (goal) and/or maximize along a particular dimension over many steps/actions. For example, an objective can include to maximize points won in a game over many moves.
  • Reinforcement learning models can start from a blank slate, and, under the right conditions, the model can achieve superior performance. Like a child incentivized by spankings and candy, these algorithms are penalized when they make the wrong decisions and rewarded when they make the right decisions to provide reinforcement.
  • a hybrid deep reinforcement network can be configured as shown in the example 430 of FIG. 4D.
  • a policy 432 drives model-free deep reinforcement learning algorithm(s) 434 to learn tasks associated with processing of data, such as 1D waveform data, etc.
  • Results of the model-free RL algorithm(s) 434 provide feedback to the policy 432 and generate samples 438 for model-based reinforcement algorithm(s) 436 .
  • the model-based RL algorithm(s) 436 operate according to the policy 432 and provide feedback to the policy 432 based on samples from the model-free RL algorithm(s) 434.
  • Model-based RL algorithm(s) 436 are more sample-efficient and more flexible than task-specific policy(-ies) 432 learned with model-free RL algorithm(s) 434 , for example.
  • However, performance of the model-based RL algorithm(s) 436 is usually worse than that of the model-free RL algorithm(s) 434 due to model bias, for example.
  • model-free RL algorithm(s) 434 are not limited by model accuracy and can therefore achieve better final performance, although at the expense of higher sample complexity.
  • the hybrid deep RL model combines model-based 436 and model-free 434 RL algorithms (e.g., the model-based algorithm(s) 436 enable supervised initialization of the policy 432, which can be fine-tuned with the model-free algorithm(s) 434, etc.) to obtain the benefits of model-free learning with improved sample efficiency, for example.
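  • A toy sketch of that combination (not the disclosed algorithm): a crude learned model supplies a supervised initialization of the values, after which model-free updates on "real" feedback fine-tune them; actions, rewards, and learning rates are hypothetical:

    import random

    ACTIONS = ["reduce_gas", "hold"]
    q = {a: 0.0 for a in ACTIONS}

    def model_reward(action):                    # stand-in for a learned dynamics model
        return 0.8 if action == "hold" else 0.5

    for a in ACTIONS:                            # model-based: supervised initialization
        q[a] = model_reward(a)

    def real_reward(action):                     # stand-in for real reinforcement
        return random.gauss(0.9 if action == "hold" else 0.3, 0.05)

    for _ in range(200):                         # model-free fine-tuning
        a = max(q, key=q.get) if random.random() > 0.1 else random.choice(ACTIONS)
        q[a] += 0.05 * (real_reward(a) - q[a])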
  • Certain examples use hybrid RL models to facilitate determination and control of input and provide an ability to separate and/or combine information including ECG, SpO2, blood pressure, and other parameters.
  • Early warning signs of a condition or health issue can be determined and used to alert a patient, clinician, other system, etc.
  • a normal/baseline value can be determined, and deviation from the baseline (e.g., during the course of a surgical operation, etc.) can be determined. Signs of distress can be identified/predicted before an issue becomes critical.
  • a look-up table can be provided to select one or more artificial intelligence networks based on particular available input and desired output. The lookup table can enable rule-based neural network selection to generate appropriate model(s), for example.
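  • A hedged sketch of such a rule-based lookup follows; the keys, outputs, and model names are hypothetical, since the disclosure does not enumerate the table:

    MODEL_LOOKUP = {
        ("1d_waveform", "event_prediction"): "hybrid_rl",
        ("1d_waveform", "event_detection"): "cnn",
        ("sequence", "parallel_event_generation"): "transformer",
        ("graph", "relation_exploration"): "gnn",
    }

    def select_model(input_kind, target_output):
        """Rule-based selection of a network family for the given input/output pair."""
        try:
            return MODEL_LOOKUP[(input_kind, target_output)]
        except KeyError:
            raise ValueError(f"no rule for ({input_kind}, {target_output})")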
  • a transformer or transformer network is a neural network architecture that transforms an input sequence to an output sequence using sequence transduction or neural machine translation (e.g., to process speech recognition, text-to-speech transformation, etc.), for example.
  • the transformer network has memory to remember or otherwise maintain dependencies and connections (e.g., between sounds and words, etc.).
  • the transformer network can include a CNN with one or more attention models to improve speed of translation/transformation.
  • the transformer can be implemented using a series of encoders and decoders (e.g., implemented using a neural network such as a feed forward neural network, CNN, etc., and one or more attention models, etc.). As such, the transformer network transforms one sequence into another sequence using the encoder(s) and decoder(s).
  • a transformer is applied to sequence and time series data.
  • Compared with an RNN and/or long short-term memory (LSTM) model, the transformer has the following advantages.
  • the transformer applies a self-attention mechanism that directly models relationships between all words in a sentence, regardless of their respective position.
  • the transformer allows for significantly more parallelization.
  • the transformer encodes each position and applies the attention mechanism to relate two distant words of both the inputs and outputs with respect to itself, which can then be parallelized to accelerate training, for example.
  • the transformer requires less computation to train and is a much better fit for modern machine learning hardware, speeding up training by up to an order of magnitude, for example.
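  • A minimal PyTorch sketch of a self-attention encoder applied to time series windows rather than words (dimensions are illustrative assumptions):

    import torch
    import torch.nn as nn

    layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
    encoder = nn.TransformerEncoder(layer, num_layers=2)
    x = torch.randn(8, 200, 64)  # 8 sequences x 200 positions x 64 features
    encoded = encoder(x)         # every position attends to every other, in parallel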
  • a graph neural network is a neural network that operates on a graph structure.
  • vertices or nodes are connected by edges, which can be directed or undirected edges, for example.
  • the GNN can be used to classify nodes in the graph structure, for example.
  • each node in the graph can be associated with a label, and node labels can be predicted by the GNN without ground truth.
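  • A minimal sketch of one graph-convolution step for node classification, written directly against an adjacency matrix; the row-normalized A_hat X W update is the common GCN formulation, used here as an assumption:

    import torch

    A = torch.tensor([[0., 1., 0.],
                      [1., 0., 1.],
                      [0., 1., 0.]])             # undirected edges between 3 nodes
    A_hat = A + torch.eye(3)                     # add self-loops
    A_norm = A_hat / A_hat.sum(1, keepdim=True)  # row-normalize
    X = torch.randn(3, 4)                        # node features
    W = torch.randn(4, 2)                        # weights for 2 node classes
    logits = A_norm @ X @ W                      # per-node class scores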
  • Certain examples include aggregation techniques for detection, classification, and prediction of medical events based on DL processing of time series data. Different signals can be obtained, and different patterns can be identified for different circumstances. From a large aggregated data set, a subset can be identified and processed as relevant for a particular “-ology” or circumstance. Data can be partitioned into a relevant subset. For example, four different hospitals are collecting data, and the data is then partitioned to focus on cardiac data, etc. Partitioning can involve clustering, etc. Metadata can be leveraged, and data can be cleaned to reduce noise, artifacts, outliers, etc. Missing data can be interpolated and/or otherwise generated using generative adversarial networks (GANs), filtering, etc. Detection occurs after the fact, while a prediction is determined before an event occurs. In certain examples, prediction occurs in real time (or substantially real time given system processing, storage, and data transmission latency) using available data.
  • Post-processing of predicted, detected, and/or classified events can include a dashboard visualization for detection, classification, and/or prediction.
  • post-processing can generate a visualization summarizing events.
  • Post-processing can also generate notifications determined by detection, classification, and/or prediction, for example.
  • an algorithm can be used to select one or more machine learning algorithms to instantiate a network model based on aggregated pre-processed data and a target output.
  • a hybrid RL model can be selected for decision making regarding which events to choose from a set of targeted events.
  • a transformer network can be selected for parallel processing and accelerating event generation, for example.
  • a graph neural network can be selected for interpreting targeted events and relations exploration, for example. The neural network and/or other AI model generated by the selected algorithm can operate on the pre-processed data to generate summarized events, etc.
  • data can be pre-processed according to one or more sequential stages to aggregate the data. Stages can include data ingestion and filtration, imputation, aggregation, modeling, and recommendation.
  • data ingestion and filtration can include one or more devices connected to a patient and used to actively capture and filter data related to the patient and/or device operation.
  • a patient undergoing surgery is equipped with an anesthetic device and one or more monitoring devices capturing one or more of the patient's vitals at a periodic interval.
  • the anesthetic device can be viewed as a source of machine events (acting upon the patient), and the captured vitals can be treated as a source of patient data, for example.
  • FIG. 5 illustrates an example visualization 500 of data provided from multiple sources including an anesthetic device, a monitoring device, etc.
  • a stream of data can have artifacts due to one or more issues occurring during and/or after acquisition of data.
  • heart rate and/or ST segment errors can occur due to electrocautery interference, patient movement, etc.
  • Oxygen saturation measurement errors can occur due to dislocation of a sensor, vasopressor use, etc.
  • Non-invasive blood pressure errors can be caused by leaning on the pressure cuff, misplacement of the cuff, etc.
  • Such artifacts are filtered from the stream using one or more statistics (e.g., median, beyond six sigma range, etc.) that can be obtained from the patient (e.g., current) and/or from prior records of patients who have undergone a similar procedure and may have involved one or more normalization techniques with respect to age, gender, weight, body type, etc.
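  • A minimal sketch of such statistics-based artifact filtering; using a median/MAD spread estimate to stand in for the six-sigma test is an assumption:

    import numpy as np

    def filter_artifacts(stream, n_sigma=6.0):
        """Mask samples beyond n_sigma of a robust, median-based spread estimate."""
        med = np.median(stream)
        mad = np.median(np.abs(stream - med)) * 1.4826  # MAD -> sigma under normality
        mask = np.abs(stream - med) <= n_sigma * (mad + 1e-9)
        return np.where(mask, stream, np.nan)           # flag artifacts as missing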
  • the data may have some observations missing and/or removed during a filtration process, etc.
  • This missing information can be imputed with data before being used for training a neural network model, etc.
  • the data can be imputed using one or an ensemble of imputation methods to better represent the missing value. For example, imputation can be performed using a closest fill (e.g., using a back or forward fill with the value closest with respect to time, etc.), collaborative filtering by determining another input that could be a possible candidate, using a generative method trained with data from large sample of patients, etc.
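  • A sketch of two of the imputation options named above, applied to a vital-sign series with gaps (values are illustrative):

    import numpy as np
    import pandas as pd

    s = pd.Series([72.0, np.nan, np.nan, 75.0, 74.0, np.nan, 78.0])

    # Closest fill: take whichever observed neighbor is nearest in time.
    idx = np.arange(len(s))
    obs = idx[s.notna()]
    nearest = obs[np.abs(obs[None, :] - idx[:, None]).argmin(axis=1)]
    closest_fill = pd.Series(s.to_numpy()[nearest], index=s.index)

    # Simple alternative: forward fill, then back fill any leading gap.
    ffill_then_bfill = s.ffill().bfill()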
  • a captured stream of data may involve aggregation before being consumed in downstream process(es).
  • Patient data can be aggregated based on demographic (e.g., age, sex, income level, marital status, occupation, race, etc.), occurrence of a specific medical condition, etc.
  • One or more aggregation methods can be applied to the data, such as K-means/medoids, Gaussian mixture models, density-based aggregation, etc.
  • Aggregated data can be analyzed and used to classify/categorize a patient to determine a relevant data set for training and/or testing of an associated neural network model, for example.
  • data can be clustered according to certain similarity.
  • Medoids are representative objects of a data set or a cluster within a data set whose average dissimilarity to all the objects in the cluster is minimal.
  • a cluster refers to a collection of data points aggregated together because of certain similarities.
  • a target number k can be defined, which refers to a number of centroids desired in the dataset.
  • a centroid is an imaginary or real location representing a center of the cluster. Every data point is allocated to each of the clusters by reducing an in-cluster sum of squares, for example.
  • a K-means algorithm identifies k number of centroids, and then allocates every data point to the nearest cluster, while keeping the centroids as small as possible.
  • the “means” in the K-means refers to an averaging of the data; that is, finding the centroid. In a similar approach, a “median” can be used instead of the mean. A “goodness” of a given value of k can be assessed with methods such as a silhouette method, elbow analysis, etc.
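  • A minimal scikit-learn sketch of K-means plus a silhouette check of k (the feature vectors are illustrative random data):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(5, 1, (50, 3))])

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(km.cluster_centers_)              # the centroids
    print(silhouette_score(X, km.labels_))  # "goodness" of k=2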
  • a Gaussian mixture model is a probabilistic model that assumes all the data points are generated from a mixture of a finite number of Gaussian distributions with unknown parameters.
  • a Gaussian mixture model can be viewed as generalized k-means clustering to incorporate information about the covariance structure of the data as well as the centers of the latent Gaussians associated with the data. The generalization can be thought of in terms of the shapes the clusters can take, which in the case of GMMs are arbitrary shapes determined by the Gaussian parameters of the distribution, for example.
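  • A corresponding Gaussian mixture sketch; the learned covariances capture the arbitrary cluster shapes described above (data are illustrative):

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1.0, (50, 2)), rng.normal(4, 0.5, (50, 2))])

    gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(X)
    labels = gmm.predict(X)       # hard assignments
    probs = gmm.predict_proba(X)  # soft (probabilistic) memberships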
  • Density-based spatial clustering of applications with noise (DBSCAN) is a data clustering algorithm that can be used in data mining and machine learning. Based on a set of points (e.g., in a bi-dimensional space), DBSCAN groups together points that are close to each other based on a distance measurement (e.g., Euclidean distance, etc.) and a minimum number of points. DBSCAN also marks as outliers points that are in low-density regions. Using DBSCAN involves two control parameters, epsilon (distance) and the minimum number of points to form a cluster, for example. DBSCAN can be used for situations in which there are highly irregular shapes that are not processable using a mean/centroid-based method, for example.
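  • A DBSCAN sketch showing the two control parameters and the outlier label (-1); points are illustrative:

    import numpy as np
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(0, 0.3, (40, 2)),
                   rng.normal(3, 0.3, (40, 2)),
                   rng.uniform(-2, 5, (5, 2))])  # a few scattered outliers

    labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
    print((labels == -1).sum(), "points marked as low-density outliers")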
  • a recommender system or a recommendation system is a subclass of information filtering system that seeks to predict the “rating” or “preference” a user would give to an item.
  • the recommender system operates on an input to apply collaborative filtering and/or content-based filtering to generate a predictive or recommended output. For example, collaborative filtering builds a model based on past behavior as well as similar decisions made by other users. This model is then used to predict items (or ratings for items) that the user may have an interest in.
  • Content-based filtering approaches utilize a series of discrete, pre-tagged characteristics of an item to recommend additional items with similar properties. In the healthcare context, such collaborative and/or content-based filtering can be used to predict and/or categorize an event and/or classify a patient based on the event(s), etc.
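  • A toy collaborative-filtering sketch (the rating matrix and similarity weighting are hypothetical): users are rows, items are columns, and an unseen item for user 0 is scored from similar users' ratings:

    import numpy as np

    R = np.array([[5, 0, 3, 0],
                  [4, 1, 3, 2],
                  [5, 1, 4, 2],
                  [1, 5, 0, 4]], dtype=float)  # 0 = not yet rated

    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

    sims = np.array([cosine(R[0], R[i]) for i in range(1, len(R))])
    predicted_item1 = sims @ R[1:, 1] / (sims.sum() + 1e-9)  # item 1 score for user 0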
  • certain examples provide a plurality of methods that can be used to determine a cohort to which the patient belongs. Based on the cohort, relevant samples can be extracted to train and inference a model for a given patient. For example, when looking at a particular patient and trying to inference for the particular patient, an appropriate cohort can be determined to enable retrieval of an associated subset of records previously obtained and/or from a live stream of data. In certain examples, the top N records are used for training and inferencing.
  • patients and associated patient data can be post-processed. For example, given that a clinician attends to more than one patient at a given point of time, patients and associated data can be summarized, prioritized, and grouped for easy and quick inferencing of events/outcomes.
  • patients can be prioritized based on a clinical outcome determined according to one or more pre-determined rules. Patients can also be prioritized based on variance of vitals from a nominal value of the cohort to which the patient belongs, where the cohort is determined by one or more aggregation methods, for example.
  • aggregation can be used to provide a high-level summarization of one or more patients being treated.
  • Summarization can also involve aggregation of one or more events occurring in parallel for ease of interpretability.
  • This process of summarization can also be modeled as a learned behavior based on the learning of how a clinician prefers to look at the summarization, for example.
  • trained, deployed AI models can be applied to 1D patient data to convert the patient time series data into a visual indication of a comparative value of the data.
  • processing the 1D time series patient data using an AI model quantifies, qualifies, and/or otherwise compares the data to a normal value or values, a threshold, a trend, and/or other criterion(-ia) to generate a color-coded, patterned, and/or shaded representation of the underlying time series (e.g., waveform, etc.) data.
  • Data can be clustered for a particular patient, and patients can be clustered for a particular group, such as a hospital, department, ward, clinician, office, enterprise, condition, etc.
  • patient(s) and event(s) can be determined from the group of available patients and events for which a clinician and/or healthcare system/device is to be notified for immediate attention, for example.
  • a visualization can be generated from the prioritized data to enable understandable, actionable, display and interaction with the data.
  • Each patient is represented by a block (also referred to as a cluster or set), and each line (also referred to as a bar, strip, stripe, or segment) in the block represents a different 1D data point.
  • a color/pattern/representation of that line conveys an indication of its value/relative value/urgency/categorization/etc. to allow a user to visually appreciate an impact/importance of that data element.
  • certain examples provide an interactive graphical view to visualize patterns, abnormalities, etc., in a large data set across multiple patients, which transforms raw results into visually appreciable indicators.
  • Using a graphical view helps to improve and further enable comparisons between patients, deviation from a reference or standard, identification of patterns, other comparative analysis, etc.
  • a block of patient information can be magnified to drill down into particular waveforms, other particular data, etc., represented by the colored/patterned line(s) in the top level interface.
  • Patterns of the visualization and/or underlying 1D data can be provided for display and interaction via the user interface, as input to another system for diagnosis, treatment, system configuration, stored, etc.
  • certain examples gather 1D time series (e.g., waveform) data from one or more medical devices (e.g., ECG, EEG, ventilator, etc.) and a patient via one or more monitoring devices.
  • Physiological data and other 1D time series signals can be indicative of a physiological condition associated with a body part from which the data is obtained (e.g., because the signal corresponds to electrical activity of the body part, etc.).
  • the time series physiological signal data, machine data, etc., can be processed and used by clinicians for decision making regarding a patient, medical equipment, etc.
  • a variety of waveforms (e.g., ECG, heart rate (HR), respiratory gas movement, central venous pressure, arterial pressure, oxygen fraction, waveform capnography, etc.) can be presented via a data view, such as example data view 600.
  • the patient data can be normalized to provide a graphical representation of relative and/or other comparative values.
  • a normalized value can be converted from an alphanumeric value into a graphical representation of that value (e.g., a color, a pattern, a texture, etc.), and a group or set of values for a patient can be represented as a group or cluster of graphical representations (e.g., a set of colored lines, a combination of patterns and/or textures, etc.) in a block for that particular patient.
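  • a minimal sketch of such a value-to-color conversion, under an assumed three-band color scheme, follows:

```python
# Minimal sketch: converting normalized 1D values into a color-coded
# strip, one color per sample. The three-band color scheme is an
# assumption standing in for the patent's graphical representations.
import numpy as np

def value_to_color(v):
    """Map a value normalized to [0, 1] onto a traffic-light scale."""
    if v < 0.33:
        return "green"   # near-normal
    if v < 0.66:
        return "yellow"  # elevated
    return "red"         # urgent

rng = np.random.default_rng(3)
signal = np.clip(np.abs(rng.normal(0.3, 0.25, 20)), 0.0, 1.0)
strip = [value_to_color(v) for v in signal]  # one segment per sample
print(strip)
```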
  • a graphical user interface can display and provide access to graphical representations for a set or group of patients shown together for visual comparison, interaction, individual processing, comparative processing, sorting, grouping, separation, etc.
  • the graphical user interface (GUI) view of multiple patients can be organized/arranged according to one or more criterion (e.g., duration, location, condition, etc.).
  • such a GUI can arrange blocks or clusters of patient data such that each patient's block is distinct from other adjacent patient blocks.
  • patient blocks or “cases” can be arranged around (e.g., anchored by, displayed with respect to, etc.) a normalization point or common event/threshold, such as an emergency start event, etc.
  • an occurrence of an emergency event such as a stroke, heart attack, low blood pressure, low blood sugar, etc., can be indicated in each of a plurality of patients and used to normalize the patient data blocks with respect to that emergency event.
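  • a minimal sketch of aligning patient blocks around a shared event index, with illustrative series lengths and event positions, follows:

```python
# Minimal sketch: anchoring per-patient series on a shared event index
# so every block's event lands in the same column, as in the aligned
# block views described below. Lengths and indices are illustrative.
import numpy as np

series = {"p1": np.arange(10.0), "p2": np.arange(14.0), "p3": np.arange(8.0)}
event_idx = {"p1": 4, "p2": 9, "p3": 2}  # sample where the event occurred

max_pre = max(event_idx.values())
aligned = {}
for pid, s in series.items():
    pad = max_pre - event_idx[pid]
    aligned[pid] = np.concatenate([np.full(pad, np.nan), s])  # left-pad

for pid, row in aligned.items():
    print(pid, row[: max_pre + 1])  # column max_pre holds each event
```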
  • FIG. 7 illustrates an example graphical user interface 700 including an interactive block representation 710 of patient time series data.
  • each band 720 - 728 in the block 710 corresponds to a particular parameter measured over time using the 1D time series data and transformed into a visual representation of the underlying data.
  • a length of the block representation 710 can be used to identify an outlier, pattern, etc., in comparison to other patient blocks, for example.
  • one or more signals such as electrical signals, gas flow rate and/or volume, liquid flow rate and/or volume, vibration, other mechanical parameter, etc., can be converted into a visual, unit-less representational band 720 - 728 .
  • the data set can be unknown. Where other automations require the data set to be known, with known inputs and expected outputs, certain examples process unknown data to transform the data into a set of visual representations 720 - 728 forming a block 710 characterizing the patient.
  • FIG. 8 depicts an example interface 800 including representations 810 , 820 of a plurality of patients in a healthcare environment (e.g., a hospital, a ward, a department, a clinic, a doctor's office, etc.).
  • each cluster or block 810, 820 corresponds to a patient, and each strip/stripe/bar/band/line/segment 812-818, 822-828 in the respective block 810, 820 represents one variable depicted in a normalized color, pattern, and/or other texture format for the corresponding signal.
  • each strip 812 - 818 , 822 - 828 serves as a pointer to underlying 1D data and/or associated records, actions, etc., and each block 810 , 820 provides a snapshot of patient condition.
  • a position of each block 810 , 820 can be anchored with respect to an identified start or reference event 840 (e.g., indicated by a line 840 in the example interface 800 of FIG. 8 ) to expose variation between patients with respect to that event 840 .
  • patient blocks 810 , 820 can be ordered in the tree 830 according to one or more criterion/characteristic (e.g., location, duration, condition, demographic, etc.).
  • a subset of patient data can be removed for each patient case, such as the top rows 812-814, 822-824 (e.g., 14 rows, etc.) and/or the bottom rows 816-818, 826-828 (e.g., 29 rows, etc.).
  • the blocks 810 , 820 are anchored by the emergence start event 840 and sorted by length of case, for example.
  • one or more patients can be excluded from a “ground truth” set of patient data to be used to train one or more AI models.
  • one or more blocks 810 , 820 that do not align with other blocks 810 , 820 with respect to the event 840 can be excluded from the ground truth data set provided for AI model training and/or testing.
  • Remaining blocks 810 , 820 can be annotated for training, testing, patient evaluation, etc.
  • a clinician, a nurse, etc. can annotate the “clean” data to form a training and/or testing data set.
  • the blocks 810 , 820 can represent 1D data associated with different patients. In other examples, the block 810 , 820 can represent 1D data associated with the same patient acquired at different times.
  • the event 840 is used, for example, to organize patients according to group, location, duration, clinical purpose, etc.
  • the individual “tree” interface 830 can be arranged with a plurality of other tree interfaces to form a “forest” interface.
  • FIG. 9 illustrates an example “forest” or combined interface 900 including a plurality of individual tree interfaces 830 , 910 , 920 .
  • a collection of individual interfaces 830 , 910 , 920 can be compiled to represent a plurality of departments, groups, points in time, instances, etc., of patients in care of a healthcare provider.
  • the forest 900 of trees 830 , 910 , 920 can be arranged for comparison and interaction according to one or more criterion.
  • the composite interface 900 can highlight variability and can pivot on different characteristics, sort on different sizes, etc.
  • Interaction (e.g., zoom, drill-down, process, etc.) with displayed information can be enabled via the interface 900 and/or its component trees 830, 910, 920 of blocks, for example.
  • certain examples provide micro and macro views of multiple patients with respect to multiple variables. For example, given a single variable (e.g., oxygen level below a threshold percentage, etc.) a quick view of applicable patients can be shown along with a time stamp of when a measured value of the variable dropped below (or rose above) a threshold level for the variable. A quick analysis can be conducted with respect to other variables at that time to determine a correlation between the change in one variable with respect to the threshold and change(s) to other variable(s) in the block(s) of patient data.
  • an interface begins with the composite view 900 (e.g., a static image, a dynamic interactive graphic, etc.) across multiple groups/facilities/locations/enterprises, and the system can focus on a particular tree 830 , 910 , 920 of patient/location data.
  • a portion of the tree can be displayed in its interface 800 , such as by magnifying and displaying real captured data signals in the magnified region, block, etc., 810 , 820 .
  • Another level of magnification can provide access to underlying signal data, etc., for example.
  • Blocks 810 , 820 in the tree 830 can be ordered based on duration, procedure, condition, location, etc., and patients are then organized differently within the graphical user interface 800 .
  • segments 812 - 818 , 822 - 828 in a block 810 , 820 can be ordered based on one or more criterion such as duration, procedure, condition, location, demographic, etc., and patient segments 812 - 818 , 822 - 828 in the block 810 , 820 are then organized for display and interaction according to the criterion(-ia).
  • In the “Christmas tree” interface 800, for example, a view of related patients can be provided to enable proper data clean-up decisions as a group before diving into the details of particular patients, issues, procedures, etc.
  • the event indicator 840 can be used as a reference point to align the blocks 810 , 820 of data for each patient in the data set to show an event that occurred at that point in time, when in time the particular event occurred for each patient, what was occurring with other patients when a particular patient experienced the event, and/or other comparative visualization and/or analysis, for example.
  • groups of patients can be represented with respect to a particular event 840 (e.g., a particular group, location, duration, clinical purpose, condition, other clinical event, etc.) in one or more trees 830 , 910 , 920 .
  • Stacked signals form a representation of a patient, and patient representations can be organized with respect to each other based on the event and/or other criterion 840 , for example.
  • the event/criterion 840 allows the same set of patient data to be “stacked” or organized in different ways, for example.
  • the trees 830 , 910 , 920 can be formed from different patient data sets, and/or the trees 830 , 910 , 920 can be formed from the same patient data set.
  • the event/criterion indicator 840 , 915 , 925 can represent a same event/criterion across different sets of patient data and/or can represent a changing event/criterion across the same set of patient data. As such, each event 840 , 915 , 925 triggers a different organization of the same patients in the corresponding tree 830 , 910 , 920 , for example. Each different event 840 , 915 , 925 results in a different tree 830 , 910 , 920 with different patient outliers. Thus, when training an AI model to recognize a particular event 840 , 915 , 925 , a different set of ground truth patient data can be identified and stored with outliers removed, for example.
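  • a minimal sketch of re-stacking the same case set by different criteria, using hypothetical record fields, follows:

```python
# Minimal sketch: the same set of patient cases "stacked" differently
# per chosen criterion, yielding a different tree ordering (and
# different outliers) each time. Record fields are hypothetical.
cases = [
    {"id": "p1", "duration": 42, "location": "ICU", "condition": "cardiac"},
    {"id": "p2", "duration": 17, "location": "ER",  "condition": "stroke"},
    {"id": "p3", "duration": 63, "location": "ICU", "condition": "stroke"},
]

by_duration = sorted(cases, key=lambda c: c["duration"])
by_location = sorted(cases, key=lambda c: (c["location"], c["duration"]))
print([c["id"] for c in by_duration])  # one ordering of the patients
print([c["id"] for c in by_location])  # a different ordering
```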
  • different groups of patient data can be formed, processed, transformed into visualizations, and analyzed to determine patient patterns, trends, issues, and appropriate data for AI model training, for example.
  • patients can be arranged in different groups to treat each group of patients separately.
  • patients in cardiology can form one group, and patients with broken bones can form another group.
  • by processing and transforming, without prior knowledge, the data for a particular group and event/criterion 840, 915, 925 into a visualization and grouping, common features can be understood for the particular group, and outliers can be investigated, eliminated, etc.
  • a group of patients can be analyzed with respect to an anesthesia event 840 , 915 , 925 .
  • the event 840 , 915 , 925 can be an anesthesia “on” event or an anesthesia “off” event, for example.
  • for an anesthesia “off” event, a goal is to determine an end to a procedure, so patients can be taken off their anesthesia and moved from a surgical suite to a post-operative recovery area. From the tree 830, 910, 920 view, patients undergoing the same procedure can be compared based on the anesthesia-off trigger event 840, 915, 925, for example.
  • the same patient undergoing a procedure multiple times or undergoing different procedures with a same trigger event 840 , 915 , 925 can be visually compared.
  • when patients are organized with respect to the same event 840, 915, 925, such as removal of anesthesia, their procedure duration, responsiveness, and/or other characteristics can be evaluated.
  • patient data can be used to form ground truth or known, verified patient data to be relied upon for training and/or testing an AI model. Patterns or trends in the data can also be analyzed for cause and effect and associated adjustment to patient diagnosis, treatment, etc. Patients not following a pattern (e.g., outliers or anomalies, etc.) can be discarded or ignored for the training/test data set, for example.
  • a group and/or subgroup of patients can be selected to trigger extraction of a data set to output for training, testing, etc., of one or more AI models.
  • Selection of a subset of a tree 830 , 910 , 920 via the interface 900 can trigger extraction and transmission (e.g., to be stored, to be used by a model generator/processor, etc.) of the data set associated with the subset, for example.
  • the data trees can be used to identify and evaluate individual patient information as well as determine group characteristics as with the example interfaces 800 , 900 .
  • a user can formulate a reliable data set for training and/or testing of an AI model and also leverage the data as actionable data for patient diagnosis, treatment, etc.
  • FIGS. 10A-10E illustrate a sequence of user interface screens corresponding to an example workflow for anomaly detection in patient data.
  • a multi-patient view interface 1000 provides representations 1010 - 1020 for a plurality of patients dynamically showing associated vitals and/or other physiological data (e.g., heart rate, blood pressure, oxygen saturation, etc.) including one or more warnings 1030 , 1032 , where applicable, for the respective patient.
  • the multi-patient view 1000 shows a real-time (or substantially real time given memory and/or processor latency, data transmission time, etc.) digest of physiological signals recorded over a period of time (e.g., the last five minutes, last ten minutes, last minute, etc.) for multiple patients.
  • the patients shown in the multi-patient view 1000 can be associated with the patient representations shown in a tree 830 , 910 , 920 , for example.
  • a patient representation 1010 - 1020 can be selected to trigger an expanded single-patient view 1040 , such as shown in the example of FIG. 10B , showing an expanded view of the representation 1020 for the selected patient.
  • a doctor can click one of the displayed patient representations 1010 - 1020 to see more real-time signals from that patient in the single patient view 1040 of the example of FIG. 10B .
  • the signals can convey phases of a patient's care such as the induction, maintenance, and emergence phases of the patient's anesthesia, for example.
  • the single-patient view 1040 can include a prioritized event 1042 .
  • the example single-patient view 1040 can also include a button, icon, or other trigger 1045 to view a patient history for the patient displayed in the single view interface 1040 .
  • An example patient history view 1050, such as shown in the example of FIG. 10C, presents collected physiological signals for the patient over a given interval (e.g., in the past hour, the past 5 hours, the past 8 hours, etc.).
  • one or more AI constructs can process the 1D time series waveform data to formulate a block 1055 of visual values 1060 - 1068 for display.
  • This view helps identify and highlight anomaly conditions detected by the AI clinical detection models.
  • the patient was detected and highlighted as having both sleep apnea 1070 and seizure 1072, as demonstrated by the anomaly or change 1070, 1072 in the value of the respective signal 1060-1068.
  • the example interface of FIG. 10C transforms data into visual representations over a certain period of time, such as morning, afternoon, overnight, etc.
  • Signal acquisition and transformation can be repeated at a different time of day, different day, same day of the week but a week later, etc., to provide a plurality of visual representations for comparison.
  • the representations can be compared for the same patient, different patients undergoing the same procedure, etc.
  • the representations can be stacked to form a tree 830 , 910 , 920 , for example.
  • Selecting the indication of seizure 1072 triggers display of an example interface 1080, shown in FIG. 10D, to provide further detail regarding the event/anomaly 1072 in the patient data stripe 1068.
  • When the anomaly 1072 is a seizure with respect to a patient, the detail interface view 1080 displays the waveform data associated with the anomaly 1072 represented in the processed patient data stripe 1068.
  • FIG. 10E provides an example graphical user interface 1090 providing a probability of seizure at a certain power over a period of time.
  • a user can trigger processing of the waveform from the interface 1080 of FIG. 10D to generate a results interface 1090 providing an analysis of the processed waveform data.
  • the results can be interactive to drive detection, prediction, evaluation of causation, confidence score, etc.
  • FIGS. 10A-10E illustrate a new, interactive, dynamic user interface to allow correlation, processing, and viewing of a plurality of sets of patient data, focus on one set of patient data, concentration on a subset of such patients, in-depth review of a particular patient, and a deep dive into source 1D data and associated analysis.
  • the series of interfaces 1000 , 1040 can replace the prior interface upon opening, pop-up and/or otherwise overlay the prior interface upon opening, etc.
  • the interface allows a patient and/or group of patients to be analyzed, diagnosed, treated, etc., and also facilitates transformation of gathered patient data into a verified data set for training, testing, etc., of AI model(s), for example.
  • FIG. 11 illustrates an example time series data visualization system or apparatus 1100 .
  • the example system 1100 can be used to process 1D time series data from one or more patients to generate interactive visualization interfaces, such as the example interfaces of FIGS. 6 - 10 E.
  • the example system 1100 includes a communication interface 1110 , an input processor 1120 , a data processor 1130 , a model builder 1140 , a model deployer 1150 , a visualization processor 1160 , a user interface builder 1170 , and an interaction processor 1180 .
  • the example system 1100 transforms data gathered from one or more medical devices, patient monitors, etc., into interactive graphical representations that provide a visual indication of content, status, severity, relevance, etc.
  • the example system 1100 enables a new form of display and interaction with the interactive graphical representations and underlying time series data via a graphical user interface to manipulate the graphical representations individually, in blocks or clusters, with respect to multiple patients, with respect to a reference event, etc.
  • the example communication interface 1110 is to send and receive data to/from one or more sources such as sensors, other monitoring devices, medical devices, other machines, information systems, imaging systems, archives, etc.
  • the example input processor 1120 is to clean (e.g., remove outlier data, interpolate missing data, adjust data format, etc.), normalize (e.g., with respect to a normal value, reference value, standard value, threshold, etc.) and/or otherwise process incoming data (e.g., monitored patient physiological data, logged machine data, electronic medical record data, etc.) for further processing by the system 1100 .
  • the example data processor 1130 processes the normalized and/or otherwise preprocessed data from the input processor 1120 to complete the normalization of data begun by the input processor, compare data provided by the input processor 1120 and/or directly from the communication interface 1110 , prepare data for modeling (e.g., for training and/or testing a machine learning model, for visualization, for computer-aided diagnosis and/or detection, etc.), etc.
  • the data processor 1130 can process data to convert the data into a graphical representation of relative or normalized values over time for a parameter or characteristic associated with the data (e.g., associated with a stream of 1D time series data, etc.).
  • the visualization processor 1160 converts the data into one or more graphical representations for visual review, comparison, interaction, etc.
  • the example model builder 1140 builds a machine learning model (e.g., trains and tests a supervised machine learning neural network and/or other learning model, etc.) using data from the communication interface 1110 , input processor 1120 , and/or data processor 1130 .
  • the model builder 1140 can leverage normalized data, data transformed into the relative graphical visualization, etc., to train a machine learning model to correlate output(s) with input(s) and test the accuracy of the model.
  • the example model deployer 1150 can deploy an executable network model once the model builder 1140 is satisfied with the training and testing.
  • the deployed model can be used to process data, correlate an output (e.g., a graphical representation, identification of an anomaly, identification of a trend, etc.) with input data, convert waveform data to a relative graphical representation, etc.
  • the visualization processor 1160 converts one-dimensional time-series data into one or more graphical representations for visual review, comparison, interaction, etc.
  • the visualization processor 1160 organizes and correlates graphical representations with respect to a patient, a reference/emergency/triggering event, etc.
  • the example visualization processor 1160 can be used to process the graphical representations of one or more data series (e.g., 1D time series data, other waveform data, other data, etc.) into one or more visual constructs such as blocks/clusters 810 , 820 , strips/bands/lines/segments 812 - 818 , etc.
  • the example visualization processor 1160 can correlate blocks, strips, etc., based on patient, location/organization/cohort, emergency event, other reference event or marker, etc.
  • the example user interface builder 1170 can construct an interactive graphical user interface from the graphical representations, model, and/or other data available in the system 1100 .
  • the interface builder 1170 can generate one or more interfaces such as in the examples of FIGS. 6-10E and can generate a linked combination of interfaces such as shown in the example of FIGS. 10A-10E .
  • the example interaction processor 1180 triggers user interface displays, data manipulation, graphical representation manipulation, processing of data, access to external system(s)/process(es), data transfer, storage, reporting, etc., via the one or more interfaces 700 - 1080 such as shown in the examples of FIGS. 6-10E .
  • FIG. 12 is a flow diagram of an example method 1200 to process 1D time series data.
  • raw time series data is processed.
  • 1D waveform data from one or more sensors attached to and/or otherwise monitoring a patient, a medical device, other equipment, a healthcare environment, etc., can be processed by the example input processor 1120 to identify the data (e.g., type of data, format of data, source of data, etc.) and route the data appropriately.
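  • a minimal sketch of type-based routing in the spirit of the input processor, with assumed handler and registry names, follows:

```python
# Minimal sketch: identifying incoming 1D data by type and routing it
# to a handler, in the spirit of the input processor described above.
# The registry and handler names are assumptions.
def handle_ecg(samples):
    return ("ecg", len(samples))

def handle_ventilator(samples):
    return ("ventilator", len(samples))

ROUTES = {"ecg": handle_ecg, "ventilator": handle_ventilator}

def route(record):
    kind = record["type"]  # e.g., parsed from device metadata
    return ROUTES[kind](record["data"])

print(route({"type": "ecg", "data": [0.10, 0.20, 0.15]}))
```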
  • a processing method to be applied to the data is determined.
  • the processing method can be dynamically determined by the data processor 1130 based on the type of the data, source of the data, reason for exam, patient status, type of patient, associated healthcare professional, associated healthcare environment, etc.
  • the processing method can be a bottom-up processing method or a top-down processing method, for example.
  • the data is cleaned.
  • the data can be cleaned by the data processor 1130 to normalize the data with respect to other data and/or a reference/standard value.
  • the data can be cleaned by the data processor 1130 to interpolate missing data in the time series, for example.
  • the data can be cleaned by the data processor 1130 to adjust a format of the data, for example.
  • outliers in the data are identified and filtered. For example, outlier data points that fall beyond a boundary, threshold, standard deviation, etc., are filtered (e.g., removed, separated, reduced, etc.) from the data being processed.
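  • a minimal sketch of standard-deviation-based outlier filtering, with an assumed 3-sigma boundary, follows:

```python
# Minimal sketch: filtering outlier samples beyond a standard-deviation
# boundary, one of the cleaning steps named above. The 3-sigma
# threshold and the injected spikes are assumptions.
import numpy as np

rng = np.random.default_rng(4)
data = rng.normal(70.0, 5.0, 500)  # e.g., heart-rate samples
data[::100] += 60.0                # inject a few artificial outliers

mu, sigma = data.mean(), data.std()
mask = np.abs(data - mu) <= 3 * sigma
clean = data[mask]
print(f"removed {int(np.sum(~mask))} of {data.size} samples")
```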
  • a model is built using the data.
  • the example model builder 1140 builds a machine learning model (e.g., trains and tests a supervised machine learning neural network and/or other learning model such as an unsupervised learning model, a deep learning model, a reinforcement learning model, a hybrid reinforcement learning model, etc.) using data from the communication interface 1110 , input processor 1120 , and/or data processor 1130 .
  • the model builder 1140 can leverage normalized data, data transformed into the relative graphical visualization, etc., to train a machine learning model to correlate output(s) with input(s) and test the accuracy of the model.
  • the model is deployed.
  • the example model deployer 1150 can deploy an executable network model once the model builder 1140 is satisfied with the training and testing.
  • the deployed model can be used to process data, correlate an output (e.g., a graphical representation, identification of an anomaly, identification of a trend, etc.) with input data, convert waveform data to a relative graphical representation, etc.
  • feedback is captured from use of the deployed model.
  • feedback can be captured from the deployed model itself, feedback can be captured from an application using the model, feedback can be captured from a human user, etc.
  • the example visualization processor 1160 can be used to process the data to transform the source waveform and/or other 1D time series data into graphical representations.
  • the visualization processor 1160 can normalize and/or otherwise clean the data and transform the 1D data into one or more visual constructs such as blocks/clusters 810 , 820 , strips/lines/bands/segments 812 - 818 , etc.
  • the example visualization processor 1160 can correlate blocks, strips, etc., based on patient, location/organization/cohort, emergency event, other reference event or marker, etc.
  • outliers in the data are identified and filtered. For example, outlier data points that fall beyond a boundary, threshold, standard deviation, etc., are filtered (e.g., removed, separated, reduced, etc.) by the data processor 1130 from the data being processed. Filtering and/or other removal of outliers can be automatic by the data processor 1130 and/or can be triggered by interaction with the interface, data visualization, etc.
  • a model is built using the data.
  • the example model builder 1140 builds a model (e.g., trains and tests a supervised machine learning neural network and/or other learning model such as an unsupervised learning model, a deep learning model, a reinforcement learning model, a hybrid reinforcement learning model, etc.) using data and associated graphical representations to cluster representations for a patient, group patients together in relative alignment around a trigger event (e.g., an emergency condition, an anomaly, a particular physiological value, etc.).
  • the model can thereby learn how and when to group similar or dissimilar graphical representations, highlight anomalies in a visual manner, etc.
  • the model is deployed.
  • the example model deployer 1150 can deploy an executable model once the model builder 1140 is satisfied with the training and testing.
  • the deployed model can be used to process data, correlate an output (e.g., a graphical representation, identification of an anomaly, identification of a trend, etc.) with input data, convert waveform data to a relative graphical representation, comparatively organize graphical representations according to one or more criteria, etc.
  • a graphical visualization can be generated from an output of the model.
  • the model can be used to output prediction and/or detection results based on time-series data, and the output can be visualized graphically such as using the visualization processor 1160 .
  • feedback is captured from use of the deployed model.
  • feedback can be captured from the deployed model itself, feedback can be captured from an application using the model, feedback can be captured from a human user, etc.
  • FIG. 13 is a flow diagram of an example method 1300 for dynamic generation and manipulation of a graphical user interface including visual, graphical representations of one-dimensional time-series data.
  • time-series data is processed to normalize the data with respect to one or more reference values.
  • value(s) of the time-series data waveforms and/or other one-dimensional data stream can be adjusted (e.g., normalized) with respect to a reference value such as a normal value, a standard value, an accepted average value, an expected value, etc.
  • the normalized data then expresses a degree or magnitude of difference from the reference value(s), which enables improved comparison of values, triggering of alerts, highlighting of anomalies, etc.
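  • a minimal sketch of reference-based normalization and alerting, with illustrative reference and threshold values, follows:

```python
# Minimal sketch: normalizing a 1D signal against a reference value so
# the result expresses magnitude of deviation, enabling comparison and
# alerting. The reference and threshold values are illustrative.
import numpy as np

hr = np.array([71.0, 74.0, 90.0, 120.0, 68.0])  # raw heart-rate samples
reference = 70.0                                # assumed "normal" value

deviation = (hr - reference) / reference        # signed relative deviation
alerts = np.abs(deviation) > 0.5                # flag >50% departures
print(deviation.round(2), alerts)
```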
  • the normalized data is converted into one or more graphical representations of the underlying normalized 1D data.
  • normalized 1D time series data values can be provided to a deep learning model, such as an RL model, DL model, hybrid RL+DL model, etc., to convert the numerical value into a visual, graphical representation such as a line, strip, stripe, segment, bar, or band.
  • normalized heart rate waveform data can be fed into a hybrid RL+DL model to form a contiguous bar or strip graphical representation showing a trend, relative importance, anomaly, etc., in the underlying heart rate waveform data.
  • a set of waveform data for a patient can be converted into a plurality of graphical representations (e.g., heart rate, blood pressure, lung volume, brain activity, etc.), for example.
  • normalized data is converted into a comparative visual representation based on a color, shading, texture, pattern, etc.
  • graphical representations are clustered for a given patient. For example, graphical representations of heart rate, blood pressure, brain wave activity, lung activity, etc., can be gathered together or clustered to be represented as a block of graphical representations for the patient.
  • patient clusters are arranged with respect to a reference event. For example, a reference event, such as a stroke, seizure, fire, etc., can be used to align a plurality of patient clusters for visual comparison as to a point in the collection of data corresponding to the graphical representation at which the reference event occurred.
  • the arranged clusters/blocks are displayed via a graphical user interface.
  • For example, as shown in the example interfaces of FIGS. 7-10E, blocks of graphical representations are displayed via the user interface for interaction alone, in conjunction with a reference event, in comparison with other blocks/clusters, etc.
  • interaction with the blocks and constituent lines of graphical representation is facilitated via the graphical user interface.
  • a patient cluster or block can be selected for further review/interaction.
  • An individual line of graphical representation can be selected for further review/interaction.
  • multiple blocks for a single patient can be selected and/or blocks representing multiple patients can be selected.
  • An anomaly within a graphical representation of particular 1D data can be selected for review of/interaction with underlying 1D time series data, for example.
  • all or some of the displayed representations can be selected to trigger generation of a data set for training and/or testing of one or more AI models.
  • an action is triggered with respect to underlying data based on the interaction with the graphical representation(s) of the user interface displayed.
  • associated time series data can be processed, combined with other 1D data, transmitted to another process/system, stored/reported in an electronic medical record, converted to an order (e.g., for labs, monitoring, etc.), etc.
  • graphical representations selected to form a data set for training and/or testing of one or more AI models can be annotated via interaction to form a “ground truth” data set for model training, testing, etc.
  • the user interface is updated based on interaction, triggered action, etc.
  • a change in the data, combination of data, further physiological and/or device monitoring, etc. can result in a change in graphical representation, an addition or subtraction of graphical representation, highlighting of an anomaly, identification of a correlation, etc., updated and displayed via the graphical user interface.
  • FIG. 14 is a flow diagram of an example method 1400 to facilitate interaction with graphical representations arranged and displayed via a graphical user interface (e.g., block 1312 of the example of FIG. 13 ).
  • input (e.g., user selection, program execution, access by another system or device, etc.) with respect to the graphical user interface is processed.
  • interaction with a patient cluster of graphical representations is enabled.
  • a user and/or other program, device, system, etc. can interact with a patient cluster or block 810 , 820 .
  • the block 810 , 820 can be analyzed as a group or set of individual graphical representation lines/strips 812 - 818 , 822 - 828 to determine pattern(s) for a patient, compare patients, reorder and/or otherwise adjust comparative positioning of patient blocks 810 , 820 , etc.
  • patient blocks 810 , 820 can be positioned adjacent to each other to trigger a comparison of values.
  • a reference or triggering event 840 can be activated with respect to patient blocks 810, 820 to trigger automated alignment of the blocks 810, 820 with respect to the event indicator 840.
  • interaction with a graphical representation is enabled.
  • a user and/or other program, device, system, etc. can interact with a strip 812 - 818 , 822 - 828 to drill down to underlying data (e.g., as shown in the examples of FIGS. 6, 10D, 10E , etc.).
  • Strips 812 - 818 can be selected for grouping into a data set for annotation and AI model training and/or testing, etc., for example.
  • interaction with an anomaly in a graphical representation is enabled.
  • a user and/or other program, device, system, etc. can select an anomaly (e.g., the anomaly 1072 in the strip 1068 ) to view underlying signal data (e.g., as shown in the example of FIG. 10D ), trigger analytics processing with respect to the selected anomaly (e.g., as shown in the example of FIG. 10E ), etc.
  • an anomaly or outlier can be excluded from a data set to be formed for AI model training, testing, etc.
  • interaction with the blocks, graphical representation elements, anomaly, etc. is processed.
  • additional data, underlying detail, application execution, rearrangement of elements on the graphical user interface, etc. can be processed based on the interaction at block 1404 , 1406 , and/or 1408 .
  • Control then reverts to block 1314 to trigger action with respect to underlying data based on the interaction.
  • a graphical user interface can transition from any interface shown in FIGS. 5-10E to any other interface shown in FIGS. 5-10E .
  • navigation can begin with the multi-patient view of FIG. 10A , from which a single patient can be selected to access the single patient view of FIG. 10B .
  • demographic data, historic information, vitals, captured signal data, etc. can be displayed.
  • From the single patient view, a block graphical representation (e.g., FIG. 10C) can be accessed, and a graphical representation line or band within the block can be selected to show the underlying signal data used to form the graphical representation (FIG. 10D).
  • the multi-patient tree representational view of FIGS. 8 and/or 9 can be triggered by interaction with the block to show the patient's representation in comparison to graphical representations of other patients, for example.
  • navigation begins with a multi-patient graphical representation such as the tree of FIG. 8 , the forest of FIG. 9 , etc.
  • Selection of a block within the multi-patient graphical representation transforms the display to a single patient representation of the associated block such as shown in the example of FIG. 7 .
  • an individual graphical representation can be selected to display the underlying 1D signal data forming the graphical representation (e.g., FIG. 10D ).
  • selection of the block can trigger generation of a single-patient view, such as the single patient interface view of FIG. 10B , to show information for the patient including signal waveforms forming the graphical representations of the block, for example.
  • components disclosed and described herein can be implemented by hardware, machine readable instructions, software, firmware and/or any combination of hardware, machine readable instructions, software and/or firmware.
  • components disclosed and described herein can be implemented by analog and/or digital circuit(s), logic circuit(s), programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)).
  • At least one of the components is/are hereby expressly defined to include a tangible computer readable storage device or storage disk such as a memory, a digital versatile disk (DVD), a compact disk (CD), a Blu-ray disk, etc. storing the software and/or firmware.
  • the machine readable instructions include a program for execution by a processor.
  • the program may be embodied in machine readable instructions stored on a tangible computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a digital versatile disk (DVD), a Blu-ray disk, or a memory associated with the processor, but the entire program and/or parts thereof could alternatively be executed by a device other than the processor and/or embodied in firmware or dedicated hardware.
  • Although the example program is described with reference to flowchart(s), many other methods of implementing the components disclosed and described herein may alternatively be used.
  • the example process(es) can be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a tangible computer readable storage medium such as a hard disk drive, a flash memory, a read-only memory (ROM), a compact disk (CD), a digital versatile disk (DVD), a cache, a random-access memory (RAM) and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information).
  • a tangible computer readable storage medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media.
  • The terms “tangible computer readable storage medium” and “tangible machine readable storage medium” are used interchangeably.
  • the example process(es) can be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium such as a hard disk drive, a flash memory, a read-only memory, a compact disk, a digital versatile disk, a cache, a random-access memory and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information).
  • non-transitory computer readable medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media.
  • When the phrase “at least” is used as the transition term in a preamble of a claim, it is open-ended in the same manner as the term “comprising” is open-ended.
  • the term “including” is open-ended in the same manner as the term “comprising” is open-ended.
  • FIG. 15 is a block diagram of an example processor platform 1500 structured to execute the instructions of FIGS. 12-14 to implement, for example the example apparatus 1100 of FIG. 11 .
  • the processor platform 1500 can be, for example, a server, a personal computer, a workstation, a self-learning machine (e.g., a neural network), a mobile device (e.g., a cell phone, a smart phone, a tablet such as an iPad™), a personal digital assistant (PDA), an Internet appliance, a gaming console, a personal video recorder, a set top box, a headset or other wearable device, or any other type of computing device.
  • the processor platform 1500 of the illustrated example includes a processor 1512 .
  • the processor 1512 of the illustrated example is hardware.
  • the processor 1512 can be implemented by one or more integrated circuits, logic circuits, microprocessors, GPUs, DSPs, or controllers from any desired family or manufacturer.
  • the hardware processor may be a semiconductor based (e.g., silicon based) device.
  • the processor 1512 implements the example apparatus 1100 but can also be used to implement other systems disclosed herein such as systems 100 , 200 , 300 , 400 , etc.
  • the processor 1512 of the illustrated example includes a local memory 1513 (e.g., a cache).
  • the processor 1512 of the illustrated example is in communication with a main memory including a volatile memory 1514 and a non-volatile memory 1516 via a bus 1518 .
  • the volatile memory 1514 may be implemented by SDRAM, DRAM, RDRAM®, and/or any other type of random access memory device.
  • the non-volatile memory 1516 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 1514 , 1516 is controlled by a memory controller.
  • the processor platform 1500 of the illustrated example also includes an interface circuit 1520 .
  • the interface circuit 1520 may be implemented by any type of interface standard, such as an Ethernet interface, a USB, a Bluetooth® interface, an NFC interface, and/or a PCI express interface.
  • one or more input devices 1522 are connected to the interface circuit 1520 .
  • the input device(s) 1522 permit(s) a user to enter data and/or commands into the processor 1512 .
  • the input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, isopoint, and/or a voice recognition system.
  • One or more output devices 1524 are also connected to the interface circuit 1520 of the illustrated example.
  • the output devices 1524 can be implemented, for example, by display devices (e.g., an LED, an OLED, an LCD, a CRT display, an IPS display, a touchscreen, etc.), a tactile output device, a printer, and/or speaker.
  • the interface circuit 1520 of the illustrated example thus typically includes a graphics driver card, a graphics driver chip, and/or a graphics driver processor.
  • the interface circuit 1520 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 1526 .
  • the communication can be via, for example, an Ethernet connection, a DSL connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-sight wireless system, a cellular telephone system, etc.
  • the processor platform 1500 of the illustrated example also includes one or more mass storage devices 1528 for storing software and/or data.
  • mass storage devices 1528 include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, RAID systems, and DVD drives.
  • the machine executable instructions 1532 of FIGS. 12-14 may be stored in the mass storage device 1528 , in the volatile memory 1514 , in the non-volatile memory 1516 , and/or on a removable non-transitory computer readable storage medium such as a CD or DVD.
  • a deep learning model can convert one-dimensional data from monitoring of a patient, medical device(s), medical equipment, information system(s), etc., into a comparative graphical representation, such as a gradient-based graphical representation visually indicating a change in value over time for the respective data source/value.
  • the disclosed methods, apparatus and articles of manufacture are accordingly directed to one or more improvement(s) in the functioning of a computer and/or other processor and its associated interface.
  • the apparatus, methods, systems, instructions, and media disclosed herein are not implementable in a human mind and are not able to be manually implemented by a human user.

Abstract

Systems, apparatus, instructions, and methods for medical machine time-series event data processing are disclosed. An example apparatus includes a data processor to process one-dimensional data captured over time with respect to patient(s). The example apparatus includes a visualization processor to transform the processed data into graphical representations and to cluster the graphical representations into at least first and second blocks arranged with respect to an indicator of a criterion to provide a visual comparison of the first block and the second block with respect to the criterion. The example apparatus includes an interaction processor to facilitate interaction, via a graphical user interface, with the first and second blocks of graphical representations to extract a data set for processing from at least a subset of the first and second blocks.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This patent arises from U.S. Provisional Patent Application Ser. No. 62/838,022, which was filed on Apr. 24, 2019. U.S. Provisional Patent Application Ser. No. 62/838,022 is hereby incorporated herein by reference in its entirety. Priority to U.S. Provisional Patent Application Ser. No. 62/838,022 is hereby claimed.
  • FIELD OF THE DISCLOSURE
  • This disclosure relates generally to medical data visualization and, more particularly, to visualization of medical device event processing.
  • BACKGROUND
  • The statements in this section merely provide background information related to the disclosure and may not constitute prior art.
  • Healthcare environments, such as hospitals or clinics, include information systems, such as hospital information systems (HIS), radiology information systems (RIS), clinical information systems (CIS), and cardiovascular information systems (CVIS), and storage systems, such as picture archiving and communication systems (PACS), library information systems (LIS), and electronic medical records (EMR). Information stored can include patient medication orders, medical histories, imaging data, test results, diagnosis information, management information, and/or scheduling information, for example. A wealth of information is available, but the information can be siloed in various separate systems requiring separate access, search, and retrieval. Correlations between healthcare data remain elusive due to technological limitations on the associated systems.
  • Further, when data is brought together for display, the amount of data can be overwhelming and confusing. Such data overload presents difficulties when trying to display the information, and competing priorities put a premium on available screen real estate. Existing solutions are deficient in addressing these and other related concerns.
  • BRIEF DESCRIPTION
  • Systems, apparatus, instructions, and methods for medical machine time-series event data processing are disclosed.
  • Certain examples provide a time series data visualization apparatus including a data processor to process one-dimensional data captured over time with respect to one or more patients, the data processed to normalize the data with respect to a reference. The example apparatus includes a visualization processor to transform the processed data into a plurality of graphical representations visually indicating a change over time in the data and to cluster the plurality of graphical representations into at least a first block and a second block arranged with respect to an indicator of a criterion to provide a visual comparison of the first block and the second block with respect to the criterion. The example apparatus includes an interface builder to construct a graphical user interface to display the at least first and second blocks of graphical representations. The example apparatus includes an interaction processor to facilitate interaction, via the graphical user interface, with the first and second blocks of graphical representations to extract a data set for processing from at least a subset of the first and second blocks.
  • Certain examples provide a tangible computer-readable storage medium including instructions that, when executed, cause at least one processor to at least: process one-dimensional data captured over time with respect to one or more patients, the data processed to normalize the data with respect to a reference; transform the processed data into a plurality of graphical representations visually indicating a change over time in the data; cluster the plurality of graphical representations into at least a first block and a second block arranged with respect to an indicator of a criterion to provide a visual comparison of the first block and the second block with respect to the criterion, the first block, the second block, and the indicator to be displayed via a graphical user interface; and facilitate interaction, via the graphical user interface, with the first and second blocks of graphical representations to extract a data set for processing from at least a subset of the first and second blocks.
  • Certain examples provide a computer-implemented method for medical machine time-series event data processing and visualization. The example method includes processing one-dimensional data captured over time with respect to one or more patients, the data processed to normalize the data with respect to a reference. The example method includes transforming the processed data into a plurality of graphical representations visually indicating a change over time in the data. The example method includes clustering the plurality of graphical representations into at least a first block and a second block arranged with respect to an indicator of a criterion to provide a visual comparison of the first block and the second block with respect to the criterion. The example method includes facilitating interaction, via a graphical user interface, with the first and second blocks of graphical representations to extract a data set for processing from at least a subset of the first and second blocks.
  • BRIEF DESCRIPTION OF DRAWINGS
  • FIG. 1 is a block diagram of an example system including medical devices and associated monitoring devices for a patient.
  • FIG. 2 is a block diagram of an example system to process machine and physiological data and apply one or more machine learning models to predict future events from the data.
  • FIG. 3 is a block diagram of an example system to process machine and physiological data and apply one or more machine learning models to detect events that have occurred.
  • FIGS. 4A-4D depict example artificial intelligence models.
  • FIG. 5 illustrates an example visualization of data provided from multiple sources.
  • FIGS. 6-10E illustrate example interfaces displaying one-dimensional patient data and associated analysis for interaction and processing.
  • FIG. 11 illustrates an example time series data visualization system.
  • FIGS. 12-14 illustrate flow diagrams of example methods to process one-dimensional time series data using the example system(s) of FIGS. 1-4 and/or 11.
  • FIG. 15 is a block diagram of an example processor platform capable of executing instructions to implement the example systems and methods disclosed and described herein.
  • DETAILED DESCRIPTION OF CERTAIN EMBODIMENTS
  • In the following detailed description, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration specific examples that may be practiced. These examples are described in sufficient detail to enable one skilled in the art to practice the subject matter, and it is to be understood that other examples may be utilized and that logical, mechanical, electrical and other changes may be made without departing from the scope of the subject matter of this disclosure. The following detailed description is, therefore, provided to describe an exemplary implementation and not to be taken as limiting on the scope of the subject matter described in this disclosure. Certain features from different aspects of the following description may be combined to form yet new aspects of the subject matter discussed below.
  • When introducing elements of various embodiments of the present disclosure, the articles “a,” “an,” “the,” and “said” are intended to mean that there are one or more of the elements. The terms “first,” “second,” and the like, do not denote any order, quantity, or importance, but rather are used to distinguish one element from another. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements. As the terms “connected to,” “coupled to,” etc. are used herein, one object (e.g., a material, element, structure, member, etc.) can be connected to or coupled to another object regardless of whether the one object is directly connected or coupled to the other object or whether there are one or more intervening objects between the one object and the other object.
  • As used herein, the terms “system,” “unit,” “module,” “engine,” etc., may include a hardware and/or software system that operates to perform one or more functions. For example, a module, unit, or system may include a computer processor, controller, and/or other logic-based device that performs operations based on instructions stored on a tangible and non-transitory computer readable storage medium, such as a computer memory. Alternatively, a module, unit, engine, or system may include a hard-wired device that performs operations based on hard-wired logic of the device. Various modules, units, engines, and/or systems shown in the attached figures may represent the hardware that operates based on software or hardwired instructions, the software that directs hardware to perform the operations, or a combination thereof.
  • As used herein, singular references (e.g., “a”, “an”, “first”, “second”, etc.) do not exclude a plurality. The term “a” or “an” entity, as used herein, refers to one or more of that entity. The terms “a” (or “an”), “one or more”, and “at least one” can be used interchangeably herein. Furthermore, although individually listed, a plurality of means, elements or method actions may be implemented by, e.g., a single unit or processor. Additionally, although individual features may be included in different examples or claims, these may possibly be combined, and the inclusion in different examples or claims does not imply that a combination of features is not feasible and/or advantageous.
  • The term “and/or” when used, for example, in a form such as A, B, and/or C refers to any combination or subset of A, B, C such as (1) A alone, (2) B alone, (3) C alone, (4) A with B, (5) A with C, (6) B with C, and (7) A with B and with C. As used herein in the context of describing structures, components, items, objects, and/or things, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing structures, components, items, objects and/or things, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. As used herein in the context of describing the performance or execution of processes, instructions, actions, activities and/or steps, the phrase “at least one of A and B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B. Similarly, as used herein in the context of describing the performance or execution of processes, instructions, actions, activities, and/or steps, the phrase “at least one of A or B” is intended to refer to implementations including any of (1) at least one A, (2) at least one B, and (3) at least one A and at least one B.
  • In addition, it should be understood that references to “one embodiment” or “an embodiment” of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features.
  • Medical data can be obtained from imaging devices, sensors, laboratory tests, and/or other data sources. Alone or in combination, medical data can assist in diagnosing a patient, treating a patient, forming a profile for a patient population, influencing a clinical protocol, etc. However, to be useful, medical data must be organized properly for analysis and correlation beyond a human's ability to track and reason. Computers and associated software and data constructs can be implemented to transform disparate medical data into actionable results.
  • For example, imaging devices (e.g., gamma camera, positron emission tomography (PET) scanner, computed tomography (CT) scanner, X-Ray machine, magnetic resonance (MR) imaging machine, ultrasound scanner, etc.) generate two-dimensional (2D) and/or three-dimensional (3D) medical images (e.g., native Digital Imaging and Communications in Medicine (DICOM) images) representative of the parts of the body (e.g., organs, tissues, etc.) to diagnose and/or treat diseases. Other devices such as electrocardiogram (ECG) systems, echoencephalograph (EEG), pulse oximetry (SpO2) sensors, blood pressure measuring cuffs, etc., provide one-dimensional waveform and/or time series data regarding a patient.
  • Acquisition, processing, analysis, and storage of time-series data (e.g., one-dimensional waveform data, etc.) obtained from one or more medical machines and/or devices play an important role in diagnosis and treatment of patients in a healthcare environment. Devices involved in the workflow can be configured, monitored, and updated throughout operation of the medical workflow. Machine learning can be used to help configure, monitor, and update the medical workflow and devices.
  • Machine learning techniques, whether deep learning networks or other experiential/observational learning systems, can be used to characterize and otherwise interpret, extrapolate, conclude, and/or complete acquired medical data from a patient, for example. Deep learning is a subset of machine learning that uses a set of algorithms to model high-level abstractions in data using a deep graph with multiple processing layers including linear and non-linear transformations. While many machine learning systems are seeded with initial features and/or network weights to be modified through learning and updating of the machine learning network, a deep learning network trains itself to identify “good” features for analysis. Using a multilayered architecture, machines employing deep learning techniques can process raw data better than machines using conventional machine learning techniques. Examining data for groups of highly correlated values or distinctive themes is facilitated using different layers of evaluation or abstraction.
  • To be accurate and robust, machine learning networks must be trained and tested using data that is representative of data that will be processed by the deployed network model. Data that is irrelevant, inaccurate, and/or incomplete can result in a deep learning network model that provides an incorrect output in response to data input. Certain examples provide top-down systems and associated methods to capture and organize data (e.g., group, arrange with respect to an event, etc.), remove outliers, and/or otherwise align data with respect to a clinical event, trigger, other occurrence, etc., to form a ground truth for training, testing, etc., of a learning network model.
  • Certain examples provide automated processing and visualization of data for a group of patients and enable removal of outliers and drilling down into the data to determine patterns, trends, causation, individual patient data, etc. Relevant data can be annotated quickly to form ground truth data for training of one or more artificial intelligence models. For example, a plurality of one-dimensional signal waveforms can be stacked and/or otherwise organized for a patient, and patients can be stacked and/or otherwise organized with respect to each other and with respect to one or more events, criterion, etc. By organizing patients and their associated signals with respect to each other based on one or more events, criterion, etc., different outliers emerge from the group depending on the event, criterion, etc., used to organize the patients. As such, outliers eliminated from the data set can vary depending upon the event, criterion, etc.
  • Throughout the specification and claims, the following terms take the meanings explicitly associated herein, unless the context clearly dictates otherwise. The term “deep learning” is a machine learning technique that utilizes multiple data processing layers to recognize various structures in data sets and classify the data sets with high accuracy. A deep learning network (DLN), also referred to as a deep neural network (DNN), can be a training network (e.g., a training network model or device) that learns patterns based on a plurality of inputs and outputs. A deep learning network/deep neural network can be a deployed network (e.g., a deployed network model or device) that is generated from the training network and provides an output in response to an input.
  • The term “supervised learning” is a deep learning training method in which the machine is provided already classified data from human sources. The term “unsupervised learning” is a deep learning training method in which the machine is not given already classified data; because it learns structure without labels, unsupervised learning is useful for abnormality detection. The term “semi-supervised learning” is a deep learning training method in which the machine is provided a small amount of classified data from human sources compared to a larger amount of unclassified data available to the machine.
  • The term “convolutional neural networks” or “CNNs” refers to biologically inspired networks of interconnected nodes used in deep learning for detection, segmentation, and recognition of pertinent objects and regions in datasets. CNNs evaluate raw data in the form of multiple arrays, breaking the data into a series of stages and examining the data for learned features.
  • The term “transfer learning” is a process of a machine storing the information used in properly or improperly solving one problem to solve another problem of the same or similar nature as the first. Transfer learning may also be known as “inductive learning”. Transfer learning can make use of data from previous tasks, for example.
  • The term “active learning” is a process of machine learning in which the machine selects a set of examples for which to receive training data, rather than passively receiving examples chosen by an external entity. For example, as a machine learns, the machine can be allowed to select examples that the machine determines will be most helpful for learning, rather than relying only on an external human expert or external system to identify and provide examples.
  • The terms “computer aided detection” and “computer aided diagnosis” refer to computers that analyze medical data to suggest a possible diagnosis.
  • Deep learning is a class of machine learning techniques employing representation learning methods that allow a machine to be given raw data and determine the representations needed for data classification. Deep learning ascertains structure in data sets using backpropagation algorithms which are used to alter internal parameters (e.g., node weights) of the deep learning machine. Deep learning machines can utilize a variety of multilayer architectures and algorithms. While machine learning, for example, involves an identification of features to be used in training the network, deep learning processes raw data to identify features of interest without the external identification.
  • Deep learning in a neural network environment includes numerous interconnected nodes referred to as neurons. Input neurons, activated from an outside source, activate other neurons based on connections to those other neurons which are governed by the machine parameters. A neural network behaves in a certain manner based on its own parameters. Learning refines the machine parameters, and, by extension, the connections between neurons in the network, such that the neural network behaves in a desired manner.
  • Deep learning that utilizes a convolutional neural network segments data using convolutional filters to locate and identify learned, observable features in the data. Each filter or layer of the CNN architecture transforms the input data to increase the selectivity and invariance of the data. This abstraction of the data allows the machine to focus on the features in the data it is attempting to classify and ignore irrelevant background information.
  • Deep learning operates on the understanding that many datasets include high level features which include low level features. While examining an image, for example, rather than looking for an object, it is more efficient to look for edges which form motifs which form parts, which form the object being sought. These hierarchies of features can be found in many different forms of data such as speech and text, etc.
  • Learned observable features include objects and quantifiable regularities learned by the machine during supervised learning. A machine provided with a large set of well classified data is better equipped to distinguish and extract the features pertinent to successful classification of new data.
  • A deep learning machine that utilizes transfer learning may properly connect data features to certain classifications affirmed by a human expert. Conversely, the same machine can, when informed of an incorrect classification by a human expert, update the parameters for classification. Settings and/or other configuration information, for example, can be guided by learned use of settings and/or other configuration information, and, as a system is used more (e.g., repeatedly and/or by multiple users), a number of variations and/or other possibilities for settings and/or other configuration information can be reduced for a given situation.
  • An example deep learning neural network can be trained on a set of expert classified data, for example. This set of data builds the first parameters for the neural network, and this is the stage of supervised learning. During the stage of supervised learning, the neural network can be tested to determine whether the desired behavior has been achieved.
  • Once a desired neural network behavior has been achieved (e.g., a machine has been trained to operate according to a specified threshold, etc.), the machine can be deployed for use (e.g., testing the machine with “real” data, etc.). During operation, neural network classifications can be confirmed or denied (e.g., by an expert user, expert system, reference database, etc.) to continue to improve neural network behavior. The example neural network is then in a state of transfer learning, as parameters for classification that determine neural network behavior are updated based on ongoing interactions. In certain examples, the neural network can provide direct feedback to another process. In certain examples, the neural network outputs data that is buffered (e.g., via the cloud, etc.) and validated before it is provided to another process.
  • Deep learning machines can utilize transfer learning when interacting with physicians to counteract the small dataset available in the supervised training. These deep learning machines can improve their computer aided diagnosis over time through training and transfer learning. However, a larger dataset results in a more accurate, more robust deployed deep neural network model that can be applied to transform disparate medical data into actionable results (e.g., system configuration/settings, computer-aided diagnosis results, image enhancement, etc.).
  • In certain examples, visualization of data can be driven by an artificial intelligence framework, and the artificial intelligence framework can provide data for visualization, evaluation, and action. Certain examples provide a framework including a) a computer executing one or more deep learning (DL) models and hybrid deep reinforcement learning (RL) models trained on aggregated machine timeseries data converted into a single standardized data structure format and in an ordered arrangement per patient to predict one or more future events and summarize pertinent past machine events related to the predicted one or more future machine events on consistent input time series data of a patient having the standardized data structure format; and b) a healthcare provider-facing interface of an electronic device for use by a healthcare provider treating the patient configured to display the predicted one or more future machine events and the pertinent past machine events of the patient.
  • In certain examples, machine signals, patient physiological signals, and a combination of machine and patient physiological signals provide improved prediction, detection, and/or classification of events during a medical procedure. The three data contexts are represented in Table 1 below, associated with example artificial intelligence models that can provide a prediction, detection, and/or classification using the respective data source. Data-driven predictions of events related to a medical treatment/procedure help to lower healthcare costs and improve the quality of care. Certain examples involve DL models, hybrid RL models, and DL+Hybrid RL combination models for prediction of such events. Similarly, data-driven detection and classification of events related to a patient and/or machine helps to lower healthcare costs and improve the quality of care. Certain examples involve DL models, hybrid RL models, and DL+Hybrid RL combination models for detection and classification of such events.
  • As shown below, machine data, patient monitoring data, and a combination of machine and monitoring data can be used with one or more artificial intelligence constructs to form one or more predictions, detections, and/or classifications, for example.
  • Data Source                  Prediction/Detection/Classification
     Machine Data                 DL
                                  Hybrid RL
                                  DL + Hybrid RL
     Monitoring (Patient) Data    DL
                                  Hybrid RL
                                  DL + Hybrid RL
     Machine + Monitoring Data    DL
                                  Hybrid RL
                                  DL + Hybrid RL

     Table 1. Data source and associated prediction, detection, and/or classification model examples.
  • Certain examples deploy learned models in a live system for patient monitoring. Training data is to match collected data, so if live data is being collected during surgery, for example, the model is to be trained on live surgical data also. Training parameters can be mapped to deployed parameters for live, dynamic delivery to a patient scenario (e.g., in the operating room, emergency room, etc.). Also, one-dimensional (1D) time series event data (e.g., ECG, EEG, O2, etc.) is processed differently by a model than a 2D or 3D image. 1D time series event data can be aggregated and processed, for example.
  • Thus, as shown below, one or more medical devices can be applied to extract time-series data with respect to a patient, and one or more monitoring devices can capture and process such data. Benefits of one-dimensional, time-series data modeling include identification of more data-driven events to avoid false alarms (e.g., avoiding false alarm fatigue, etc.), provision of quality event detection, etc. Other benefits include improved patient outcomes. Cost savings can also be realized, such as reduced cost from better predicting events such as when to reduce gas, when to take a patient off an oxygen ventilator, when to transfer a patient from the operating room (OR) to other care, etc.
  • Other identification methods are threshold-based rather than personalized. Certain examples provide personalized modeling based on a patient's own vitals, machine data from a healthcare procedure, etc. For example, for patient heart rate, a smaller person has a different rate than a heavier-built person. As such, alarms can differ based on the person rather than conforming to set global thresholds. A model, such as a DL model, etc., can determine or predict when to react to an alarm versus turn the alarm off, etc. Certain examples can drive behavior, configuration, etc., of another machine (e.g., based on physiological conditions, a machine can send a notification to another machine to lower anesthesia, reduce ventilator support, etc.; detect ventilator dystrophy and react to it, etc.).
  • As shown in an example system 100 of FIG. 1, one or more medical devices 110 (e.g., ventilator, anesthesia machine, intravenous (IV) infusion drip, etc.) administer to a patient 120 while one or more monitoring devices 130 (e.g., electrocardiogram (ECG) sensor, blood pressure sensor, respiratory monitor, etc.) gather data regarding patient vitals, patient activity, medical device operation, etc. Such data can be used to train an AI model, can be processed by a trained AI model, etc.
  • Certain examples provide systems and methods for deep learning and hybrid reinforcement learning-based event prediction, detection, and/or classification. For example, as shown in an example system 200 of FIG. 2, machine data 210 and physiological (e.g., vitals, etc.) data 220 from one or more medical devices 230, mobile digital health monitors 240, one or more diagnostic cardiology (DCAR) devices 250, etc., is provided in a data stream 260 (e.g., continuous streaming, live streaming, periodic streaming, etc.) to a preprocessor 270 to pre-process the data and apply one or more machine learning models to detect events in the data stream 260, for example. The pre-processed data is provided from the preprocessor 270 to an event predictor 280, which applies one or more AI models, such as a DL model, a hybrid RL model, a DL+hybrid RL model, etc., to predict future events from the preprocessed data. The event predictor 280 forms an output 290 including one or more insights, alerts, actions, etc., for a system, machine, user, etc. For example, the event predictor 280 can predict, based on model(s) applied to the streaming 1D data, occurrence of event(s) such as heart attack, stroke, high blood pressure, accelerated heart rate, etc., and an actionable alert can be provided by the output 290 to adjust an IV drip, activate a sensor and/or other monitor, change a medication dosage, obtain an image, send data to another machine to adjust its settings/configuration, etc.
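  • By way of a non-limiting illustration only, the following Python sketch shows one way the streaming flow described above might be approximated: a window slides over a 1D data stream and a trained model flags predicted events. The `ThresholdModel` stand-in, window sizes, and injected anomaly are hypothetical assumptions, not the example system 200 itself.

      import numpy as np

      class ThresholdModel:
          """Stand-in for a trained DL/hybrid RL model: flags a window whose
          peak deviates strongly from the window mean."""
          def predict(self, X):
              z = (X - X.mean(axis=1, keepdims=True)) / (X.std(axis=1, keepdims=True) + 1e-8)
              return (np.abs(z).max(axis=1) > 4.0).astype(int)

      def stream_events(stream, model, window=100, step=25):
          """Slide a window over the 1D stream; yield (sample_index, label)
          whenever the model flags a predicted event."""
          buf = np.asarray(stream, dtype=float)
          for start in range(0, len(buf) - window + 1, step):
              label = model.predict(buf[start:start + window].reshape(1, -1))[0]
              if label:
                  yield start, label

      signal = np.sin(np.linspace(0, 20, 500))
      signal[300] = 8.0  # injected anomaly standing in for an abnormal vital
      print(list(stream_events(signal, ThresholdModel())))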
  • In certain examples, detection and event classification can also be facilitated using deep learning and hybrid reinforcement learning. FIG. 3 illustrates an example system 300 in which the machine data 210 and the physiological (e.g., vitals, etc.) data 220 from the one or more medical devices 230, mobile digital health monitors 240, one or more diagnostic cardiology (DCAR) devices 250, etc., is provided offline 310 (e.g., once a study and/or other exam has been completed, periodically at a certain time/interval or based on a current size of data collection, etc.) to the preprocessor 270 to pre-process the data and apply one or more machine learning models to detect events in the data set 310, for example. The pre-processed data is provided from the preprocessor 270 to an event detector 320, which applies one or more AI models, such as a DL model, a hybrid RL model, a DL+hybrid RL model, etc., to detect and classify events from the preprocessed data. The event detector 320 forms an annotation output 330 including labeled events, etc. For example, the event detector 320 can detect and classify, based on model(s) applied to the streaming 1D data, occurrence of event(s) such as heart attack, stroke, high blood pressure, accelerated heart rate, etc., and the event(s) can then be labeled to be used as ground truth 330 for training of an AI model, verification by a healthcare professional, adjustment of machine settings/configuration, etc.
  • In certain examples, a convolutional neural network (CNN) and a recurrent neural network (RNN) can be used alone or in combination to process data and extract event predictions. Other machine learning/deep learning/other artificial intelligence networks can be used alone or in combination.
  • Convolutional neural networks are deep artificial neural networks that are used to classify images (e.g., associate a name or label with what object(s) are identified in the image, etc.), cluster images by similarity (e.g., photo search, etc.), and/or perform object recognition within scenes, for example. CNNs can be used to instantiate algorithms that can identify faces, individuals, street signs, tumors, platypuses, and/or many other aspects of visual data, for example. FIG. 4A illustrates an example CNN 400 including layers 402, 404, 406, and 408. The layers 402 and 404 are connected with neural connections 403. The layers 404 and 406 are connected with neural connections 405. The layers 406 and 408 are connected with neural connections 407. Data flows forward via inputs 401 from the input layer 402 to the output layer 408 and to an output 409.
  • The layer 402 is an input layer that, in the example of FIG. 4A, includes a plurality of nodes. The layers 404 and 406 are hidden layers and include, in the example of FIG. 4A, a plurality of nodes. The neural network 400 may include more or fewer hidden layers 404, 406 than shown. The layer 408 is an output layer and includes, in the example of FIG. 4A, a node with an output 409. Each input 401 corresponds to a node of the input layer 402, and each node of the input layer 402 has a connection 403 to each node of the hidden layer 404. Each node of the hidden layer 404 has a connection 405 to each node of the hidden layer 406. Each node of the hidden layer 406 has a connection 407 to the output layer 408. The output layer 408 has an output 409 to provide an output from the example neural network 400.
  • Of the connections 403, 405, and 407, certain example connections may be given added weight while other example connections may be given less weight in the neural network 400. Input nodes are activated through receipt of input data via inputs, for example. Nodes of hidden layers 404 and 406 are activated through the forward flow of data through the network 400 via the connections 403 and 405, respectively. The node of the output layer 408 is activated after data processed in hidden layers 404 and 406 is sent via connections 407. When the output node of the output layer 408 is activated, the node outputs an appropriate value based on processing accomplished in hidden layers 404 and 406 of the neural network 400.
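  • For concreteness, a minimal numpy sketch of a forward pass through the fully connected topology of FIG. 4A follows; the layer sizes, random weights, and ReLU activation are illustrative assumptions standing in for learned parameters.

      import numpy as np

      rng = np.random.default_rng(0)

      def relu(x):
          return np.maximum(0.0, x)

      # Random stand-ins for learned weights on connections 403, 405, and 407.
      W1 = rng.normal(size=(8, 16))   # input layer 402 -> hidden layer 404
      W2 = rng.normal(size=(16, 16))  # hidden layer 404 -> hidden layer 406
      W3 = rng.normal(size=(16, 1))   # hidden layer 406 -> output layer 408

      def forward(x):
          h1 = relu(x @ W1)   # activate hidden layer 404
          h2 = relu(h1 @ W2)  # activate hidden layer 406
          return h2 @ W3      # output 409 from output layer 408

      print(forward(rng.normal(size=(1, 8))))  # one forward pass on a random input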
  • Recurrent networks are a powerful set of artificial neural network algorithms especially useful for processing sequential data such as sound, time series (e.g., sensor) data or written natural language, etc. A recurrent neural network can be implemented similar to a CNN but including one or more connections 412 back to a prior layer, such as shown in the example RNN 410 of FIG. 4B.
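  • A minimal sketch of the recurrence (the connection 412 back to a prior layer) follows; the dimensions and random weights are illustrative assumptions.

      import numpy as np

      def rnn_step(x_t, h_prev, Wx, Wh, b):
          """One recurrent step: the hidden state feeds back into the next
          step, i.e., the connection back to a prior layer described above."""
          return np.tanh(x_t @ Wx + h_prev @ Wh + b)

      rng = np.random.default_rng(0)
      Wx, Wh, b = rng.normal(size=(3, 4)), rng.normal(size=(4, 4)), np.zeros(4)
      h = np.zeros(4)
      for x_t in rng.normal(size=(5, 3)):  # five time steps of a 1D sensor stream
          h = rnn_step(x_t, h, Wx, Wh, b)
      print(h.round(3))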
  • A reinforcement learning (RL) model is an artificial intelligence model in which an agent takes an action in an environment to maximize a cumulative reward. FIG. 4C depicts an example RL network 420 in which an agent 422 operates with respect to an environment 424. An action 421 of the agent 422 results in a change in a state 423 of the environment 424. Reinforcement 425 is provided to the agent 422 from the environment 424 to provide a reward and/or other feedback to the agent 422. The state 423 and reinforcement 425 are incorporated into the agent 422 and influence its next action, for example.
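  • The agent/environment cycle of FIG. 4C can be sketched as a simple loop; the toy environment, reward rule, and random placeholder policy below are assumptions for illustration only.

      # Minimal agent/environment loop mirroring FIG. 4C: the agent takes an
      # action 421; the environment returns a new state 423 and reinforcement 425.
      import random

      class Environment:
          def __init__(self):
              self.state = 0

          def step(self, action):
              self.state += action                      # state 423 changes with the action
              reward = 1.0 if self.state == 3 else 0.0  # reinforcement 425
              return self.state, reward

      class Agent:
          def act(self, state):
              return random.choice([-1, 1])             # placeholder policy

      env, agent = Environment(), Agent()
      state = env.state
      for _ in range(10):
          action = agent.act(state)
          state, reward = env.step(action)              # state and reward feed the next action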
  • Hybrid Reinforcement Models include a Deep Hybrid RL, for example. Reinforcement learning refers to goal-oriented algorithms, which learn how to attain a complex objective (goal) and/or maximize along a particular dimension over many steps/actions. For example, an objective can include to maximize points won in a game over many moves. Reinforcement learning models can start from a blank slate, and, under the right conditions, the model can achieve superior performance. Like a child incentivized by spankings and candy, these algorithms are penalized when they make the wrong decisions and rewarded when they make the right decisions to provide reinforcement. A hybrid deep reinforcement network can be configured as shown in the example 430 of FIG. 4D.
  • As shown in the example 430 of FIG. 4D, a policy 432 drives model-free deep reinforcement learning algorithm(s) 434 to learn tasks associated with processing of data, such as 1D waveform data, etc. Results of the model-free RL algorithm(s) 434 provide feedback to the policy 432 and generate samples 438 for model-based reinforcement algorithm(s) 436. The model-based RL algorithm(s) 436 operate according to the policy 432 and provide feedback to the policy 432 based on samples from the model-free RL algorithm(s) 434. Model-based RL algorithm(s) 436 are more sample-efficient and more flexible than task-specific policy(-ies) 432 learned with model-free RL algorithm(s) 434, for example. However, asymptotic performance of model-based RL algorithm(s) 436 is usually worse than that of model-free RL algorithm(s) 434 due to model bias, for example. For example, model-free RL algorithm(s) 434 are not limited by model accuracy and can therefore achieve better final performance, although at the expense of higher sample complexity. The hybrid deep RL models combine model-based 436 and model-free 434 RL algorithms (e.g., model-based algorithm(s) 436 to enable supervised initialization of policy 432 that can be fine-tuned with the model-free algorithm(s) 434, etc.) to accelerate model-free learning and improve sample efficiency, for example.
  • Certain examples apply hybrid RL models to facilitate determination and control of input and provide an ability to separate and/or combine information including ECG, SpO2, blood pressure, and/or other parameters. Early warning signs of a condition or health issue can be determined and used to alert a patient, clinician, other system, etc. A normal/baseline value can be determined, and deviation from the baseline (e.g., during the course of a surgical operation, etc.) can be determined. Signs of distress can be identified/predicted before an issue becomes critical. In certain examples, a look-up table can be provided to select one or more artificial intelligence networks based on particular available input and desired output. The lookup table can enable rule-based neural network selection to generate appropriate model(s), for example, as sketched below.
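  • A minimal sketch of such a look-up table follows; the keys, model names, and default fallback are hypothetical examples patterned on Table 1, not a prescribed implementation.

      # Hypothetical rule-based lookup table mapping (data source, desired output)
      # to a model family; entries are illustrative only.
      MODEL_LOOKUP = {
          ("machine", "prediction"): "DL",
          ("machine", "detection"): "hybrid_RL",
          ("monitoring", "prediction"): "DL+hybrid_RL",
          ("machine+monitoring", "classification"): "DL+hybrid_RL",
      }

      def select_model(data_source, target_output):
          """Return the model family for the available input and desired output."""
          return MODEL_LOOKUP.get((data_source, target_output), "DL")  # default fallback

      print(select_model("machine", "prediction"))  # -> "DL"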
  • Other neural networks include transformer networks, graph neural networks, etc. A transformer or transformer network is a neural network architecture that transforms an input sequence to an output sequence using sequence transduction or neural machine translation (e.g., to process speech recognition, text-to-speech transformation, etc.), for example. The transformer network has memory to remember or otherwise maintain dependencies and connections (e.g., between sounds and words, etc.). For example, the transformer network can include a CNN with one or more attention models to improve speed of translation/transformation. The transformer can be implemented using a series of encoders and decoders (e.g., implemented using a neural network such as a feed forward neural network, CNN, etc., and one or more attention models, etc.). As such, the transformer network transforms one sequence into another sequence using the encoder(s) and decoder(s).
  • In certain examples, a transformer is applied to sequence and time series data. Compared with an RNN and/or long short-term memory (LSTM) model, the transformer has the following advantages. The transformer applies a self-attention mechanism that directly models relationships between all words in a sentence, regardless of their respective position. The transformer allows for significantly more parallelization. The transformer encodes each position and applies the attention mechanism to relate two distant words of both the inputs and outputs, which can then be parallelized to accelerate training, for example. Thus, the transformer requires less computation to train and is a much better fit for modern machine learning hardware, speeding up training by up to an order of magnitude, for example.
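  • A minimal numpy sketch of the scaled dot-product self-attention mechanism referenced above follows; the random projection weights are illustrative stand-ins for learned parameters.

      import numpy as np

      def self_attention(X):
          """Scaled dot-product self-attention over a sequence X of shape
          (sequence_length, model_dim); weights are random stand-ins."""
          rng = np.random.default_rng(0)
          d = X.shape[1]
          Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
          Q, K, V = X @ Wq, X @ Wk, X @ Wv
          scores = Q @ K.T / np.sqrt(d)  # relate every position to every other
          weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
          weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
          return weights @ V             # attention-weighted combination

      print(self_attention(np.random.default_rng(1).normal(size=(5, 4))).shape)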
  • A graph neural network (GNN) is a neural network that operates on a graph structure. In a graph, vertices or nodes are connected by edges, which can be directed or undirected edges, for example. The GNN can be used to classify nodes in the graph structure, for example. For example, each node in the graph can be associated with a label, and node labels can be predicted by the GNN without ground truth. Given a partially labeled graph, for example, labels for unlabeled nodes can be predicted.
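  • One common message-passing step for a GNN can be sketched as follows (a degree-normalized neighborhood average followed by a learned transform); the toy graph and random weights are assumptions for illustration.

      import numpy as np

      def gnn_layer(A, H, W):
          """One message-passing step: each node averages its neighbors'
          features (including itself) and applies a learned transform W."""
          A_hat = A + np.eye(A.shape[0])                 # add self-loops
          D_inv = np.diag(1.0 / A_hat.sum(axis=1))       # normalize by node degree
          return np.maximum(0.0, D_inv @ A_hat @ H @ W)  # ReLU activation

      A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node path graph
      H = np.eye(3)                                                  # one-hot node features
      W = np.random.default_rng(0).normal(size=(3, 2))
      print(gnn_layer(A, H, W))  # per-node embeddings usable for label prediction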
  • Certain examples include aggregation techniques for detection, classification, and prediction of medical events based on DL processing of time series data. Different signals can be obtained, and different patterns can be identified for different circumstances. From a large aggregated data set, a subset can be identified and processed as relevant for a particular “-ology” or circumstance. Data can be partitioned into a relevant subset. For example, four different hospitals are collecting data, and the data is then partitioned to focus on cardiac data, etc. Partitioning can involve clustering, etc. Metadata can be leveraged, and data can be cleaned to reduce noise, artifacts, outliers, etc. Missing data can be interpolated and/or otherwise generated using generative adversarial networks (GANs), filters, etc. Detection occurs after the fact, while a prediction is determined before an event occurs. In certain examples, prediction occurs in real time (or substantially real time given system processing, storage, and data transmission latency) using available data.
  • Post-processing of predicted, detected, and/or classified events can include a dashboard visualization for detection, classification, and/or prediction. For example, post-processing can generate a visualization summarizing events. Post-processing can also generate notifications determined by detection, classification, and/or prediction, for example.
  • In certain examples, an algorithm can be used to select one or more machine learning algorithms to instantiate a network model based on aggregated pre-processed data and a target output. For example, a hybrid RL can be selected for decision making regarding which events to choose from a set of targeted events. A transformer network can be selected for parallel processing and accelerating event generation, for example. A graph neural network can be selected for interpreting targeted events and relations exploration, for example. The neural network and/or other AI model generated by the selected algorithm can operate on the pre-processed data to generate summarized events, etc.
  • In certain examples, data can be pre-processed according to one or more sequential stages to aggregate the data. Stages can include data ingestion and filtration, imputation, aggregation, modeling, and recommendation. For example, data ingestion and filtration can include one or more devices connected to a patient and used to actively capture and filter data related to the patient and/or device operation. For example, a patient undergoing surgery is equipped with an anesthetic device and one or more monitoring devices capturing one or more of the patient's vitals at a periodic interval. The anesthetic device can be viewed as a source of machine events (acting upon the patient), and the captured vitals can be treated as a source of patient data, for example.
  • FIG. 5 illustrates an example visualization 500 of data provided from multiple sources including an anesthetic device, a monitoring device, etc. Such a stream of data can have artifacts due to one or more issues occurring during and/or after acquisition of the data. For example, heart rate and/or ST segment errors can occur due to electrocautery interference, patient movement, etc. Oxygen saturation measurement errors can occur due to dislocation of a sensor, vasopressor use, etc. Non-invasive blood pressure errors can be caused by leaning on the pressure cuff, misplacement of the cuff, etc. Such artifacts are filtered from the stream using one or more statistics (e.g., median, beyond six sigma range, etc.) that can be obtained from the patient (e.g., current) and/or from prior records of patients who have undergone a similar procedure and may have involved one or more normalization techniques with respect to age, gender, weight, body type, etc.
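  • As one plausible filtering statistic, the sketch below masks samples beyond a six-sigma-like range around the median using the median absolute deviation; the specific statistic, scaling factor, and limits are illustrative assumptions.

      import numpy as np

      def filter_artifacts(values, n_sigma=6.0):
          """Mask samples beyond n_sigma of a robust (median-based) spread,
          e.g., heart-rate spikes caused by electrocautery interference."""
          x = np.asarray(values, dtype=float)
          median = np.median(x)
          # Median absolute deviation scaled to approximate a standard deviation.
          mad = 1.4826 * np.median(np.abs(x - median))
          keep = np.abs(x - median) <= n_sigma * (mad + 1e-12)
          return np.where(keep, x, np.nan)  # NaNs can be imputed downstream

      hr = [72, 74, 73, 500, 71, 75]  # 500 bpm is an obvious artifact
      print(filter_artifacts(hr))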
  • In certain examples, the data may have some observations missing and/or removed during a filtration process, etc. This missing information can be imputed with data before being used for training a neural network model, etc. The data can be imputed using one or an ensemble of imputation methods to better represent the missing value. For example, imputation can be performed using a closest fill (e.g., using a back or forward fill with the value closest with respect to time, etc.), collaborative filtering by determining another input that could be a possible candidate, using a generative method trained with data from large sample of patients, etc.
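  • For example, a closest fill and a simple interpolation can be sketched with pandas as follows; the toy series of vitals is illustrative only.

      import pandas as pd

      vitals = pd.Series([72.0, None, None, 75.0, 74.0])

      # Closest fill: forward fill, then back fill any leading gap.
      closest = vitals.ffill().bfill()

      # Alternatively, linear interpolation between neighboring observations.
      interpolated = vitals.interpolate()

      print(closest.tolist(), interpolated.tolist())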
  • In certain examples, a captured stream of data may involve aggregation before being consumed in downstream process(es). Patient data can be aggregated based on demographic (e.g., age, sex, income level, marital status, occupation, race, etc.), occurrence of a specific medical condition, etc. One or more aggregation methods can be applied to the data, such as K-means/medoids, Gaussian mixture models, density-based aggregation, etc. Aggregated data can be analyzed and used to classify/categorize a patient to determine a relevant data set for training and/or testing of an associated neural network model, for example.
  • For example, using K-means/medoids, data can be clustered according to certain similarity. Medoids are representative objects of a data set or a cluster within a data set whose average dissimilarity to all the objects in the cluster is minimal. A cluster refers to a collection of data points aggregated together because of certain similarities. A target number k can be defined, which refers to a number of centroids desired in the dataset. A centroid is an imaginary or real location representing a center of the cluster. Every data point is allocated to each of the clusters by reducing an in-cluster sum of squares, for example. As such, a K-means algorithm identifies k number of centroids, and then allocates every data point to the nearest cluster, while keeping the centroids as small as possible. The “means” in the K-means refers to an averaging of the data; that is, finding the centroid. In a similar approach, a “median” can be used instead of the mean. A “goodness” of a given value of k can be assessed with methods such as a silhouette method, Elbow analysis, etc.
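  • A brief sketch using scikit-learn illustrates K-means clustering with the silhouette method used to assess candidate values of k; the synthetic two-cluster data are an assumption for illustration.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.metrics import silhouette_score

      rng = np.random.default_rng(0)
      # Toy patient features (e.g., age, normalized heart rate) in two clusters.
      data = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

      for k in (2, 3, 4):
          labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(data)
          # Silhouette method: higher scores indicate a better value of k.
          print(k, round(silhouette_score(data, labels), 3))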
  • In certain examples, a Gaussian mixture model (GMM) is a probabilistic model that assumes all the data points are generated from a mixture of a finite number of Gaussian distributions with unknown parameters. A Gaussian mixture model can be viewed as generalized K-means clustering that incorporates information about the covariance structure of the data as well as the centers of the latent Gaussians associated with the data. The generalization can be thought of in terms of the shapes the clusters can form, which in the case of GMMs are arbitrary shapes determined by the Gaussian parameters of the distribution, for example.
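  • The covariance point can be illustrated with scikit-learn's GaussianMixture, which can recover elongated (non-spherical) clusters that a pure centroid method may split poorly; the synthetic data are illustrative assumptions.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      # Two clusters, one elongated by a strong covariance structure.
      data = np.vstack([
          rng.multivariate_normal([0, 0], [[2.0, 1.5], [1.5, 2.0]], 100),
          rng.multivariate_normal([6, 0], [[0.5, 0.0], [0.0, 0.5]], 100),
      ])

      gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
      labels = gmm.fit_predict(data)
      print(gmm.means_.round(1))  # learned centers of the latent Gaussians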
  • Density-based spatial clustering of applications with noise (DBSCAN) is a data clustering algorithm that can be used in data mining and machine learning. Based on a set of points (e.g., in a bi-dimensional space), DBSCAN groups together points that are close to each other based on a distance measurement (e.g., Euclidean distance, etc.) and a minimum number of points. DBSCAN also marks as outliers points that are in low-density regions. Using DBSCAN involves two control parameters: Epsilon (a distance) and a minimum number of points to form a cluster, for example. DBSCAN can be used for situations in which there are highly irregular shapes that are not processable using a mean/centroid-based method, for example.
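  • A short scikit-learn sketch shows the two control parameters in use; the data and parameter values are illustrative assumptions.

      import numpy as np
      from sklearn.cluster import DBSCAN

      rng = np.random.default_rng(0)
      points = np.vstack([rng.normal(0, 0.3, (40, 2)),   # dense cluster
                          rng.normal(4, 0.3, (40, 2)),   # second dense cluster
                          rng.uniform(-2, 6, (5, 2))])   # sparse noise

      # eps is the Epsilon (distance) parameter; min_samples the minimum points.
      labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(points)
      print(set(labels))  # -1 marks outliers in low-density regions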
  • In certain examples, a recommender system or a recommendation system is a subclass of information filtering system that seeks to predict the “rating” or “preference” a user would give to an item. The recommender system operates on an input to apply collaborative filtering and/or content-based filtering to generate a predictive or recommended output. For example, collaborative filtering builds a model based on past behavior as well as similar decisions made by other users. This model is then used to predict items (or ratings for items) that the user may have an interest in. Content-based filtering approaches utilize a series of discrete, pre-tagged characteristics of an item to recommend additional items with similar properties. In the healthcare context, such collaborative and/or content-based filtering can be used to predict and/or categorize an event and/or classify a patient based on the event(s), etc.
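  • A toy collaborative filtering sketch follows: a missing “rating” is predicted as a similarity-weighted average over users with similar past behavior. The rating matrix and the choice of cosine similarity are illustrative assumptions.

      import numpy as np

      # Toy collaborative filtering: rows = users/patients, columns = items/events;
      # entries are past "ratings" (e.g., observed relevance), 0 = unknown.
      R = np.array([[5, 3, 0],
                    [4, 0, 4],
                    [1, 1, 5]], dtype=float)

      def predict(R, user, item):
          """Predict a missing rating from users with similar past behavior."""
          sims = []
          for other in range(R.shape[0]):
              if other == user or R[other, item] == 0:
                  continue
              shared = (R[user] > 0) & (R[other] > 0)  # items both have rated
              if shared.any():
                  a, b = R[user, shared], R[other, shared]
                  cosine = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
                  sims.append((cosine, R[other, item]))
          if not sims:
              return np.nan
          w = np.array(sims)
          return (w[:, 0] @ w[:, 1]) / w[:, 0].sum()  # similarity-weighted average

      print(round(predict(R, user=0, item=2), 2))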
  • Thus, certain examples provide a plurality of methods that can be used to determine a cohort to which the patient belongs. Based on the cohort, relevant samples can be extracted to train and run inference with a model for a given patient. For example, when looking at a particular patient and trying to infer outcomes for that patient, an appropriate cohort can be determined to enable retrieval of an associated subset of records previously obtained and/or from a live stream of data. In certain examples, the top N records are used for training and inferencing.
  • In certain examples, patients and associated patient data can be post-processed. For example, given that a clinician attends to more than one patient at a given point of time, patients and associated data can be summarized, prioritized, and grouped for easy and quick inferencing of events/outcomes.
  • For example, patients can be prioritized based on a clinical outcome determined according to one or more pre-determined rules. Patients can also be prioritized based on variance of vitals from a nominal value of the cohort to which the patient belongs, where the cohort is determined by one or more aggregation methods, for example.
  • Additionally, aggregation can be used to provide a high-level summarization of one or more patients being treated. Summarization can also involve aggregation of one or more events occurring in parallel for ease of interpretability. This process of summarization can also be modeled as a learned behavior based on learning how a clinician prefers to view the summarization, for example.
  • As such, trained, deployed AI models can be applied to 1D patient data to convert the patient time series data into a visual indication of a comparative value of the data. For example, processing the 1D time series patient data using an AI model, such as one or more models disclosed above, quantifies, qualifies, and/or otherwise compares the data to a normal value or values, a threshold, a trend, other criterion(-ia) to generate a color-coded, patterned, and/or shaded representation of the underlying time series (e.g., waveform, etc.) data. Data can be clustered for a particular patient, and patients can be clustered for a particular group, such as a hospital, department, ward, clinician, office, enterprise, condition, etc.
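  • A simplified sketch of such a conversion follows: a 1D series is normalized against hypothetical reference limits and binned into color codes (e.g., 0 = green/normal, 1 = yellow/borderline, 2 = red/abnormal). The limits, binning, and color scheme are assumptions for illustration, not the disclosed AI models.

      import numpy as np

      def series_to_color_band(series, low, high, n_bins=3):
          """Normalize a 1D time series against reference limits and bin it
          into color codes (0=green/normal, 1=yellow, 2=red/abnormal)."""
          x = np.asarray(series, dtype=float)
          normalized = np.clip((x - low) / (high - low), 0.0, 1.0)
          deviation = np.abs(normalized - 0.5) * 2.0  # 0 at midrange, 1 at limits
          return np.digitize(deviation, np.linspace(0, 1, n_bins + 1)[1:-1])

      heart_rate = [70, 72, 95, 130, 68]
      print(series_to_color_band(heart_rate, low=50, high=120))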
  • Using the prioritization, patient(s) and event(s) can be determined from the group of available patients and events for which a clinician and/or healthcare system/device is to be notified for immediate attention, for example. In certain examples, a visualization can be generated from the prioritized data to enable understandable, actionable, display and interaction with the data.
  • Review of large data sets across multiple patients is time-consuming and tedious. There can be large volumes of data, often with little context, and the data is to be used to train AI models for detection, classification, prediction, etc. Rather than a bottom-up approach, which involves manual review of data for hours including garbage data (e.g., a machine was left on for hours generating garbage data, etc.) that is difficult to sort out from useful data, certain examples provide a top-down approach through a “Christmas tree” display to visualize multiple criteria/events for multiple patients and easily, visually identify gross outliers when viewing the entire landscape via the visualization interface. The user interface view can be sorted and/or otherwise arranged according to different condition, location, demographic, other criterion, etc., to arrange patient segments accordingly. Each patient is represented by a block (also referred to as a cluster or set), and each line (also referred to as a bar, strip, stripe, or segment) in the block represents a different 1D data point. A color/pattern/representation of that line conveys an indication of its value/relative value/urgency/categorization/etc. to allow a user to visually appreciate an impact/importance of that data element.
  • As such, certain examples provide an interactive graphical view to visualize patterns, abnormalities, etc., in a large data set across multiple patients, which transforms raw results into visually appreciable indicators. Using a graphical view helps to improve and further enable comparisons between patients, deviation from a reference or standard, identification of patterns, other comparative analysis, etc. In certain examples, a block of patient information can be magnified to drill down into particular waveforms, other particular data, etc., represented by the colored/patterned line(s) in the top level interface. Patterns of the visualization and/or underlying 1D data can be provided for display and interaction via the user interface, as input to another system for diagnosis, treatment, system configuration, stored, etc.
  • Thus, certain examples gather 1D time series (e.g., waveform) data from one or more medical devices (e.g., ECG, EEG, ventilator, etc.) and a patient via one or more monitoring devices. Physiological data and other 1D time series signals can be indicative of a physiological condition associated with a body part from which the data is obtained (e.g., because the signal corresponds to electrical activity of the body part, etc.). As such, the time series physiological signal data, machine data, etc., can be processed and used by clinicians for decision making regarding a patient, medical equipment, etc. As shown in the example of FIG. 6, a variety of waveforms (e.g., ECG, heart rate (HR), respiratory gas movement, central venous pressure, arterial pressure, oxygen fraction, waveform capnography, etc.) can be captured with respect to a patient.
  • A data view, such as example data view 600, can be generated and provided for a particular patient from the gathered, processed data set, for example. In certain examples, the patient data can be normalized to provide a graphical representation of relative and/or other comparative values. For example, a normalized value can be converted from an alphanumeric value into a graphical representation of that value (e.g., a color, a pattern, a texture, etc.), and a group or set of values for a patient can be represented as a group or cluster of graphical representations (e.g., a set of colored lines, a combination of patterns and/or textures, etc.) in a block for that particular patient. Additionally, a graphical user interface can display and provide access to graphical representations for a set or group of patients shown together for visual comparison, interaction, individual processing, comparative processing, sorting, grouping, separation, etc. The graphical user interface (GUI) view of multiple patients can be organized/arranged according to one or more criteria (e.g., duration, location, condition, etc.).
  • In certain examples, such a GUI can arrange blocks or clusters of patient data such that each patient's block is distinct from other adjacent patient blocks. In certain examples, patient blocks or “cases” can be arranged around (e.g., anchored by, displayed with respect to, etc.) a normalization point or common event/threshold, such as an emergency start event, etc. For example, an occurrence of an emergency event, such as a stroke, heart attack, low blood pressure, low blood sugar, etc., can be indicated in each of a plurality of patients and used to normalize the patient data blocks with respect to that emergency event.
  • FIG. 7 illustrates an example graphical user interface 700 including an interactive block representation 710 of patient time series data. As shown in the example of FIG. 7, each band 720-728 in the block 710 corresponds to a particular parameter measured over time using the 1D time series data and transformed into a visual representation of the underlying data. A length of the block representation 710 can be used to identify an outlier, pattern, etc., in comparison to other patient blocks, for example. As shown in the example of FIG. 7, one or more signals such as electrical signals, gas flow rate and/or volume, liquid flow rate and/or volume, vibration, other mechanical parameter, etc., can be converted into a visual, unit-less representational band 720-728. In the representation 710 of FIG. 7, the data set can be unknown. Where other automations require the data set to be known, with known inputs and expected outputs, certain examples process unknown data to transform the data into a set of visual representations 720-728 forming a block 710 characterizing the patient.
  • FIG. 8 depicts an example interface 800 including representations 810, 820 of a plurality of patients in a healthcare environment (e.g., a hospital, a ward, a department, a clinic, a doctor's office, etc.). As shown in the example of FIG. 8, each cluster or block 810, 820 corresponds to a patient, and each strip/stripe/bar/band/line/segment 812-818, 822-828 in the respective block 810, 820 represents one variable depicted in a normalized color, pattern, and/or other texture format for the corresponding signal. As such, the set of blocks 810, 820 form a “Christmas tree” 830 of colors/patterns/textures providing a visual indication of patient condition, trend, pattern, etc. In certain examples, each strip 812-818, 822-828 serves as a pointer to underlying 1D data and/or associated records, actions, etc., and each block 810, 820 provides a snapshot of patient condition.
  • As shown in the example of FIG. 8, a position of each block 810, 820 can be anchored with respect to an identified start or reference event 840 (e.g., indicated by a line 840 in the example interface 800 of FIG. 8) to expose variation between patients with respect to that event 840. In certain examples, patient blocks 810, 820 can be ordered in the tree 830 according to one or more criterion/characteristic (e.g., location, duration, condition, demographic, etc.).
  • For example, a subset of patient data (e.g., less than ten minutes, etc.) can be removed for each patient case. In certain examples, the top rows 812-814, 822-824 (e.g., 14 rows, etc.) for each block 810, 820 are categorical and the bottom rows 816-818, 826-828 (e.g., 29 rows, etc.) in each block 810, 820 are numeric. The blocks 810, 820 are anchored by the emergency start event 840 and sorted by length of case, for example.
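  • A simplified sketch of the anchoring and sorting follows: each patient's rows are padded so the anchor event falls in the same column, and cases are ordered by length. The data layout (a dict per case with an `event_index`) is a hypothetical stand-in for the disclosed interface logic.

      import numpy as np

      def anchor_and_sort(cases, pad_value=np.nan):
          """Align each patient's rows so the anchor event falls in the same
          column, then sort patients by case length, as in the tree view."""
          max_pre = max(c["event_index"] for c in cases)
          max_post = max(len(c["rows"][0]) - c["event_index"] for c in cases)
          aligned = []
          for c in cases:
              pre = max_pre - c["event_index"]
              post = max_post - (len(c["rows"][0]) - c["event_index"])
              block = np.pad(np.asarray(c["rows"], dtype=float),
                             ((0, 0), (pre, post)), constant_values=pad_value)
              aligned.append((len(c["rows"][0]), c["patient"], block))
          return [(p, b) for _, p, b in sorted(aligned, key=lambda t: t[0])]

      cases = [
          {"patient": "A", "event_index": 3, "rows": [[1, 2, 3, 4, 5, 6]]},
          {"patient": "B", "event_index": 1, "rows": [[9, 8, 7]]},
      ]
      print([(p, b.shape) for p, b in anchor_and_sort(cases)])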
  • In certain examples, one or more patients can be excluded from a “ground truth” set of patient data to be used to train one or more AI models. For example, one or more blocks 810, 820 that do not align with other blocks 810, 820 with respect to the event 840 can be excluded from the ground truth data set provided for AI model training and/or testing. Remaining blocks 810, 820 can be annotated for training, testing, patient evaluation, etc. For example, a clinician, a nurse, etc., can annotate the “clean” data to form a training and/or testing data set.
  • In certain examples, the blocks 810, 820 can represent 1D data associated with different patients. In other examples, the blocks 810, 820 can represent 1D data associated with the same patient acquired at different times. The event 840 is used, for example, to organize patients according to group, location, duration, clinical purpose, etc.
  • In certain examples, the individual “tree” interface 830 can be arranged with a plurality of other tree interfaces to form a “forest” interface. FIG. 9 illustrates an example “forest” or combined interface 900 including a plurality of individual tree interfaces 830, 910, 920. Via the example composite interface 900, a collection of individual interfaces 830, 910, 920 can be compiled to represent a plurality of departments, groups, points in time, instances, etc., of patients in care of a healthcare provider. The forest 900 of trees 830, 910, 920 can be arranged for comparison and interaction according to one or more criterion. For example, the composite interface 900 can highlight variability and can pivot on different characteristics, sort on different sizes, etc. Interaction (e.g., zoom, drill-down, process, etc.) with displayed information can be enabled via the interface 900 and/or its component trees 830, 910, 920 of blocks, for example.
  • As such, certain examples provide micro and macro views of multiple patients with respect to multiple variables. For example, given a single variable (e.g., oxygen level below a threshold percentage, etc.), a quick view of applicable patients can be shown along with a time stamp of when a measured value of the variable dropped below (or rose above) a threshold level for the variable. A quick analysis can be conducted with respect to other variables at that time to determine a correlation between the change in one variable with respect to the threshold and change(s) to other variable(s) in the block(s) of patient data.
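  • Such a quick analysis might be sketched as follows: timestamps are collected where a monitored variable crosses a threshold, and those timestamps can then be checked against other variables. The sample data and function shape are illustrative assumptions.

      import numpy as np

      def threshold_crossings(t, values, threshold, direction="below"):
          """Return timestamps at which a monitored variable first drops below
          (or rises above) a threshold, for cross-variable correlation checks."""
          v = np.asarray(values, dtype=float)
          if direction == "below":
              crossed = (v[:-1] >= threshold) & (v[1:] < threshold)
          else:
              crossed = (v[:-1] <= threshold) & (v[1:] > threshold)
          return [t[i + 1] for i in np.flatnonzero(crossed)]

      timestamps = [0, 1, 2, 3, 4, 5]
      spo2 = [98, 97, 93, 89, 91, 88]  # oxygen saturation (%)
      print(threshold_crossings(timestamps, spo2, threshold=90))  # -> [3, 5]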
  • In certain examples, an interface begins with the composite view 900 (e.g., a static image, a dynamic interactive graphic, etc.) across multiple groups/facilities/locations/enterprises, and the system can focus on a particular tree 830, 910, 920 of patient/location data. Within a selected tree 830, a portion of the tree can be displayed in its interface 800, such as by magnifying and displaying real captured data signals in the magnified region, block, etc., 810, 820. Another level of magnification can provide access to underlying signal data, etc., for example. Blocks 810, 820 in the tree 830 can be ordered based on duration, procedure, condition, location, etc., and patients are then organized differently within the graphical user interface 800. Similarly, segments 812-818, 822-828 in a block 810, 820 can be ordered based on one or more criterion such as duration, procedure, condition, location, demographic, etc., and patient segments 812-818, 822-828 in the block 810, 820 are then organized for display and interaction according to the criterion(-ia). Using the “Christmas tree” interface 800, for example, a view of related patients can be provided to enable proper data clean up decisions as a group before diving into the details of particular patients, issues, procedures, etc. The event indicator 840 can be used as a reference point to align the blocks 810, 820 of data for each patient in the data set to show an event that occurred at that point in time, when in time the particular event occurred for each patient, what was occurring with other patients when a particular patient experienced the event, and/or other comparative visualization and/or analysis, for example.
  • As shown in the example of FIG. 9, groups of patients can be represented with respect to a particular event 840 (e.g., a particular group, location, duration, clinical purpose, condition, other clinical event, etc.) in one or more trees 830, 910, 920. Stacked signals form a representation of a patient, and patient representations can be organized with respect to each other based on the event and/or other criterion 840, for example. The event/criterion 840 allows the same set of patient data to be “stacked” or organized in different ways, for example. For example, the trees 830, 910, 920 can be formed from different patient data sets, and/or the trees 830, 910, 920 can be formed from the same patient data set. The event/criterion indicator 840, 915, 925 can represent a same event/criterion across different sets of patient data and/or can represent a changing event/criterion across the same set of patient data. As such, each event 840, 915, 925 triggers a different organization of the same patients in the corresponding tree 830, 910, 920, for example. Each different event 840, 915, 925 results in a different tree 830, 910, 920 with different patient outliers. Thus, when training an AI model to recognize a particular event 840, 915, 925, a different set of ground truth patient data can be identified and stored with outliers removed, for example.
  • Thus, different groups of patient data can be formed, processed, transformed into visualizations, and analyzed to determine patient patterns, trends, issues, and appropriate data for AI model training, for example. Based on issue, condition, and/or purpose, for example, patients can be arranged in different groups to treat each group of patients separately. For example, patients in cardiology can form one group, and patients with broken bones can form another group. By processing and transforming, without prior knowledge, the data for a particular group and event/criterion 840, 915, 925 into a visualization and grouping, common features can be understood for a particular group, and outliers can be investigated, eliminated, etc.
  • For example, a group of patients can be analyzed with respect to an anesthesia event 840, 915, 925. The event 840, 915, 925 can be an anesthesia “on” event or an anesthesia “off” event, for example. With an anesthesia “off” event, a goal is to determine an end to a procedure, so patients can be taken off their anesthesia and moved from a surgical suite to a post-operative recovery area. From the tree 830, 910, 920 view, patients undergoing the same procedure can be compared based on the anesthesia off trigger event 840, 915, 925, for example. Alternatively or in addition, the same patient undergoing a procedure multiple times or undergoing different procedures with a same trigger event 840, 915, 925 can be visually compared. When patients are organized with respect to the same event 840, 915, 925 such as removal of anesthesia, their procedure duration, responsiveness, and/or other characteristic can be evaluated. Based on the evaluation, such patient data can be used to form ground truth or known, verified patient data to be relied upon for training and/or testing an AI model. Patterns or trends in the data can also be analyzed for cause and effect and associated adjustment to patient diagnosis, treatment, etc. Patients not following a pattern (e.g., outliers or anomalies, etc.) can be discarded or ignored for the training/test data set, for example.
  • In certain examples, using the “forest” interface 900 of FIG. 9, a group and/or subgroup of patients can be selected to trigger extraction of a data set to output for training, testing, etc., of one or more AI models. Selection of a subset of a tree 830, 910, 920 via the interface 900 can trigger extraction and transmission (e.g., to be stored, to be used by a model generator/processor, etc.) of the data set associated with the subset, for example.
  • In certain examples, the data trees can be used to identify and evaluate individual patient information as well as determine group characteristics as with the example interfaces 800, 900. As such, a user can formulate a reliable data set for training and/or testing of an AI model and also leverage the data as actionable data for patient diagnosis, treatment, etc.
  • FIGS. 10A-10E illustrate a sequence of user interface screens corresponding to an example workflow for anomaly detection in patient data. As shown in the example of FIG. 10A, a multi-patient view interface 1000 provides representations 1010-1020 for a plurality of patients dynamically showing associated vitals and/or other physiological data (e.g., heart rate, blood pressure, oxygen saturation, etc.) including one or more warnings 1030, 1032, where applicable, for the respective patient. For example, the multi-patient view 1000 shows a real-time (or substantially real time given memory and/or processor latency, data transmission time, etc.) digest of physiological signals recorded over a period of time (e.g., the last five minutes, last ten minutes, last minute, etc.) for multiple patients. The patients shown in the multi-patient view 1000 can be associated with the patient representations shown in a tree 830, 910, 920, for example.
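  • A minimal sketch of such a rolling digest appears below, assuming one stored array of samples per patient; the sampling rate, window length, and warning rule are illustrative assumptions rather than details of the disclosure.

```python
import numpy as np

def digest(samples: np.ndarray, rate_hz: float, minutes: float = 5.0) -> dict:
    """Summarize the trailing window of a 1D physiological signal for a view tile."""
    window = samples[-int(minutes * 60 * rate_hz):]
    return {"mean": float(window.mean()),
            "min": float(window.min()),
            "max": float(window.max())}

heart_rate = 70.0 + np.random.randn(3600)   # one hour of 1 Hz samples
tile = digest(heart_rate, rate_hz=1.0)
tile["warning"] = tile["max"] > 120 or tile["min"] < 40  # assumed alert rule
```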
  • Using the example interface 1000, a patient representation 1010-1020 can be selected to trigger an expanded single-patient view 1040, such as shown in the example of FIG. 10B, showing an expanded view of the representation 1020 for the selected patient. For example, a doctor can click one of the displayed patient representations 1010-1020 to see more real-time signals from that patient in the single patient view 1040 of the example of FIG. 10B. The signals can convey phases of a patient's care such as induction, maintenance, and emergence phases of the patient's anesthesia, for example.
  • Whereas the multi-patient view 1000 may have a prioritized patient 1020, the single-patient view 1040 can include a prioritized event 1042. The example single-patient view 1040 can also include a button, icon, or other trigger 1045 to view a patient history for the patient displayed in the single view interface 1040. By clicking on the history data button 1045 in the single-patient view 1040, collected physiological signals for the patient over a given interval (e.g., in the past hour, the past 5 hours, the past 8 hours, etc.) are displayed. An example patient history view 1050, such as shown in the example of FIG. 10C, provides a holistic, qualitative graphical visualization of the collected patient waveform data over the designated time period (e.g., set by user response, set by preference, set by default, set by data availability, etc.). Thus, rather than looking at numbers or looking at particular waveforms, one or more AI constructs (e.g., hybrid RL, DL, DL+Hybrid RL, etc.) can process the 1D time series waveform data to formulate a block 1055 of visual values 1060-1068 for display. This view helps identify and highlight anomaly conditions detected by the AI clinical detection models. In the example of FIG. 10C, the patient was detected to have both sleep apnea 1070 and a seizure 1072, as demonstrated by the anomaly or change 1070, 1072 in the value of the respective signal 1060-1068.
  • The example interface of FIG. 10C transforms data into visual representations over a certain period of time, such as morning, afternoon, overnight, etc. Signal acquisition and transformation can be repeated at a different time of day, different day, same day of the week but a week later, etc., to provide a plurality of visual representations for comparison. The representations can be compared for the same patient, different patients undergoing the same procedure, etc. The representations can be stacked to form a tree 830, 910, 920, for example.
  • Selecting the indication of seizure 1072 triggers display of an example interface 1080, shown in FIG. 10D, to provide further detail regarding the event/anomaly 1072 in the patient data stripe 1068. In the example of FIG. 10D, the anomaly 1072 is a seizure with respect to a patient, and the detail interface view 1080 displays the waveform data associated with the anomaly 1072 represented in the processed patient data stripe 1068.
  • FIG. 10E provides an example graphical user interface 1090 providing a probability of seizure at a certain power over a period of time. As such, a user can trigger processing of the waveform from the interface 1080 of FIG. 10D to generate a results interface 1090 providing an analysis of the processed waveform data. In certain examples, the results can be interactive to drive detection, prediction, evaluation of causation, confidence score, etc.
  • Thus, the example of FIGS. 10A-10E illustrates a new, interactive, dynamic user interface to allow correlation, processing, and viewing of a plurality of sets of patient data, focus on one set of patient data, concentration on a subset of such patients, in-depth review of a particular patient, and a deep dive into source 1D data and associated analysis. In certain examples, the series of interfaces 1000, 1040, etc., can replace the prior interface upon opening, pop up and/or otherwise overlay the prior interface upon opening, etc. The interface allows a patient and/or group of patients to be analyzed, diagnosed, treated, etc., and also facilitates transformation of gathered patient data into a verified data set for training, testing, etc., of AI model(s), for example.
  • FIG. 11 illustrates an example time series data visualization system or apparatus 1100. The example system 1100 can be used to process 1D time series data from one or more patients to generate interactive visualization interfaces, such as the example interfaces of FIGS. 6-10E. The example system 1100 includes a communication interface 1110, an input processor 1120, a data processor 1130, a model builder 1140, a model deployer 1150, a visualization processor 1160, a user interface builder 1170, and an interaction processor 1180. The example system 1100 transforms data gathered from one or more medical devices, patient monitors, etc., into interactive graphical representations that provide a visual indication of content, status, severity, relevance, etc. The example system 1100 enables a new form of display and interaction with the interactive graphical representations and underlying time series data via a graphical user interface to manipulate the graphical representations individually, in blocks or clusters, with respect to multiple patients, with respect to a reference event, etc.
  • The example communication interface 1110 is to send and receive data to/from one or more sources such as sensors, other monitoring devices, medical devices, other machines, information systems, imaging systems, archives, etc. The example input processor 1120 is to clean (e.g., remove outlier data, interpolate missing data, adjust data format, etc.), normalize (e.g., with respect to a normal value, reference value, standard value, threshold, etc.) and/or otherwise process incoming data (e.g., monitored patient physiological data, logged machine data, electronic medical record data, etc.) for further processing by the system 1100.
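  • A minimal sketch of this input-processing stage, assuming pandas-style 1D series and a single scalar reference value, might look as follows; the function and variable names are hypothetical.

```python
import pandas as pd

def preprocess(series: pd.Series, reference: float) -> pd.Series:
    """Interpolate missing samples, then express values as deviation from a reference."""
    cleaned = series.interpolate(method="linear")  # fill interior gaps
    cleaned = cleaned.ffill().bfill()              # fill gaps at the edges
    return (cleaned - reference) / reference       # normalized deviation; 0 is "normal"

raw = pd.Series([98.0, None, 97.5, 96.0, None, 95.0])  # e.g., SpO2 with dropouts
normalized = preprocess(raw, reference=98.0)
```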
  • The example data processor 1130 processes the normalized and/or otherwise preprocessed data from the input processor 1120 to complete the normalization of data begun by the input processor, compare data provided by the input processor 1120 and/or directly from the communication interface 1110, prepare data for modeling (e.g., for training and/or testing a machine learning model, for visualization, for computer-aided diagnosis and/or detection, etc.), etc. In certain examples, the data processor 1130 can process data to convert the data into a graphical representation of relative or normalized values over time for a parameter or characteristic associated with the data (e.g., associated with a stream of 1D time series data, etc.). In other examples, the visualization processor 1160 converts the data into one or more graphical representations for visual review, comparison, interaction, etc.
  • The example model builder 1140 builds a machine learning model (e.g., trains and tests a supervised machine learning neural network and/or other learning model, etc.) using data from the communication interface 1110, input processor 1120, and/or data processor 1130. For example, the model builder 1140 can leverage normalized data, data transformed into the relative graphical visualization, etc., to train a machine learning model to correlate output(s) with input(s) and test the accuracy of the model. The example model deployer 1150 can deploy an executable network model once the model builder 1140 is satisfied with the training and testing. The deployed model can be used to process data, correlate an output (e.g., a graphical representation, identification of an anomaly, identification of a trend, etc.) with input data, convert waveform data to a relative graphical representation, etc.
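  • As a hedged sketch of this build-then-deploy flow, the snippet below uses scikit-learn as a stand-in for whatever learning framework the model builder 1140 wraps; the feature layout (one row of summary features per block, one label per block) and the accuracy check are assumptions for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features: one row per patient block, e.g., per-segment summary statistics.
X = np.random.rand(200, 8)
y = np.random.randint(0, 2, 200)   # e.g., reference event occurred / did not occur

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
model = RandomForestClassifier().fit(X_train, y_train)
accuracy = model.score(X_test, y_test)  # "satisfied with training and testing" gate
# Once accuracy is acceptable, the fitted model object can be serialized and deployed.
```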
  • In certain examples, the visualization processor 1160 converts one-dimensional time-series data into one or more graphical representations for visual review, comparison, interaction, etc. In other examples, the visualization processor 1160 organizes and correlates graphical representations with respect to a patient, a reference/emergency/triggering event, etc. The example visualization processor 1160 can be used to process the graphical representations of one or more data series (e.g., 1D time series data, other waveform data, other data, etc.) into one or more visual constructs such as blocks/clusters 810, 820, strips/bands/lines/segments 812-818, etc. The example visualization processor 1160 can correlate blocks, strips, etc., based on patient, location/organization/cohort, emergency event, other reference event or marker, etc.
  • The example user interface builder 1170 can construct an interactive graphical user interface from the graphical representations, model, and/or other data available in the system 1100. For example, the interface builder 1170 can generate one or more interfaces such as in the examples of FIGS. 6-10E and can generate a linked combination of interfaces such as shown in the example of FIGS. 10A-10E. The example interaction processor 1180 triggers user interface displays, data manipulation, graphical representation manipulation, processing of data, access to external system(s)/process(es), data transfer, storage, reporting, etc., via the one or more interfaces 700-1080 such as shown in the examples of FIGS. 6-10E.
  • FIG. 12 is a flow diagram of an example method 1200 to process 1D time series data. At block 1202, raw time series data is processed. For example, 1D waveform data from one or more sensors attached to and/or otherwise monitoring a patient, a medical device, other equipment, a healthcare environment, etc., can be processed by the example input processor 1120 to identify the data (e.g., type of data, format of data, source of data, etc.) and route the data appropriately.
  • At block 1204, a processing method to be applied to the data is determined. The processing method can be dynamically determined by the data processor 1130 based on the type of the data, source of the data, reason for exam, patient status, type of patient, associated healthcare professional, associated healthcare environment, etc. The processing method can be a bottom-up processing method or a top-down processing method, for example. When the processing method is to be a bottom-up processing method, at block 1206, the data is cleaned. For example, the data can be cleaned by the data processor 1130 to normalize the data with respect to other data and/or a reference/standard value. The data can be cleaned by the data processor 1130 to interpolate missing data in the time series, for example. The data can be cleaned by the data processor 1130 to adjust a format of the data, for example. At block 1208, outliers in the data are identified and filtered. For example, outlier data points that fall beyond a boundary, threshold, standard deviation, etc., are filtered (e.g., removed, separated, reduced, etc.) from the data being processed.
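  • One simple realization of the outlier step at block 1208, assuming a standard-deviation rule, is sketched below; the cut-off k and the NaN masking strategy are illustrative choices, not requirements of the method.

```python
import numpy as np

def filter_outliers(data: np.ndarray, k: float = 3.0) -> np.ndarray:
    """Mask samples more than k standard deviations from the mean for later interpolation."""
    mean, std = np.nanmean(data), np.nanstd(data)
    out = data.astype(float).copy()
    out[np.abs(out - mean) > k * std] = np.nan
    return out
```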
  • At block 1210, a model is built using the data. For example, the example model builder 1140 builds a machine learning model (e.g., trains and tests a supervised machine learning neural network and/or other learning model such as an unsupervised learning model, a deep learning model, a reinforcement learning model, a hybrid reinforcement learning model, etc.) using data from the communication interface 1110, input processor 1120, and/or data processor 1130. For example, the model builder 1140 can leverage normalized data, data transformed into the relative graphical visualization, etc., to train a machine learning model to correlate output(s) with input(s) and test the accuracy of the model.
  • At block 1212, the model is deployed. For example, the example model deployer 1150 can deploy an executable network model once the model builder 1140 is satisfied with the training and testing. The deployed model can be used to process data, correlate an output (e.g., a graphical representation, identification of an anomaly, identification of a trend, etc.) with input data, convert waveform data to a relative graphical representation, etc.
  • At block 1214, feedback is captured from use of the deployed model. For example, feedback can be captured from the deployed model itself, feedback can be captured from an application using the model, feedback can be captured from a human user, etc.
  • When the processing method is to be a top-down processing method, at block 1216, the data is visualized. For example, the example visualization processor 1160 can be used to process the data to transform the source waveform and/or other 1D time series data into graphical representations. The visualization processor 1160 can normalize and/or otherwise clean the data and transform the 1D data into one or more visual constructs such as blocks/clusters 810, 820, strips/lines/bands/segments 812-818, etc. The example visualization processor 1160 can correlate blocks, strips, etc., based on patient, location/organization/cohort, emergency event, other reference event or marker, etc. As such, multiple blocks for a single patient and/or blocks for multiple patients can be visualized and organized for data filtering, selection, etc. At block 1218, outliers in the data are identified and filtered. For example, outlier data points that fall beyond a boundary, threshold, standard deviation, etc., are filtered (e.g., removed, separated, reduced, etc.) by the data processor 1130 from the data being processed. Filtering and/or other removal of outliers can be automatic by the data processor 1130 and/or can be triggered by interaction with the interface, data visualization, etc.
  • At block 1220, a model is built using the data. For example, the example model builder 1140 builds a model (e.g., trains and tests a supervised machine learning neural network and/or other learning model such as an unsupervised learning model, a deep learning model, a reinforcement learning model, a hybrid reinforcement learning model, etc.) using data and associated graphical representations to cluster representations for a patient, group patients together in relative alignment around a trigger event (e.g., an emergency condition, an anomaly, a particular physiological value, etc.). The model can thereby learn how and when to group similar or dissimilar graphical representations, highlight anomalies in a visual manner, etc.
  • At block 1222, the model is deployed. For example, the example model deployer 1150 can deploy an executable model once the model builder 1140 is satisfied with the training and testing. The deployed model can be used to process data, correlate an output (e.g., a graphical representation, identification of an anomaly, identification of a trend, etc.) with input data, convert waveform data to a relative graphical representation, comparatively organize graphical representations according to one or more criteria, etc. As such, a graphical visualization can be generated from an output of the model. The model can be used to output prediction and/or detection results based on time-series data, and the output can be visualized graphically such as using the visualization processor 1160.
  • At block 1214, feedback is captured from use of the deployed model. For example, feedback can be captured from the deployed model itself, feedback can be captured from an application using the model, feedback can be captured from a human user, etc.
  • FIG. 13 is a flow diagram of an example method 1300 for dynamic generation and manipulation of a graphical user interface including visual, graphical representations of one-dimensional time-series data. At block 1302, time-series data is processed to normalize the data with respect to one or more reference values. For example, value(s) of the time-series data waveforms and/or other one-dimensional data stream can be adjusted (e.g., normalized) with respect to a reference value such as a normal value, a standard value, an accepted average value, an expected value, etc. The normalized data then expresses a degree or magnitude of difference from the reference value(s), which enables improved comparison of values, triggering of alerts, highlighting of anomalies, etc.
  • At block 1304, the normalized data is converted into one or more graphical representations of the underlying normalized 1D data. For example, normalized 1D time series data values can be provided to a deep learning model, such as an RL model, DL model, hybrid RL+DL model, etc., to convert the numerical value into a visual, graphical representation such as a line, strip, stripe, segment, bar, or band. For example, normalized heart rate waveform data can be fed into a hybrid RL+DL model to form a contiguous bar or strip graphical representation showing a trend, relative importance, anomaly, etc., in the underlying heart rate waveform data. A set of waveform data for a patient can be converted into a plurality of graphical representations (e.g., heart rate, blood pressure, lung volume, brain activity, etc.), for example. As such, normalized data is converted into a comparative visual representation based on a color, shading, texture, pattern, etc.
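  • As a simplified stand-in for the learned conversion described above (a fixed quantization rather than an RL/DL model), the sketch below buckets each normalized sample into one of a few color levels that can be rendered as a horizontal band; the bucket count and clipping range are assumptions.

```python
import numpy as np

def to_strip(normalized: np.ndarray, buckets: int = 5) -> np.ndarray:
    """Quantize normalized deviations into integer color levels, one per time step."""
    clipped = np.clip(normalized, -1.0, 1.0)
    edges = np.linspace(-1.0, 1.0, buckets - 1)
    return np.digitize(clipped, edges)  # render each level as a color/shade in the band

strip = to_strip(np.array([0.0, 0.1, -0.2, 0.8, -0.9]))  # small integer level per sample
```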
  • At block 1306, graphical representations are clustered for a given patient. For example, graphical representations of heart rate, blood pressure, brain wave activity, lung activity, etc., can be gathered together or clustered to be represented as a block of graphical representations for the patient. At block 1308, patient clusters are arranged with respect to a reference event. For example, a reference event, such as a stroke, seizure, fire, etc., can be used to align a plurality of patient clusters for visual comparison as to a point in the collection of data corresponding to the graphical representation at which the reference event occurred.
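  • A sketch of this clustering-and-alignment step, assuming each patient block is a (strips × time) array and the sample index of the reference event is known per block, could left-pad every block so the event columns line up; the padding value and data layout are assumptions.

```python
import numpy as np

def align_blocks(blocks: list, event_idx: list) -> list:
    """Left-pad each (strips x time) block so the reference event lands in the same column."""
    target = max(event_idx)
    return [np.pad(b, ((0, 0), (target - e, 0)), constant_values=np.nan)
            for b, e in zip(blocks, event_idx)]

patient_a = np.vstack([np.random.rand(100) for _ in range(4)])  # 4 strips, 100 samples
patient_b = np.vstack([np.random.rand(120) for _ in range(4)])
aligned = align_blocks([patient_a, patient_b], event_idx=[30, 55])  # events share column 55
```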
  • At block 1310, the arranged clusters/blocks are displayed via a graphical user interface. For example, as shown in the example interfaces of FIGS. 7-10E, blocks of graphical representations are displayed via the user interface for interaction alone, in conjunction with a reference event, in comparison with other blocks/clusters, etc. At block 1312, interaction with the blocks and constituent lines of graphical representation is facilitated via the graphical user interface. For example, a patient cluster or block can be selected for further review/interaction. An individual line of graphical representation can be selected for further review/interaction. For example, multiple blocks for a single patient can be selected and/or blocks representing multiple patients can be selected. An anomaly within a graphical representation of particular 1D data can be selected for review of/interaction with underlying 1D time series data, for example. In certain examples, all or some of the displayed representations can be selected to trigger generation of a data set for training and/or testing of one or more AI models.
  • At block 1314, an action is triggered with respect to underlying data based on the interaction with the graphical representation(s) of the user interface displayed. For example, associated time series data can be processed, combined with other 1D data, transmitted to another process/system, stored/reported in an electronic medical record, converted to an order (e.g., for labs, monitoring, etc.), etc. In certain examples, graphical representations selected to form a data set for training and/or testing of one or more AI models can be annotated via interaction to form a “ground truth” data set for model training, testing, etc. At block 1316, the user interface is updated based on interaction, triggered action, etc. For example, a change in the data, combination of data, further physiological and/or device monitoring, etc., can result in a change in graphical representation, an addition or subtraction of graphical representation, highlighting of an anomaly, identification of a correlation, etc., updated and displayed via the graphical user interface.
  • FIG. 14 is a flow diagram of an example method 1400 to facilitate interaction with graphical representations arranged and displayed via a graphical user interface (e.g., block 1312 of the example of FIG. 13). At block 1402, input with respect to the graphical user interface is processed. The input (e.g., user selection, program execution, access by another system or device, etc.) can trigger interaction with one or more elements of the graphical user interface.
  • At block 1404, interaction with a patient cluster of graphical representations is enabled. For example, a user and/or other program, device, system, etc., can interact with a patient cluster or block 810, 820. The block 810, 820 can be analyzed as a group or set of individual graphical representation lines/strips 812-818, 822-828 to determine pattern(s) for a patient, compare patients, reorder and/or otherwise adjust comparative positioning of patient blocks 810, 820, etc. For example, patient blocks 810, 820 can be positioned adjacent to each other to trigger a comparison of values. A reference or triggering event 840 can be activated with respect to patient blocks 810, 820 to trigger automated alignment of the blocks 810, 820 with respect to the event indicator 840.
  • At block 1406, interaction with a graphical representation is enabled. For example, a user and/or other program, device, system, etc., can interact with a strip 812-818, 822-828 to drill down to underlying data (e.g., as shown in the examples of FIGS. 6, 10D, 10E, etc.). Strips 812-818 can be selected for grouping into a data set for annotation and AI model training and/or testing, etc., for example. At block 1408, interaction with an anomaly in a graphical representation is enabled. For example, a user and/or other program, device, system, etc., can select an anomaly (e.g., the anomaly 1072 in the strip 1068) to view underlying signal data (e.g., as shown in the example of FIG. 10D), trigger analytics processing with respect to the selected anomaly (e.g., as shown in the example of FIG. 10E), etc. Alternatively or in addition, an anomaly or outlier can be excluded from a data set to be formed for AI model training, testing, etc.
  • At block 1410, interaction with the blocks, graphical representation elements, anomaly, etc., is processed. For example, additional data, underlying detail, application execution, rearrangement of elements on the graphical user interface, etc., can be processed based on the interaction at block 1404, 1406, and/or 1408. Control then reverts to block 1314 to trigger action with respect to underlying data based on the interaction.
  • Thus, certain examples provide a variety of displays and associated interactions to drive information retrieval, analysis, combination, correlation, patient care, and other healthcare workflows. In brief, as disclosed and described herein, it is envisioned that a graphical user interface can transition from any interface shown in FIGS. 5-10E to any other interface shown in FIGS. 5-10E.
  • For example, navigation can begin with the multi-patient view of FIG. 10A, from which a single patient can be selected to access the single patient view of FIG. 10B. In the single patient view, demographic data, historic information, vitals, captured signal data, etc., can be displayed. From the single patient view, a block graphical representation (e.g., FIG. 10C) can be displayed to visualize collected 1D signal data in a holistic, “block” graphical representation format for analysis, selection for AI model training/testing, etc. From the block representation of the single patient, a graphical representation line or band within the block can be selected to show the underlying signal data used to form the graphical representation (e.g., FIG. 10D). Alternatively or in addition, the multi-patient tree representational view of FIGS. 8 and/or 9 can be triggered by interaction with the block to show the patient's representation in comparison to graphical representations of other patients, for example.
  • In another example, navigation begins with a multi-patient graphical representation such as the tree of FIG. 8, the forest of FIG. 9, etc. Selection of a block within the multi-patient graphical representation transforms the display to a single patient representation of the associated block such as shown in the example of FIG. 7. From the single block, an individual graphical representation can be selected to display the underlying 1D signal data forming the graphical representation (e.g., FIG. 10D). Alternatively or in addition, selection of the block can trigger generation of a single-patient view, such as the single patient interface view of FIG. 10B, to show information for the patient including signal waveforms forming the graphical representations of the block, for example.
  • Other variations of graphical user interface transformation are envisioned, such as beginning with a multi-patient tree representation of FIGS. 8 and/or 9 and interacting with one or more blocks of the tree to transform the interface into the multi-patient view of FIG. 10A. From the multi-patient view, the single patient view of FIG. 10B can be selected, and interaction with signal values, etc., in the single patient view can trigger display of the single patient representation of FIG. 7 and/or back to the multi-patient representation of FIGS. 8 and/or 9.
  • While example implementations are disclosed and described herein, processes and/or devices disclosed and described herein can be combined, divided, re-arranged, omitted, eliminated and/or implemented in any other way. Further, components disclosed and described herein can be implemented by hardware, machine readable instructions, software, firmware and/or any combination of hardware, machine readable instructions, software and/or firmware. Thus, for example, components disclosed and described herein can be implemented by analog and/or digital circuit(s), logic circuit(s), programmable processor(s), application specific integrated circuit(s) (ASIC(s)), programmable logic device(s) (PLD(s)) and/or field programmable logic device(s) (FPLD(s)). When reading any of the apparatus or system claims of this patent to cover a purely software and/or firmware implementation, at least one of the components is/are hereby expressly defined to include a tangible computer readable storage device or storage disk such as a memory, a digital versatile disk (DVD), a compact disk (CD), a Blu-ray disk, etc. storing the software and/or firmware.
  • Flowcharts representative of example machine readable instructions for implementing components are disclosed and described herein. In the examples, the machine readable instructions include a program for execution by a processor. The program may be embodied in machine readable instructions stored on a tangible computer readable storage medium such as a CD-ROM, a floppy disk, a hard drive, a digital versatile disk (DVD), a Blu-ray disk, or a memory associated with the processor, but the entire program and/or parts thereof could alternatively be executed by a device other than the processor and/or embodied in firmware or dedicated hardware. Further, although the example program is described with reference to flowchart(s), many other methods of implementing the components disclosed and described herein may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined. Although the flowchart(s) depict example operations in an illustrated order, these operations are not exhaustive and are not limited to the illustrated order. In addition, various changes and modifications may be made by one skilled in the art within the spirit and scope of the disclosure. For example, blocks illustrated in the flowchart may be performed in an alternative order or may be performed in parallel.
  • As mentioned above, the example process(es) can be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a tangible computer readable storage medium such as a hard disk drive, a flash memory, a read-only memory (ROM), a compact disk (CD), a digital versatile disk (DVD), a cache, a random-access memory (RAM) and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term tangible computer readable storage medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media. As used herein, “tangible computer readable storage medium” and “tangible machine readable storage medium” are used interchangeably. Additionally or alternatively, the example process(es) can be implemented using coded instructions (e.g., computer and/or machine readable instructions) stored on a non-transitory computer and/or machine readable medium such as a hard disk drive, a flash memory, a read-only memory, a compact disk, a digital versatile disk, a cache, a random-access memory and/or any other storage device or storage disk in which information is stored for any duration (e.g., for extended time periods, permanently, for brief instances, for temporarily buffering, and/or for caching of the information). As used herein, the term non-transitory computer readable medium is expressly defined to include any type of computer readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media. As used herein, when the phrase “at least” is used as the transition term in a preamble of a claim, it is open-ended in the same manner as the term “comprising” is open ended. In addition, the term “including” is open-ended in the same manner as the term “comprising” is open-ended.
  • FIG. 15 is a block diagram of an example processor platform 1500 structured to execute the instructions of FIGS. 12-14 to implement, for example, the example apparatus 1100 of FIG. 11. The processor platform 1500 can be, for example, a server, a personal computer, a workstation, a self-learning machine (e.g., a neural network), a mobile device (e.g., a cell phone, a smart phone, a tablet such as an iPad™), a personal digital assistant (PDA), an Internet appliance, a gaming console, a personal video recorder, a set top box, a headset or other wearable device, or any other type of computing device.
  • The processor platform 1500 of the illustrated example includes a processor 1512. The processor 1512 of the illustrated example is hardware. For example, the processor 1512 can be implemented by one or more integrated circuits, logic circuits, microprocessors, GPUs, DSPs, or controllers from any desired family or manufacturer. The hardware processor may be a semiconductor based (e.g., silicon based) device. In this example, the processor 1512 implements the example apparatus 1100 but can also be used to implement other systems disclosed herein such as systems 100, 200, 300, 400, etc.
  • The processor 1512 of the illustrated example includes a local memory 1513 (e.g., a cache). The processor 1512 of the illustrated example is in communication with a main memory including a volatile memory 1514 and a non-volatile memory 1516 via a bus 1518. The volatile memory 1514 may be implemented by SDRAM, DRAM, RDRAM®, and/or any other type of random access memory device. The non-volatile memory 1516 may be implemented by flash memory and/or any other desired type of memory device. Access to the main memory 1514, 1516 is controlled by a memory controller.
  • The processor platform 1500 of the illustrated example also includes an interface circuit 1520. The interface circuit 1520 may be implemented by any type of interface standard, such as an Ethernet interface, a USB, a Bluetooth® interface, an NFC interface, and/or a PCI express interface.
  • In the illustrated example, one or more input devices 1522 are connected to the interface circuit 1520. The input device(s) 1522 permit(s) a user to enter data and/or commands into the processor 1512. The input device(s) can be implemented by, for example, an audio sensor, a microphone, a camera (still or video), a keyboard, a button, a mouse, a touchscreen, a track-pad, a trackball, isopoint, and/or a voice recognition system.
  • One or more output devices 1524 are also connected to the interface circuit 1520 of the illustrated example. The output devices 1524 can be implemented, for example, by display devices (e.g., an LED, an OLED, an LCD, a CRT display, an IPS display, a touchscreen, etc.), a tactile output device, a printer, and/or speaker. The interface circuit 1520 of the illustrated example, thus, typically includes a graphics driver card, a graphics driver chip, and/or a graphics driver processor.
  • The interface circuit 1520 of the illustrated example also includes a communication device such as a transmitter, a receiver, a transceiver, a modem, a residential gateway, a wireless access point, and/or a network interface to facilitate exchange of data with external machines (e.g., computing devices of any kind) via a network 1526. The communication can be via, for example, an Ethernet connection, a DSL connection, a telephone line connection, a coaxial cable system, a satellite system, a line-of-site wireless system, a cellular telephone system, etc.
  • The processor platform 1500 of the illustrated example also includes one or more mass storage devices 1528 for storing software and/or data. Examples of such mass storage devices 1528 include floppy disk drives, hard drive disks, compact disk drives, Blu-ray disk drives, RAID systems, and DVD drives.
  • The machine executable instructions 1532 of FIGS. 12-14 may be stored in the mass storage device 1528, in the volatile memory 1514, in the non-volatile memory 1516, and/or on a removable non-transitory computer readable storage medium such as a CD or DVD.
  • From the foregoing, it will be appreciated that example methods, apparatus and articles of manufacture have been disclosed that improve graphical user interface generation, configuration, interaction, and display. The disclosed apparatus, systems, methods, and articles of manufacture improve the efficiency and effectiveness of the processor system, memory, and other associated circuitry by leveraging artificial intelligence models, transformations of waveform and/or other time-series data into comparative graphical representations, comparative analysis of patient data, etc. In certain examples, a deep learning model can convert one-dimensional data from monitoring of a patient, medical device(s), medical equipment, information system(s), etc., into a comparative graphical representation, such as a gradient-based graphical representation visually indicating a change in value over time for the respective data source/value. The disclosed methods, apparatus and articles of manufacture are accordingly directed to one or more improvement(s) in the functioning of a computer and/or other processor and its associated interface. The apparatus, methods, systems, instructions, and media disclosed herein are not implementable in a human mind and are not able to be manually implemented by a human user.
  • Although certain example methods, apparatus and articles of manufacture have been described herein, the scope of coverage of this patent is not limited thereto. On the contrary, this patent covers all methods, apparatus and articles of manufacture fairly falling within the scope of the claims of this patent.

Claims (21)

What is claimed is:
1. A time series data visualization apparatus comprising:
a data processor to process one-dimensional data captured over time with respect to one or more patients, the data processed to normalize the data with respect to a reference;
a visualization processor to transform the processed data into a plurality of graphical representations visually indicating a change over time in the data and to cluster the plurality of graphical representations into at least a first block and a second block arranged with respect to an indicator of a criterion to provide a visual comparison of the first block and the second block with respect to the criterion;
an interface builder to construct a graphical user interface to display the at least first and second blocks of graphical representations; and
an interaction processor to facilitate interaction, via the graphical user interface, with the first and second blocks of graphical representations to extract a data set for processing from at least a subset of the first and second blocks.
2. The apparatus of claim 1, wherein each graphical representation is displayed as a bar using at least one of a color, a pattern, a texture, or a gradient.
3. The apparatus of claim 1, wherein the one-dimensional data is to be captured from at least one of a sensor monitoring a physiological signal of the patient or a medical device operating with respect to a patient.
4. The apparatus of claim 1, wherein the interaction to extract the data set for processing is to include selecting at least a subset of the first and second blocks for at least one of training, testing, or validation of an artificial intelligence model.
5. The apparatus of claim 1, wherein selection of the first block is to trigger display of a single patient view including one or more waveform signals associated with the first block.
6. The apparatus of claim 1, wherein the processing of the extracted data set is to include analyzing a pattern of data for one or more patients associated with the extracted data set.
7. The apparatus of claim 1, wherein the indicator of the criterion includes a visual indication of an event, and wherein the first block and the second block represent at least one of a) two occurrences of the event for one patient or b) one occurrence of the event for two patients.
8. The apparatus of claim 1, wherein the indicator is a first indicator and the criterion is a first criterion, and wherein the first block and the second block arranged with respect to the first indicator of the first criterion form a first tree representation, the first tree displayed via the graphical user interface with a second tree, the second tree including the first block and the second block arranged with respect to a second indicator of a second criterion.
9. The apparatus of claim 1, wherein the interface builder is to build a multi-patient view to be displayed via the graphical user interface, wherein the interaction processor is to facilitate selection of a patient within the multi-patient view to trigger a single-patient view displayed by the interface builder via the graphical user interface, wherein the interaction processor is to facilitate interaction with the single-patient view to trigger display of the first block from the single-patient view via the graphical user interface and to facilitate selection of a first graphical representation within the first block to display, via the graphical user interface, the one-dimensional data associated with the first graphical representation.
10. The apparatus of claim 1, wherein the interaction processor is to facilitate selection of a patient within a multi-patient interface view to trigger display of the first block via the graphical user interface by the interface builder, wherein the interaction processor is to facilitate selection of the first block via the graphical user interface to trigger display, by the interface builder via the graphical user interface, of a single-patient view including one-dimensional data associated with the first block.
11. At least one tangible computer-readable storage medium comprising instructions that, when executed, cause at least one processor to at least:
process one-dimensional data captured over time with respect to one or more patients, the data processed to normalize the data with respect to a reference;
transform the processed data into a plurality of graphical representations visually indicating a change over time in the data;
cluster the plurality of graphical representations into at least a first block and a second block arranged with respect to an indicator of a criterion to provide a visual comparison of the first block and the second block with respect to the criterion, the first block, the second block, and the indicator to be displayed via a graphical user interface; and
facilitate interaction, via the graphical user interface, with the first and second blocks of graphical representations to extract a data set for processing from at least a subset of the first and second blocks.
12. The at least one computer-readable storage medium of claim 11, wherein the instructions, when executed, cause the at least one processor to display each graphical representation as a bar using at least one of a color, a pattern, a texture, or a gradient.
13. The at least one computer-readable storage medium of claim 11, wherein the one-dimensional data is to be captured from at least one of a sensor monitoring a physiological signal of the patient or a medical device operating with respect to a patient.
14. The at least one computer-readable storage medium of claim 13, wherein the interaction to extract the data set for processing is to include selecting at least a subset of the first and second blocks for at least one of training, testing, or validation of an artificial intelligence model.
15. The at least one computer-readable storage medium of claim 11, wherein the instructions, when executed, cause the processor, in response to selection of the first block, to trigger display of a single patient view including one or more waveform signals associated with the first block.
16. The at least one computer-readable storage medium of claim 11, wherein the processing of the extracted data set is to include analyzing a pattern of data for one or more patients associated with the extracted data set.
17. The at least one computer-readable storage medium of claim 11, wherein the indicator of the criterion includes a visual indication of an event, and wherein the first block and the second block represent at least one of a) two occurrences of the event for one patient or b) one occurrence of the event for two patients.
18. The at least one computer-readable storage medium of claim 11, wherein the indicator is a first indicator and the criterion is a first criterion, and wherein the instructions, when executed, cause the at least one processor to arrange the first block and the second block with respect to a first indicator of the first criterion to form a first tree representation, the first tree to be displayed via the graphical user interface with a second tree, the second tree including the first block and the second block arranged with respect to a second indicator of a second criterion.
19. The at least one computer-readable storage medium of claim 11, wherein the instructions, when executed, cause the at least one processor to:
display, based on selection of the first block, the first block via the graphical user interface; and
display, based on selection of the first block via the graphical user interface, a single-patient view including one-dimensional data associated with the first block.
20. A computer-implemented method for medical machine time-series event data processing and visualization, the method comprising:
processing one-dimensional data captured over time with respect to one or more patients, the data processed to normalize the data with respect to a reference;
transforming the processed data into a plurality of graphical representations visually indicating a change over time in the data;
clustering the plurality of graphical representations into at least a first block and a second block arranged with respect to an indicator of a criterion to provide a visual comparison of the first block and the second block with respect to the criterion; and
facilitating interaction, via a graphical user interface, with the first and second blocks of graphical representations to extract a data set for processing from at least a subset of the first and second blocks.
21. The method of claim 20, further including:
displaying, based on selection of the first block, the first block via the graphical user interface; and
displaying, based on selection of the first block via the graphical user interface, a single-patient view including one-dimensional data associated with the first block.
US16/656,034 2019-04-24 2019-10-17 Visualization of medical device event processing Abandoned US20200342968A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/656,034 US20200342968A1 (en) 2019-04-24 2019-10-17 Visualization of medical device event processing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962838022P 2019-04-24 2019-04-24
US16/656,034 US20200342968A1 (en) 2019-04-24 2019-10-17 Visualization of medical device event processing

Publications (1)

Publication Number Publication Date
US20200342968A1 true US20200342968A1 (en) 2020-10-29

Family ID=72917105

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/656,034 Abandoned US20200342968A1 (en) 2019-04-24 2019-10-17 Visualization of medical device event processing
US16/697,736 Active 2040-03-02 US11404145B2 (en) 2019-04-24 2019-11-27 Medical machine time-series event data processor

Family Applications After (1)

Application Number Title Priority Date Filing Date
US16/697,736 Active 2040-03-02 US11404145B2 (en) 2019-04-24 2019-11-27 Medical machine time-series event data processor

Country Status (3)

Country Link
US (2) US20200342968A1 (en)
KR (1) KR102480192B1 (en)
CN (1) CN111863236A (en)

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200243178A1 (en) * 2018-04-10 2020-07-30 Mobile Innovations Llc Advanced health monitoring system and method
US20210034974A1 (en) * 2019-07-31 2021-02-04 Royal Bank Of Canada Devices and methods for reinforcement learning visualization using immersive environments
CN112508170A (en) * 2020-11-19 2021-03-16 中南大学 Multi-correlation time sequence prediction system and method based on generation countermeasure network
CN113053530A (en) * 2021-04-15 2021-06-29 北京理工大学 Medical time series data comprehensive information extraction method
US20210201930A1 (en) * 2019-12-27 2021-07-01 Robert Bosch Gmbh Ontology-aware sound classification
US11055838B2 (en) * 2019-12-09 2021-07-06 GE Precision Healthcare LLC Systems and methods for detecting anomalies using image based modeling
CN113127716A (en) * 2021-04-29 2021-07-16 南京大学 Sentiment time sequence anomaly detection method based on saliency map
US20210226841A1 (en) * 2019-10-30 2021-07-22 T-Mobile Usa, Inc. Network fault detection and quality of service improvement systems and methods
US20210290139A1 (en) * 2019-11-13 2021-09-23 Industry Academic Cooperation Foundation Of Yeungnam University Apparatus and method for cardiac signal processing, monitoring system comprising the same
US20210304020A1 (en) * 2020-03-17 2021-09-30 MeetKai, Inc. Universal client api for ai services
CN113505716A (en) * 2021-07-16 2021-10-15 重庆工商大学 Training method of vein recognition model, and recognition method and device of vein image
CN113535399A (en) * 2021-07-15 2021-10-22 电子科技大学 NFV resource scheduling method, device and system
US11164044B2 (en) * 2019-12-20 2021-11-02 Capital One Services, Llc Systems and methods for tagging datasets using models arranged in a series of nodes
US11216621B2 (en) * 2020-04-29 2022-01-04 Vannevar Labs, Inc. Foreign language machine translation of documents in a variety of formats
US20220031208A1 (en) * 2020-07-29 2022-02-03 Covidien Lp Machine learning training for medical monitoring systems
CN114611015A (en) * 2022-03-25 2022-06-10 阿里巴巴达摩院(杭州)科技有限公司 Interactive information processing method and device and cloud server
CN114712643A (en) * 2022-02-21 2022-07-08 深圳先进技术研究院 Mechanical ventilation man-machine asynchronous detection method and device based on graph neural network
CN114913968A (en) * 2022-07-15 2022-08-16 深圳市三维医疗设备有限公司 Medical equipment state monitoring system and method based on artificial intelligence
US11483370B2 (en) * 2019-03-14 2022-10-25 Hewlett-Packard Development Company, L.P. Preprocessing sensor data for machine learning
US20220359050A1 (en) * 2019-08-19 2022-11-10 Apricity Health, LLC System and method for digital therapeutics implementing a digital deep layer patient profile
US20220378377A1 (en) * 2021-05-28 2022-12-01 Strados Labs, Inc. Augmented artificial intelligence system and methods for physiological data processing
WO2022212771A3 (en) * 2021-03-31 2022-12-29 Sirona Medical, Inc. Systems and methods for artificial intelligence-assisted image analysis
US11556678B2 (en) * 2018-12-20 2023-01-17 Dassault Systemes Designing a 3D modeled object via user-interaction
US20230019194A1 (en) * 2021-07-16 2023-01-19 Dell Products, L.P. Deep Learning in a Virtual Reality Environment
WO2023036633A1 (en) * 2021-09-07 2023-03-16 Koninklijke Philips N.V. Systems and methods for evaluating reliability of a patient early warning score
US11763949B1 (en) * 2022-02-01 2023-09-19 Allegheny Singer Research Institute Computer-based tools and techniques for optimizing emergency medical treatment
CN117012374A (en) * 2023-10-07 2023-11-07 之江实验室 Medical follow-up system and method integrating event map and deep reinforcement learning
US11869497B2 (en) 2020-03-10 2024-01-09 MeetKai, Inc. Parallel hypothetical reasoning to power a multi-lingual, multi-turn, multi-domain virtual assistant
US11921712B2 (en) 2020-10-05 2024-03-05 MeetKai, Inc. System and method for automatically generating question and query pairs
US11983909B2 (en) 2019-03-14 2024-05-14 Hewlett-Packard Development Company, L.P. Responding to machine learning requests from multiple clients

Families Citing this family (74)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018176000A1 (en) 2017-03-23 2018-09-27 DeepScale, Inc. Data synthesis for autonomous control systems
US11157441B2 (en) 2017-07-24 2021-10-26 Tesla, Inc. Computational array microprocessor system using non-consecutive data formatting
US11409692B2 (en) 2017-07-24 2022-08-09 Tesla, Inc. Vector computational unit
US10671349B2 (en) 2017-07-24 2020-06-02 Tesla, Inc. Accelerated mathematical engine
US11893393B2 (en) 2017-07-24 2024-02-06 Tesla, Inc. Computational array microprocessor system with hardware arbiter managing memory requests
US11561791B2 (en) 2018-02-01 2023-01-24 Tesla, Inc. Vector computational unit receiving data elements in parallel from a last row of a computational array
US11215999B2 (en) 2018-06-20 2022-01-04 Tesla, Inc. Data pipeline and deep learning system for autonomous driving
US11361457B2 (en) 2018-07-20 2022-06-14 Tesla, Inc. Annotation cross-labeling for autonomous control systems
US11636333B2 (en) 2018-07-26 2023-04-25 Tesla, Inc. Optimizing neural network structures for embedded systems
US11562231B2 (en) 2018-09-03 2023-01-24 Tesla, Inc. Neural networks for embedded devices
CN115512173A (en) 2018-10-11 2022-12-23 特斯拉公司 System and method for training machine models using augmented data
US11196678B2 (en) 2018-10-25 2021-12-07 Tesla, Inc. QOS manager for system on a chip communications
US11816585B2 (en) 2018-12-03 2023-11-14 Tesla, Inc. Machine learning models operating at different frequencies for autonomous vehicles
US11537811B2 (en) 2018-12-04 2022-12-27 Tesla, Inc. Enhanced object detection for autonomous vehicles based on field view
US11610117B2 (en) 2018-12-27 2023-03-21 Tesla, Inc. System and method for adapting a neural network model on a hardware platform
US10997461B2 (en) 2019-02-01 2021-05-04 Tesla, Inc. Generating ground truth for machine learning from time series elements
US11567514B2 (en) 2019-02-11 2023-01-31 Tesla, Inc. Autonomous and user controlled vehicle summon to a target
US10956755B2 (en) 2019-02-19 2021-03-23 Tesla, Inc. Estimating object properties using visual image data
KR102572006B1 (en) 2019-02-21 2023-08-31 Theator Inc. Systems and methods for analysis of surgical video
US20200273560A1 (en) 2019-02-21 2020-08-27 Theator inc. Surgical image analysis to determine insurance reimbursement
US11697799B2 (en) 2019-04-15 2023-07-11 Ossium Health, Inc. System and method for extraction and cryopreservation of bone marrow
US11373093B2 (en) 2019-06-26 2022-06-28 International Business Machines Corporation Detecting and purifying adversarial inputs in deep learning computing systems
US11042799B2 (en) 2019-08-20 2021-06-22 International Business Machines Corporation Cohort based adversarial attack detection
US11017902B2 (en) * 2019-10-25 2021-05-25 Wise IOT Solutions System and method for processing human related data including physiological signals to make context aware decisions with distributed machine learning at edge and cloud
US11651194B2 (en) * 2019-11-27 2023-05-16 Nvidia Corp. Layout parasitics and device parameter prediction using graph neural networks
EP3836085A1 (en) * 2019-12-13 2021-06-16 Sony Corporation Multi-view three-dimensional positioning
KR20210080919A (en) * 2019-12-23 2021-07-01 Electronics and Telecommunications Research Institute Method and Apparatus for De-identification of Data
US11902327B2 (en) * 2020-01-06 2024-02-13 Microsoft Technology Licensing, Llc Evaluating a result of enforcement of access control policies instead of enforcing the access control policies
JP7244443B2 (en) * 2020-01-06 2023-03-22 Toshiba Corporation Information processing device, information processing method, and computer program
US20210209486A1 (en) * 2020-01-08 2021-07-08 Intuit Inc. System and method for anomaly detection for time series data
US20210232956A1 (en) * 2020-01-27 2021-07-29 GAVS Technologies Pvt. Ltd. Event correlation based on pattern recognition and machine learning
US20210287805A1 (en) * 2020-03-11 2021-09-16 National Taiwan University Systems and methods for prognosis prediction of acute myeloid leukemia patients
DE102020203848A1 (en) * 2020-03-25 2021-09-30 Siemens Healthcare Gmbh Method and device for controlling a medical device
US20210313050A1 (en) 2020-04-05 2021-10-07 Theator inc. Systems and methods for assigning surgical teams to prospective surgical procedures
US10853563B1 (en) * 2020-04-22 2020-12-01 Moveworks, Inc. Method and system for configuring form filling application to minimize form filling effort
US20210330259A1 (en) * 2020-04-28 2021-10-28 Vita Innovations, Inc. Vital-monitoring mask
US11551039B2 (en) * 2020-04-28 2023-01-10 Microsoft Technology Licensing, Llc Neural network categorization accuracy with categorical graph neural networks
KR102216236B1 (en) * 2020-05-27 2021-02-17 Medio Co., Ltd. Method for providing medical information related to health content based on AI (artificial intelligence), and apparatus for performing the method
CN111753543B (en) * 2020-06-24 2024-03-12 Beijing Baidu Netcom Science and Technology Co., Ltd. Medicine recommendation method and device, electronic equipment, and storage medium
EP4181675A4 (en) 2020-07-18 2024-04-24 Ossium Health Inc Permeation of whole vertebral bodies with a cryoprotectant using vacuum assisted diffusion
US20220015657A1 (en) * 2020-07-20 2022-01-20 X Development Llc Processing EEG data with twin neural networks
US20220039735A1 (en) * 2020-08-06 2022-02-10 X Development Llc Attention encoding stack in EEG trial aggregation
US20220084686A1 (en) * 2020-09-11 2022-03-17 International Business Machines Corporation Intelligent processing of bulk historic patient data
US11763947B2 (en) * 2020-10-14 2023-09-19 Etiometry Inc. System and method for providing clinical decision support
EP4228406A1 (en) 2020-10-14 2023-08-23 Ossium Health, Inc. Systems and methods for extraction and cryopreservation of bone marrow
US11195616B1 (en) * 2020-10-15 2021-12-07 Stasis Labs, Inc. Systems and methods using ensemble machine learning techniques for future event detection
CN112270451B (en) * 2020-11-04 2022-05-24 Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences Monitoring and early warning method and system based on reinforcement learning
US20220183748A1 (en) * 2020-12-16 2022-06-16 Biosense Webster (Israel) Ltd. Accurate tissue proximity
CN112612871B (en) * 2020-12-17 2023-09-15 Zhejiang University Multi-event detection method based on sequence generation model
CN117279650A (en) 2020-12-18 2023-12-22 Ossium Health, Inc. Cell therapy method
CN112699113B (en) * 2021-01-12 2022-08-05 Shanghai Jiao Tong University Industrial manufacturing process operation monitoring system driven by time-series data streams
JP2022127818A (en) * 2021-02-22 2022-09-01 Mitsubishi Electric Corporation Data analysis device, data analysis system, and program
WO2022182163A1 (en) * 2021-02-24 2022-09-01 JLK Inc. Method and device for automatically providing data processing, artificial intelligence model generation, and performance enhancement
CN112948716B (en) * 2021-03-05 2023-02-28 Guilin University of Electronic Technology Continuous point-of-interest package recommendation method based on multi-head attention mechanism
CN112966773B (en) * 2021-03-24 2022-05-31 Shanxi University Unmanned aerial vehicle flight condition mode identification method and system
US20220335347A1 (en) * 2021-04-15 2022-10-20 Business Objects Software Ltd Time-series anomaly prediction and alert
CN115688873A (en) * 2021-07-23 2023-02-03 EMC IP Holding Company LLC Graph data processing method, device, and computer program product
CN113642512B (en) * 2021-08-30 2023-10-24 Shenzhen Institute of Advanced Technology Ventilator asynchrony detection method, device, equipment, and storage medium
CN113868941A (en) * 2021-09-02 2021-12-31 Shenzhen Institute of Advanced Technology Training method and device for a patient-ventilator asynchrony detection model based on DQN reinforcement learning
CN113920213B (en) * 2021-09-27 2022-07-05 Shenzhen Technology University Multi-slice magnetic resonance imaging method and device based on long-range attention model reconstruction
CN113749622A (en) * 2021-09-30 2021-12-07 Hangzhou Dianzi University Automatic hypopnea and apnea identification system based on graph convolutional neural network
WO2023060399A1 (en) * 2021-10-11 2023-04-20 GE Precision Healthcare LLC Medical devices and methods of making medical devices for providing annotations to data
US11493665B1 (en) * 2021-10-19 2022-11-08 OspreyData, Inc. Machine learning approach for automated probabilistic well operation optimization
CN114366030B (en) * 2021-12-31 2024-04-09 Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences Intelligent auxiliary system and method for anesthesia operation
US20230351517A1 (en) * 2022-05-02 2023-11-02 Optum, Inc. System for predicting healthcare spend and generating fund use recommendations
US20230380771A1 (en) * 2022-05-26 2023-11-30 X Development Llc Classifying time series using reconstruction errors
US20240013902A1 (en) * 2022-07-07 2024-01-11 CalmWave, Inc. Information Management System and Method
KR20240037437A (en) * 2022-09-14 2024-03-22 Asan Social Welfare Foundation Method and device for generating synthetic patient dataset using local differential privacy based generative adversarial networks
WO2024080977A1 (en) * 2022-10-11 2024-04-18 Carefusion 303, Inc. Intelligent infusion based on anticipating procedural events
WO2024081343A1 (en) * 2022-10-14 2024-04-18 The Johns Hopkins University Systems and methods for acoustic-based diagnosis
WO2024085414A1 (en) * 2022-10-17 2024-04-25 Samsung Electronics Co., Ltd. Electronic device and control method thereof
CN117012348B (en) * 2023-05-26 2024-01-19 Changzhou Sabaimeige Medical Gas Equipment Co., Ltd. Visual operation management method and system for medical gas
CN116759041B (en) * 2023-08-22 2023-12-22 Zhejiang Lab Medical time-series data generation method and device considering diagnosis and treatment event relationships
CN117421548B (en) * 2023-12-18 2024-03-12 Sichuan Huhui Software Co., Ltd. Method and system for handling missing physiological index data based on a convolutional neural network

Family Cites Families (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050119534A1 (en) * 2003-10-23 2005-06-02 Pfizer, Inc. Method for predicting the onset or change of a medical condition
US9700219B2 (en) * 2013-10-17 2017-07-11 Siemens Healthcare Gmbh Method and system for machine learning based assessment of fractional flow reserve
US10490309B1 (en) * 2014-08-27 2019-11-26 Cerner Innovation, Inc. Forecasting clinical events from short physiologic timeseries
US20210076966A1 (en) * 2014-09-23 2021-03-18 Surgical Safety Technologies Inc. System and method for biometric data capture for event prediction
CN113571187A (en) * 2014-11-14 2021-10-29 ZOLL Medical Corporation Medical premonitory event estimation system and externally worn defibrillator
CN107615395B (en) * 2015-03-26 2021-02-05 Surgical Safety Technologies Inc. Operating room black box apparatus, system, method and computer readable medium for event and error prediction
CN113421652A (en) * 2015-06-02 2021-09-21 Infervision Medical Technology Co., Ltd. Method for analyzing medical data, method for training model and analyzer
US9652712B2 (en) * 2015-07-27 2017-05-16 Google Inc. Analyzing health events using recurrent neural networks
US20180107791A1 (en) * 2016-10-17 2018-04-19 International Business Machines Corporation Cohort detection from multimodal data and machine learning
US10074038B2 (en) * 2016-11-23 2018-09-11 General Electric Company Deep learning medical systems and methods for image reconstruction and quality evaluation
WO2018128927A1 (en) * 2017-01-05 2018-07-12 The Trustees Of Princeton University Hierarchical health decision support system and method
US10825167B2 (en) * 2017-04-28 2020-11-03 Siemens Healthcare Gmbh Rapid assessment and outcome analysis for medical patients
US10706534B2 (en) * 2017-07-26 2020-07-07 Scott Anderson Middlebrooks Method and apparatus for classifying a data point in imaging data
KR20200003407A (en) * 2017-07-28 2020-01-09 Google LLC Systems and methods for predicting and summarizing medical events from electronic health records
KR101843066B1 (en) * 2017-08-23 2018-05-15 VUNO Inc. Method for classifying data via data augmentation for machine learning, and apparatus using the same
KR101848321B1 (en) * 2017-10-27 2018-04-20 VUNO Inc. Method for facilitating diagnosis of subject based on fovea image thereof and apparatus using the same
CN107909621A (en) * 2017-11-16 2018-04-13 Shenzhen Weiteshi Technology Co., Ltd. Medical image synthesis method based on twin generative adversarial networks
WO2019117563A1 (en) * 2017-12-15 2019-06-20 Samsung Electronics Co., Ltd. Integrated predictive analysis apparatus for interactive telehealth and operating method therefor
US10937540B2 (en) * 2017-12-21 2021-03-02 International Business Machines Corporation Medical image classification based on a generative adversarial network trained discriminator
CN108309263A (en) * 2018-02-24 2018-07-24 Lepu Medical Technology (Beijing) Co., Ltd. Multi-parameter monitoring data analysis method and multi-parameter monitoring system
EP3547226A1 (en) * 2018-03-28 2019-10-02 Koninklijke Philips N.V. Cross-modal neural networks for prediction
US20190286990A1 (en) * 2018-03-19 2019-09-19 AI Certain, Inc. Deep Learning Apparatus and Method for Predictive Analysis, Classification, and Feature Detection
CN108491497B (en) * 2018-03-20 2020-06-02 Soochow University Medical text generation method based on generative adversarial network technology
US10810754B2 (en) * 2018-04-24 2020-10-20 Ford Global Technologies, Llc Simultaneous localization and mapping constraints in generative adversarial networks for monocular depth estimation
EP3573068A1 (en) * 2018-05-24 2019-11-27 Siemens Healthcare GmbH System and method for an automated clinical decision support system
CN109378064B (en) * 2018-10-29 2021-02-02 Nanjing Yijiyun Medical Data Research Institute Co., Ltd. Medical data processing method and device, electronic equipment, and computer-readable medium
CN109376862A (en) * 2018-10-29 2019-02-22 China University of Petroleum (East China) Time series generation method based on generative adversarial network
EP3895178A4 (en) * 2018-12-11 2022-09-14 K Health Inc. System and method for providing health information
CN109522973A (en) * 2019-01-17 2019-03-26 Yunnan University Medical big data classification method and system based on generative adversarial network and semi-supervised learning
US11593716B2 (en) * 2019-04-11 2023-02-28 International Business Machines Corporation Enhanced ensemble model diversity and learning

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9053222B2 (en) * 2002-05-17 2015-06-09 Lawrence A. Lynn Patient safety processor
WO2013036677A1 (en) * 2011-09-06 2013-03-14 The Regents Of The University Of California Medical informatics compute cluster
US20190371475A1 (en) * 2018-06-05 2019-12-05 Koninklijke Philips N.V. Generating and applying subject event timelines

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200243178A1 (en) * 2018-04-10 2020-07-30 Mobile Innovations Llc Advanced health monitoring system and method
US11556678B2 (en) * 2018-12-20 2023-01-17 Dassault Systemes Designing a 3D modeled object via user-interaction
US11983909B2 (en) 2019-03-14 2024-05-14 Hewlett-Packard Development Company, L.P. Responding to machine learning requests from multiple clients
US11483370B2 (en) * 2019-03-14 2022-10-25 Hewlett-Packard Development Company, L.P. Preprocessing sensor data for machine learning
US20210034974A1 (en) * 2019-07-31 2021-02-04 Royal Bank Of Canada Devices and methods for reinforcement learning visualization using immersive environments
US11720792B2 (en) * 2019-07-31 2023-08-08 Royal Bank Of Canada Devices and methods for reinforcement learning visualization using immersive environments
US20220359050A1 (en) * 2019-08-19 2022-11-10 Apricity Health, LLC System and method for digital therapeutics implementing a digital deep layer patient profile
US20210226841A1 (en) * 2019-10-30 2021-07-22 T-Mobile Usa, Inc. Network fault detection and quality of service improvement systems and methods
US11805006B2 (en) * 2019-10-30 2023-10-31 T-Mobile Usa, Inc. Network fault detection and quality of service improvement systems and methods
US20210290139A1 (en) * 2019-11-13 2021-09-23 Industry Academic Cooperation Foundation Of Yeungnam University Apparatus and method for cardiac signal processing, monitoring system comprising the same
US11055838B2 (en) * 2019-12-09 2021-07-06 GE Precision Healthcare LLC Systems and methods for detecting anomalies using image based modeling
US11164044B2 (en) * 2019-12-20 2021-11-02 Capital One Services, Llc Systems and methods for tagging datasets using models arranged in a series of nodes
US20220101057A1 (en) * 2019-12-20 2022-03-31 Capital One Services, Llc Systems and methods for tagging datasets using models arranged in a series of nodes
US20210201930A1 (en) * 2019-12-27 2021-07-01 Robert Bosch Gmbh Ontology-aware sound classification
US11295756B2 (en) * 2019-12-27 2022-04-05 Robert Bosch Gmbh Ontology-aware sound classification
US11869497B2 (en) 2020-03-10 2024-01-09 MeetKai, Inc. Parallel hypothetical reasoning to power a multi-lingual, multi-turn, multi-domain virtual assistant
US20210304020A1 (en) * 2020-03-17 2021-09-30 MeetKai, Inc. Universal client API for AI services
US11216621B2 (en) * 2020-04-29 2022-01-04 Vannevar Labs, Inc. Foreign language machine translation of documents in a variety of formats
US20220129646A1 (en) * 2020-04-29 2022-04-28 Vannevar Labs, Inc. Foreign language machine translation of documents in a variety of formats
US11640233B2 (en) * 2020-04-29 2023-05-02 Vannevar Labs, Inc. Foreign language machine translation of documents in a variety of formats
US20220031208A1 (en) * 2020-07-29 2022-02-03 Covidien Lp Machine learning training for medical monitoring systems
US11921712B2 (en) 2020-10-05 2024-03-05 MeetKai, Inc. System and method for automatically generating question and query pairs
CN112508170A (en) * 2020-11-19 2021-03-16 Central South University Multi-correlation time-series prediction system and method based on generative adversarial network
WO2022212771A3 (en) * 2021-03-31 2022-12-29 Sirona Medical, Inc. Systems and methods for artificial intelligence-assisted image analysis
CN113053530A (en) * 2021-04-15 2021-06-29 Beijing Institute of Technology Comprehensive information extraction method for medical time-series data
CN113127716A (en) * 2021-04-29 2021-07-16 Nanjing University Sentiment time-series anomaly detection method based on saliency maps
US20220378377A1 (en) * 2021-05-28 2022-12-01 Strados Labs, Inc. Augmented artificial intelligence system and methods for physiological data processing
CN113535399A (en) * 2021-07-15 2021-10-22 University of Electronic Science and Technology of China NFV resource scheduling method, device, and system
CN113505716A (en) * 2021-07-16 2021-10-15 Chongqing Technology and Business University Training method of vein recognition model, and recognition method and device of vein images
US20230019194A1 (en) * 2021-07-16 2023-01-19 Dell Products, L.P. Deep Learning in a Virtual Reality Environment
WO2023036633A1 (en) * 2021-09-07 2023-03-16 Koninklijke Philips N.V. Systems and methods for evaluating reliability of a patient early warning score
US11763949B1 (en) * 2022-02-01 2023-09-19 Allegheny Singer Research Institute Computer-based tools and techniques for optimizing emergency medical treatment
CN114712643A (en) * 2022-02-21 2022-07-08 Shenzhen Institute of Advanced Technology Patient-ventilator asynchrony detection method and device for mechanical ventilation based on a graph neural network
CN114611015A (en) * 2022-03-25 2022-06-10 Alibaba DAMO Academy (Hangzhou) Technology Co., Ltd. Interactive information processing method and device and cloud server
CN114913968A (en) * 2022-07-15 2022-08-16 Shenzhen Sanwei Medical Equipment Co., Ltd. Medical equipment state monitoring system and method based on artificial intelligence
CN117012374A (en) * 2023-10-07 2023-11-07 Zhejiang Lab Medical follow-up system and method integrating event graph and deep reinforcement learning

Also Published As

Publication number Publication date
CN111863236A (en) 2020-10-30
US20200337648A1 (en) 2020-10-29
US11404145B2 (en) 2022-08-02
KR20200124610A (en) 2020-11-03
KR102480192B1 (en) 2022-12-21
US20200342362A1 (en) 2020-10-29

Similar Documents

Publication Publication Date Title
US20200342968A1 (en) Visualization of medical device event processing
Ali et al. A systematic literature review of artificial intelligence in the healthcare sector: Benefits, challenges, methodologies, and functionalities
JP5694178B2 (en) Patient safety processor
US11061537B2 (en) Interactive human visual and timeline rotor apparatus and associated methods
US20190156947A1 (en) Automated information collection and evaluation of clinical data
CN111492437A (en) Method and system for supporting medical decision
Ng et al. The role of artificial intelligence in enhancing clinical nursing care: A scoping review
JP7222882B2 (en) Application of deep learning for medical image evaluation
Gupta et al. An overview of clinical decision support system (CDSS) as a computational tool and its applications in public health
US11984201B2 (en) Medical machine synthetic data and corresponding event generation
Nasarian et al. Designing interpretable ML system to enhance trust in healthcare: A systematic review to proposed responsible clinician-AI-collaboration framework
NVPS et al. Deep Learning for Personalized Health Monitoring and Prediction: A Review
Mishra Personalized functional health and fall risk prediction using electronic health records and in-home sensor data
Karki et al. DIABETES AND DIABETIC RETINOPATHY DETECTION
Saripalli Scalable and Data Efficient Deep Reinforcement Learning Methods for Healthcare Applications
MK et al. A Comprehensive Survey of Artificial Intelligence in Precision Healthcare: Shedding Light on Interpretability
Siddiqui QUANTIFYING TRUST IN DEEP LEARNING WITH OBJECTIVE EXPLAINABLE AI METHODS FOR ECG CLASSIFICATION
Ang et al. Healthcare Data Handling with Machine Learning Systems: A Framework
Almutairi An Optimized Feature Selection and Hyperparameter Tuning Framework for Automated Heart Disease Diagnosis.
Hagan Predictive Analytics in an Intensive Care Unit by Processing Streams of Physiological Data in Real-time
Saini et al. Wireless Sensor Networks and IoT Revolutionizing Healthcare: Advancements, Applications, and Future Directions
Xu et al. A Comprehensive Review on Synergy of Multi-Modal Data and AI Technologies in Medical Diagnosis
Reddy Translational Application of Artificial Intelligence in Healthcare:-A Textbook
Jethani Machine Learning for Knowledge Discovery: Modeling and Explaining High-Dimensional Healthcare Data
Aman Disease Prediction using Deep Learning Algorithms in Healthcare Sector

Legal Events

Date Code Title Description
AS Assignment

Owner name: GE PRECISION HEALTHCARE LLC, WISCONSIN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:AVINASH, GOPAL B.;MA, ZILI;PATI, DIBYAJYOTI;AND OTHERS;REEL/FRAME:050752/0194

Effective date: 20191016

AS Assignment

Owner name: GE PRECISION HEALTHCARE LLC, WISCONSIN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:AVINASH, GOPAL B.;ZHAO, QIAN;MA, ZILI;AND OTHERS;SIGNING DATES FROM 20191016 TO 20191025;REEL/FRAME:050842/0593

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION