WO2023158762A1 - AI-powered devices and methods to provide image and sensor informed early warning of changes in health - Google Patents

AI-powered devices and methods to provide image and sensor informed early warning of changes in health

Info

Publication number
WO2023158762A1
Authority
WO
WIPO (PCT)
Prior art keywords
data set
artificial intelligence
data
representation
intelligence algorithm
Prior art date
Application number
PCT/US2023/013257
Other languages
French (fr)
Inventor
Daniel K. Sodickson
Hersh CHANDARANA
Sumit Chopra
Original Assignee
New York University
Priority date
Filing date
Publication date
Application filed by New York University filed Critical New York University
Publication of WO2023158762A1 publication Critical patent/WO2023158762A1/en

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/05 Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves
    • A61B5/055 Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/68 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6801 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
    • A61B5/6802 Sensor mounted on worn items
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7271 Specific aspects of physiological measurement analysis
    • A61B5/7275 Determining trends in physiological measurement data; Predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/70 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients

Definitions

  • the present application relates generally to an imaging or sensing system. More specifically, the present application relates to an imaging or sensing system using artificial intelligence to process imaging and sensing data and create early warning of changes in health.
  • Imaging systems are used to acquire images of patients. Many different modes of imaging may be used to acquire images of patients. Medical imaging is used routinely to diagnose disease, to guide therapy, and/or to monitor the progress of disease in high-risk individuals. For example, imaging systems may include Magnetic Resonance Imaging (MRI), Computed Tomography (CT), Positron Emission Tomography (PET), Nuclear Imaging, ultrasound, and X-ray, among others. Imaging systems are highly valuable tools in research and clinical care.
  • Image quality may vary depending on the imaging system. Particularly for cross-sectional imaging devices like MRI, CT, and PET, there is generally a trade-off with cost increasing, mobility decreasing, and system size increasing as image quality increases. Further, image quality generally increases as imaging “power,” such as magnetic field strength in MRI, increases, and also as imaging time, such as duration of patient exposure to the imaging system for image capture, increases.
  • MRI imaging systems may cost millions of dollars, require minutes to hours to acquire images, occupy dozens of feet of floor space, and/or only be available at hospitals, outpatient centers, or research facilities.
  • imaging systems tend to be immobile and located at hospitals or research facilities.
  • imaging systems are spatially sparse, that is, there is a low density of imaging systems relative to either population or land mass.
  • this focus on generating high-quality images means that imaging data tends to be collected from patients infrequently, such as only upon visits to doctors.
  • overall imaging data for a given patient tends to be spatially and temporally sparse.
  • Sensing systems are used to acquire data relating to a patient.
  • Various modes of sensing exist, and sensing systems may include chemical sensors, sensors of physical properties such as pressure, temperature, impedance, or mechanical strain, and sensors of various other properties that may characterize bodies.
  • Some examples include, but are not limited to, bioimpedance sensors, skin conductance sensors, electrocardiograms (EKGs), electromyograms (EMGs), electroencephalograms (EEGs), radar sensors, near infrared (NIR) sensors, and accelerometers.
  • Unlike imaging systems, which may be composed of arrays of carefully-coordinated sensors, individual sensor systems do not generally probe the spatial organization of bodies or systems with high resolution.
  • At least one embodiment relates to a system for processing data.
  • the system includes a first monitoring device configured to generate a first data set associated with a patient, a database having historical data corresponding to previous patient data sets, and one or more processors operatively coupled to the first monitoring device.
  • the one or more processors is configured to receive the first data set from the first monitoring device, generate, by an artificial intelligence algorithm using the first data set, a first representation, process, by the artificial intelligence algorithm using the historical data, the first data set to define a first processed data set, and generate, by the artificial intelligence algorithm, a first output based on at least one of the first representation and the first processed data set.
  • At least one embodiment relates to a non-transitory processor-readable medium storing code representing instructions to be executed by one or more processors.
  • the instructions including code to cause the one or more processors to receive, from a first monitoring device configured to generate a first data set associated with a patient, the first data set, generate, by an artificial intelligence algorithm using the first data set, a first representation, process, by the artificial intelligence algorithm using historical data from a database, the first data set to define a first processed data set, the historical data corresponding to previous patient data sets, and generate, by the artificial intelligence algorithm, a first output based on at least one of the first representation and the first processed data set.
  • At least one embodiment relates to a method that includes receiving, from a first monitoring device configured to generate a first data set associated with a patient, the first data set, and generating, by an artificial intelligence algorithm using the first data set, a first representation.
  • the method further includes processing, by the artificial intelligence algorithm using historical data, the first data set to define a first processed data set, and generating, by the artificial intelligence algorithm, a first output based on at least one of the first representation and the first processed data set.
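As a concrete, purely illustrative reading of the claimed flow, the sketch below receives a data set, distills it into a representation, scores it against historical representations, and emits an output. The random projection stands in for a trained artificial intelligence algorithm, and the change score and flagging threshold are hypothetical, not taken from the application:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained encoder: a fixed random projection that maps a
# 64-sample data set to an 8-dimensional representation (feature vector).
W = rng.standard_normal((8, 64))

def generate_representation(data_set):
    """Distill a raw monitoring-device data set into a feature vector."""
    return W @ data_set

def process_with_history(representation, historical_reps):
    """Score the new representation against the patient's historical baseline."""
    baseline = np.mean(historical_reps, axis=0)
    return float(np.linalg.norm(representation - baseline))

# Simulated first data set, plus representations from two earlier exams.
first_data_set = rng.standard_normal(64)
historical = [generate_representation(rng.standard_normal(64)) for _ in range(2)]

rep = generate_representation(first_data_set)   # first representation
score = process_with_history(rep, historical)   # first processed data set, reduced to a score
output = "flag for review" if score > 10.0 else "no change detected"   # first output
print(rep.shape, output)
```

In a deployed system the encoder, scoring function, and threshold would all be learned from the historical data rather than fixed by hand.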
  • FIG. 1 is a schematic illustration of one embodiment of a Type 1 imaging system.
  • FIG. 2 is a schematic illustration of one embodiment of a Type 2 imaging system.
  • FIG. 3 is a schematic illustration of one embodiment of a Type 3 sensing system.
  • FIG. 4 is a method for processing monitoring device data, according to an embodiment.
  • FIG. 5 is a schematic illustration of one embodiment of a health diagnostic system configured to communicate with a Type 1, Type 2 and/or Type 3 imaging/sensing system.
  • the present disclosure relates to an imaging and/or sensing system and methods for detecting disease or changes in health.
  • the imaging and/or sensing system is configured to interact with a patient to generate a patient data set, where the generated patient data set may include a temporal component, for example, an MRI data set generated at a particular visit to a hospital.
  • the system includes imaging and/or sensing devices, artificial intelligence applied to the images and sets of data acquired from the imaging and/or sensing devices, and representations and outputs resulting from applying the artificial intelligence to the images and the sets of data.
  • an imaging system 100 is depicted according to an example embodiment.
  • the imaging system 100 is a Type 1 imaging system that provides improved imaging in traditional radiology settings.
  • Traditional radiology settings may include hospitals, outpatient imaging facilities, and image-aware physicians’ offices.
  • the imaging system 100 includes an imaging device 110, historical data 112, a representation 114, an image 120, a set of data 122, an artificial intelligence 130, and an output 140.
  • the imaging system 100 is configured to use the artificial intelligence 130 to generate the representation 114 and the output 140, the output 140 providing an image-informed early warning of changes in the health of a patient.
  • the imaging system 100 may be used repeatedly over time for an individual patient.
  • the imaging device 110 is a device that is configured to acquire images of a patient.
  • the imaging device 110 may be an MRI, CT, PET, Nuclear Imaging, ultrasound, X-ray machine, and/or a machine associated with another imaging modality.
  • the image 120 is an image of a patient acquired via use of the imaging device 110.
  • the set of data 122 may be information related to the image 120 or a separate image that shares properties with the image 120.
  • the set of data 122 may be acquired by the imaging device 110 or by a separate imaging device.
  • the imaging system 100 further includes historical data 112.
  • the historical data 112 includes previous information acquired by and/or created by the imaging system 100.
  • the historical data 112 may include previous images, sets of data, representations, and/or outputs.
  • the artificial intelligence 130 is configured to use the historical data 112 to generate the representation 114, inform the acquisition of new images and/or sets of data, and to allow for the detection of changes between images, sets of data, representations, and/or sub-combinations thereof over time.
  • the representation 114 is a model of an individual patient’s baseline state of health.
  • the representation 114 can take the form of a feature vector derived using the artificial intelligence 130, and generated from the image 120, and/or the set of data 122.
  • the artificial intelligence 130 distills the image 120 and/or the set of data 122 into the representation 114.
  • the representation 114 may be updated by the imaging system 100 when the patient undergoes a new imaging exam and a new image 120 and/or set of data 122 is acquired.
  • An individual representation, or a cumulative set of representations generated over time may be used to assess risk, characterize change, improve image quality, and/or establish trajectories of health for a patient over time.
  • Multiple representations for an individual subject that vary over time are included in the historical data 112.
  • updating a representation 114 with information from a new image 120 allows the imaging system 100 to acquire a smaller quantity of time-consuming imaging data for the image 120 than would otherwise be needed to generate a representation in the absence of previous images from the historical data 112.
  • the representations of the patient in the imaging system 100 may also inform how the imaging system 100 is configured to acquire new images 120 and/or sets of data 122, and/or how the imaging system 100 is configured to generate new outputs 140.
  • the artificial intelligence 130 is configured to receive the image 120 and/or the set of data 122 to generate the representation 114 and output 140.
  • the artificial intelligence 130 may be stored as part of the imaging device 110 or on a separate computing device.
  • the artificial intelligence 130 may be trained on the historical data 112 including previous images, sets of data, representations, and/or outputs from the imaging system 100.
  • the artificial intelligence 130 may use various machine learning models to generate the representation 114 and output 140.
  • the machine learning model includes representation learning to reduce high-dimensional data to low-dimensional data, making it easier to discover patterns and anomalies.
  • representation learning is supervised.
  • representation learning is self-supervised.
  • the artificial intelligence 130 may use a convolutional neural network structure.
  • the artificial intelligence 130 may use an autoencoder network structure.
  • the artificial intelligence 130 may use a Siamese neural network structure.
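The network structures above share one goal: mapping high-dimensional images into low-dimensional representations that are easy to compare. A minimal sketch of the Siamese idea follows, with a shared linear encoder standing in for trained convolutional branches; the encoder weights and "images" here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Shared encoder weights: the defining property of a Siamese network is
# that both inputs pass through the same parameters.
W = rng.standard_normal((4, 16))

def encode(image):
    # Reduce a 16-pixel "image" to a 4-dimensional representation.
    return W @ image

baseline_image = rng.standard_normal(16)
follow_up = baseline_image + 0.05 * rng.standard_normal(16)  # nearly unchanged exam
unrelated = rng.standard_normal(16)                          # a very different state

# Distances in representation space: a small change stays close to baseline,
# an unrelated input does not.
d_same = np.linalg.norm(encode(baseline_image) - encode(follow_up))
d_diff = np.linalg.norm(encode(baseline_image) - encode(unrelated))
print(d_same, d_diff)
```

A trained Siamese network would learn `W` (and nonlinear layers) so that clinically meaningful changes, rather than arbitrary pixel differences, separate points in the representation space.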
  • the artificial intelligence 130 provides the output 140.
  • the output may include automated image classification or interpretation, an image acquired with increased speed, and/or improved imaging quality, among other outputs 140.
  • the output 140 may then be provided back to the historical data 112 of the imaging system 100.
  • the artificial intelligence 130 uses the representation 114 and previous representations in historical data 112 to generate a configuration for data acquisition for the imaging device 110 as an output 140.
  • a neural network can be used to learn which data points are most important, and these data points can be used to choose what types of data to acquire.
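One simple instantiation of that idea is sketched below, using the magnitude of hypothetical learned first-layer weights as an importance score for each acquirable data point (e.g. a k-space line or sensor channel); the weights are random stand-ins for a trained network:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical first-layer weights of a trained network: one column per
# acquirable data point.
W = rng.standard_normal((8, 32))

# Rank data points by how strongly the network weights them, then choose
# the top-k most informative points for the next acquisition.
importance = np.linalg.norm(W, axis=0)
k = 8
acquire_next = np.argsort(importance)[::-1][:k]
print(sorted(acquire_next.tolist()))
```

More sophisticated schemes would score points by their effect on the downstream output rather than raw weight magnitude, but the acquisition loop is the same: rank, acquire, update.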
  • the output 140 may be an assessment of change, in which the artificial intelligence 130 assesses a new image 120, set of data 122, and/or representation 114 in comparison to previous images, sets of data, and representations from the historical data 112 to flag changes in individual subjects and/or to identify early trajectories toward known diseases or development of risk indicators.
  • known diseases may include cancer or a neurodegenerative disease. This allows for automatic detection of change over time, including subtle shifts that may not be discernible by a radiologist.
  • the artificial intelligence 130 may be used to compare an image 120 to previous images from the historical data 112 to generate an output 140 of a change map.
  • a change map shows the differences between previous images from the historical data 112 and the image 120 that is in the process of being acquired by the imaging device 110.
  • if the change map shows no evidence of changes between the previous images and the image 120 that is in the process of being acquired, then the acquisition of the image 120 may be stopped in a short time.
  • if changes are detected, the imaging device 110 may continue with the acquisition of the image 120.
  • the imaging system 100 may, for the creation of the output 140 of the change map, use any of the previous images from the historical data 112 to compare to the image 120 that is in the process of being acquired by the imaging device 110. Therefore, the artificial intelligence 130 allows for the imaging device 110 to gather only what is needed, reducing scanning time and computing resources. Change maps may also be used to assess the evolution of disease or response to therapy.
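A toy version of the change map and early-stopping rule described above is sketched below; the images are synthetic, image registration is omitted, and the noise threshold is illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

prior_image = rng.random((32, 32))        # previous image from historical data
current_image = prior_image.copy()        # image in the process of being acquired
current_image[10:14, 10:14] += 0.5        # simulated new finding

# Change map: voxelwise difference between the prior and current image.
# A real system would first register the two images; that step is omitted here.
change_map = np.abs(current_image - prior_image)

# Early-stopping rule: if no region changes beyond a noise threshold,
# acquisition can be terminated early (the threshold value is illustrative).
threshold = 0.2
keep_scanning = bool((change_map > threshold).any())
print(keep_scanning)  # True here: the simulated finding exceeds the threshold
```

When `keep_scanning` is false, the scanner can stop early; when true, acquisition continues and the change map itself becomes an output for review.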
  • the artificial intelligence 130 may be used to generate an image with improved image quality as the output 140.
  • the artificial intelligence 130 may use the historical data 112 and/or representation 114 to restore images that may be degraded, or of a lower initial quality, from less expensive and low-performing imaging systems to a higher quality expected from a more expensive, high-performance system.
  • the artificial intelligence 130 may incorporate representations to fill in missing information, providing an improved image.
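A minimal sketch of that fill-in step follows, with a stored baseline image playing the role of the representation; the 30% missing-pixel mask and noise level are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

baseline = rng.random((16, 16))                         # prior high-quality exam
degraded = baseline + 0.01 * rng.standard_normal((16, 16))
mask = rng.random((16, 16)) < 0.3                       # 30% of pixels missing
degraded[mask] = 0.0                                    # fast, incomplete scan

# Restore the missing pixels from the patient's baseline representation.
restored = degraded.copy()
restored[mask] = baseline[mask]

# Mean absolute error against the true baseline, before and after fill-in.
err_degraded = float(np.abs(degraded - baseline).mean())
err_restored = float(np.abs(restored - baseline).mean())
print(err_degraded, err_restored)
```

A learned model would blend prior and current information rather than copy pixels verbatim, so that genuine new findings are preserved while missing detail is filled in.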
  • the imaging system 100 is tailored for an output 140 of an early detection of pancreatic cancer.
  • Pancreatic cancer is an illness that is often detected too late in the progression of the illness for therapy to be effective.
  • the previous images from historical data 112 may be used to tailor acquisition by the imaging device 110 of new images 120 and sets of data 122 to hunt for subtle changes in pancreas features indicative of encroaching cancer.
  • the artificial intelligence 130 may be configured to have been trained to identify changes in pancreas features between image 120 and a previous image from the historical data 112.
  • an imaging system 200 is depicted according to an example embodiment.
  • the imaging system 200 is a Type 2 imaging system that enables rapid screening of patients in non-specialty settings.
  • Non-specialty settings may include nontraditional settings for advanced imaging, such as commercial settings like pharmacies or supermarkets, places of work, personal homes, or primary care physicians’ offices.
  • a Type 2 imaging system is trained on high-end images from a Type 1 imaging system and provides outputs 240 of targeted answers for non-experts.
  • the imaging system 200 includes an imaging device 210, historical data 212, a representation 214, an image 220, a set of data 222, an artificial intelligence 230, data from an imaging system 100, and an output 240.
  • the imaging system 200 is configured to use the artificial intelligence 230 to provide an image- informed early warning of changes in the health of a patient.
  • the imaging system 200 may be used repeatedly over time for an individual patient.
  • the imaging system 200 includes an imaging device 210 that is a stripped-down lower-performance scanner, such as a point-of-care scanner and/or a cheaper, more accessible medical imaging device.
  • the imaging device 210 is a low-field MRI machine, portable CT machine, and/or handheld ultrasound device.
  • the imaging device 210 is a device that is configured to acquire images of a patient to be used in the imaging system 200.
  • the image 220 is an image of a patient acquired via the imaging device 210.
  • the image 220, the set of data 222, the historical data 212, the artificial intelligence 230, the representation 214, and the output 240 of imaging system 200 are functionally and/or structurally similar to their respective components of imaging system 100 of FIG. 1.
  • the image 220 may be an image acquired via MRI, CT, PET, Nuclear Imaging, ultrasound, and/or X-ray imaging, among other imaging modalities.
  • a set of data 222 may be information related to an image 220 or may be another, separate image that shares the properties of an image 220.
  • the set of data 222 may be acquired by a device that is separate from the imaging device 210, or by the imaging device 210 itself.
  • the imaging system 200 includes historical data 212.
  • the historical data 212 includes previous information acquired by and/or created by the imaging system 200, such as by another Type 2 imaging system, by a Type 1 imaging system 100, or by a Type 3 sensing system 300 (discussed further with reference to FIG. 3).
  • the historical data 212 may include previous images, sets of data, representations, and/or outputs.
  • the artificial intelligence 230 is configured to use the historical data 212 to generate the representation 214, inform the acquisition of new images and/or set of data, and to allow for the detection of changes between images, sets of data, representations, and/or sub-combinations thereof over time.
  • the representation 214 is a model of an individual patient’s baseline state of health.
  • the representation 214 is a feature vector derived using the artificial intelligence 230, and generated from the image 220, and/or the set of data 222.
  • the artificial intelligence 230 distills the image 220 and/or the set of data 222 into the representation 214.
  • the representation 214 may be updated by the imaging system 200 when the patient undergoes a new imaging exam and a new image 220 and/or set of data 222 is acquired.
  • An individual representation, or a cumulative set of representations generated over time may be used to assess risk, characterize change, improve image quality, and/or establish trajectories of health for a patient over time.
  • Multiple representations for an individual subject that vary over time are included in the historical data 212.
  • updating a representation 214 with information from a new image 220 allows the imaging system 200 to acquire a smaller quantity of time-consuming imaging data for the image 220 than would otherwise be needed to generate a representation 214 in the absence of previous images from the historical data 212.
  • the representations of the patient in the imaging system 200 may inform how the imaging system 200 is configured to acquire new images 220 and/or sets of data 222, and/or how the imaging system 200 is configured to generate new outputs 240.
  • the artificial intelligence 230 is configured to receive the image 220 or the set of data 222 to generate the representation 214 and output 240.
  • the artificial intelligence 230 is further configured to be trained on historical data 212, including images, sets of data, representations, and/or outputs from the imaging system 200, as well as historical data 112 from the imaging system 100.
  • the artificial intelligence 230 may be stored as part of the imaging device 210 or on a separate computing device.
  • the artificial intelligence 230 may use various machine learning models to generate the representation 214 and output 240.
  • the machine learning model includes representation learning to reduce high-dimensional data to low-dimensional data, making it easier to discover patterns and anomalies.
  • representation learning is supervised, such as in a convolutional neural network.
  • representation learning is unsupervised, such as in an autoencoder network.
  • the artificial intelligence 230 may use a Siamese neural network structure.
  • the artificial intelligence 230 provides the output 240.
  • the output may include automated image interpretation, increased imaging speed, and/or improved imaging quality, among other outputs 240.
  • the output 240 may then be provided back to the historical data 212 of the imaging system 200.
  • the output 240 may be change detection, in which the artificial intelligence 230 assesses the new image 220, the set of data 222, and/or the representation 214 in comparison to previous images, sets of data, and representations from the historical data 212 to flag changes in individual subjects and/or to identify early trajectories toward known diseases or development of risk indicators.
  • known diseases may include cancer or a neurodegenerative disease. This allows for automatic detection of change over time, including subtle shifts that may not be discernible by a radiologist.
  • the artificial intelligence 230 may be used to compare an image 220 to previous images from the historical data 212 to generate an output 240 of a change map.
  • a change map shows the differences between previous images from the historical data 212 and the image 220 that is in the process of being acquired by the imaging device 210.
  • if the change map shows no evidence of changes between the previous images and the image 220 that is in the process of being acquired, then the acquisition of the image 220 may be stopped in a short time.
  • if changes are detected, the imaging device 210 may continue with the acquisition of the image 220.
  • the imaging system 200 may, for the creation of the output 240 of the change map, use any of the previous images from the historical data 212 to compare to the image 220 that is in the process of being acquired by the imaging device 210. Therefore, the artificial intelligence 230 allows for the imaging device 210 to gather only what is needed, reducing scanning time and computing resources. Change maps may also be used to assess the evolution of disease or response to therapy.
  • the artificial intelligence 230 may generate an output 240 of an image with improved image quality.
  • the artificial intelligence 230 may use the historical data 212 and/or representations 214 to restore images 220 that may be degraded to a higher quality more familiar from expensive, high performance machines as used in Type 1 imaging systems.
  • the artificial intelligence 230 may generate an output 240 of a clinical answer.
  • the artificial intelligence 230 may be trained end-to-end as a single neural network, allowing the imaging system 200 to go directly from raw data of an image 220 and/or a set of data 222 to an output 240 that is a clinical answer rather than an image.
  • the clinical answer is an indicator of a concerning change and/or an index of suspicion.
  • a clinical answer may be a yes-or-no answer to indicate the presence or absence of prostate cancer when using the imaging device 210 for prostate imaging.
  • a clinical answer may be a yes-or-no answer to indicate the presence or absence of cerebral bleeds when using the imaging device 210 for cerebral imaging.
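Such an end-to-end output can be caricatured as a single logistic unit mapping raw device data to a yes/no answer plus an index of suspicion; the weights, labels, and 0.5 cutoff below are hypothetical stand-ins for a trained network:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical trained parameters of an end-to-end model that maps raw
# device data directly to a clinical answer.
w = rng.standard_normal(64)
b = -0.5

def clinical_answer(raw_data):
    """Return a binary screening answer plus an index of suspicion in [0, 1]."""
    suspicion = 1.0 / (1.0 + np.exp(-(w @ raw_data + b)))
    answer = "refer for Type 1 imaging" if suspicion > 0.5 else "no finding"
    return answer, float(suspicion)

answer, suspicion = clinical_answer(rng.standard_normal(64))
print(answer, round(suspicion, 3))
```

Returning the continuous suspicion index alongside the binary answer lets a non-expert operator see borderline cases rather than only a hard yes/no.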
  • a healthcare provider may then direct the patient toward a specialist healthcare provider for Type 1 imaging from an imaging system 100. Therefore, non-imaging experts can provide effective and more routine screening for diseases.
  • the use of Type 2 imaging to apply artificial intelligence 230 to provide outputs 240 of clinical answers in place of images may facilitate the population-level screening of patients for the risk of known diseases such as prostate cancer or breast cancer, among others, with inexpensive imaging devices.
  • a sensing system 300 is depicted according to an example embodiment.
  • the sensing system 300 is a Type 3 sensing system that provides continuous health monitoring at the point of care, at work, and/or at home.
  • a Type 3 sensing system 300 may be a stand-alone wearable and/or environmental sensor array and may be trained on information from Type 1 and/or Type 2 imaging systems and may then provide an early warning of changes.
  • a Type 3 sensing system uses sensor data that is correlated, during training in the imaging settings of Type 1 and/or Type 2 imaging systems, with the imaging results of those systems, with the Type 3 sensing system standing in place of advanced imaging for the purpose of early warning of disease.
  • the sensing system 300 includes a sensing device 310, a set of data 322, an artificial intelligence 330, data from an imaging system 100 and/or an imaging system 200, and an output 340.
  • the sensing system 300 may further include historical data 312 and/or a representation 314.
  • the sensing system 300 is configured to use the artificial intelligence 330 to provide an image-informed early warning of changes in the health of a patient.
  • the sensing system 300 may be used repeatedly over time for an individual patient.
  • the sensing system 300 includes a sensing device 310 that is a sensor and/or an array of sensors that is configured to acquire a set of data 322 of a patient.
  • the device 310 may be a sensor for ultrasound, bioimpedance, electrocardiogram (EKG), electromyography (EMG), electroencephalography (EEG), radiofrequency (RF) pilot tone, ultra-wide band (UWB) radar, or near infrared (NIR), among other penetrating sensor modalities.
  • the device 310 may be an accelerometer, optical camera, three-dimensional/time-of-flight (3D/TOF) camera, or skin conductance sensor, among other sensor modalities.
  • the sensors may be wearable and may monitor health continuously or at regular intervals.
  • the sensors may be incorporated into clothing or attached to a body.
  • a sensing system 300 may have a sensing device 310 that is a smart underwear sensor for an output 340 of a detection of prostate cancer, a smart bra sensor for an output 340 of a detection of a breast cancer, or another wearable or environmental sensor to monitor health states as an output 340.
  • the set of data 322, the historical data 312, the representation 314, the artificial intelligence 330, and the output 340 of the sensing system 300 are functionally and/or structurally similar to their respective components of imaging system 100 of FIG. 1 and imaging system 200 of FIG. 2.
  • the sensing system 300 may include historical data 312.
  • the historical data 312 includes previous information acquired by and/or created by the sensing system 300, and/or by a Type 1 imaging system 100 and/or by a Type 2 imaging system 200.
  • the historical data 312 may include prior sets of data, representations, and outputs generated by the sensing device 310, or prior sets of data, representations, or outputs from other sensing or imaging systems.
  • the artificial intelligence 330 is configured to use the historical data 312 to generate the representation 314, inform the acquisition of new sets of data, and to allow for the detection of changes between images, sets of data, representations, and/or subcombinations thereof over time.
  • the representation 314 is a model of an individual patient’s baseline state of health.
  • the representation 314 is a feature vector derived using the artificial intelligence 330, and generated from the set of data 322.
  • the artificial intelligence 330 distills the set of data 322 into the representation 314.
  • the representation 314 may be updated by the sensing system 300 when a patient’s sensing device acquires a new set of data 322.
  • An individual representation, or a cumulative set of representations generated over time may be used to assess risk, characterize change, improve image quality, and/or establish trajectories of health for a patient over time.
  • the artificial intelligence 330 is configured to receive the set of data 322 to generate the representation 314 and output 340.
  • the artificial intelligence 330 may further be configured to be trained on historical data 312, including images, sets of data, representations, and/or outputs from the sensing system 300, as well as historical data 112 from imaging system 100 and historical data 212 from imaging system 200.
  • the artificial intelligence 330 may be stored as part of the sensing device 310 or on a separate computing device.
  • the artificial intelligence 330 may use various machine learning models to generate the representation 314 and output 340.
  • the machine learning model includes representation learning to reduce high-dimensional data to low-dimensional data, making it easier to discover patterns and anomalies.
  • representation learning is supervised, such as in a convolutional neural network.
  • representation learning is unsupervised, such as in an autoencoder network.
  • the artificial intelligence 330 may use a Siamese neural network structure.
  • the artificial intelligence 330 provides the output 340, and the artificial intelligence 330 may then provide the output 340 back to the historical data 312 of the sensing system 300 such that the sensing system 300 may use the historical data 312 to inform the acquisition of future sets of data 322 and/or to inform the future operation of an artificial intelligence 330.
  • the sensing system 300 provides indirect tomography through sensing device 310, trained through artificial intelligence 330 with high-end and/or low-end imaging machines from imaging systems 100 and/or imaging systems 200, to provide outputs 340 of spatially resolved information about changes in tissue anatomy or function and/or early warnings of concerning changes in representations of patients’ states of health.
  • the artificial intelligence 330 may be used for an output 340 of change detection, in which representations 314 are used to flag changes in individual subjects and/or to identify early trajectories toward known diseases or development of risk indicators.
  • the artificial intelligence 330 may compare a new set of data 322 to previous sets of data from the historical data 312 and report an output 340 of whether any differences have been determined between the new set of data 322 and the previous sets of data from the historical data 312.
  • the artificial intelligence 330 may compare a new representation 314 to a previous representation from the historical data 312 and report an output 340 of whether any differences have been determined between the new representation 314 and the previous representation from the historical data 312.
  • the artificial intelligence 330 may be used to compare the set of data 322 to previous sets of data from historical data 312 to generate an output 340 of a change map.
  • a change map shows the differences between the set of data 322 and previous sets of data.
  • the sensing device 310 acquires multiple sets of data for a patient in a sensing system 300.
  • multiple sets of data are stored in the historical data 312 and the sensing system 300 may, for the creation of the output 340 of the change map, use any of the previously acquired sets of data from the historical data 312 to compare to the set of data 322 that is in the process of being acquired by the sensing device 310.
  • Change maps may also be used to assess the evolution of disease or response to therapy.
  • the artificial intelligence 330 may be trained end-to-end as a single neural network, allowing the sensing system 300 to go directly from raw data of a set of data 322 to an output 340 that is a clinical answer rather than a set of data.
  • the clinical answer is an indicator of a concerning change and/or an index of suspicion.
  • the sensing system 300 could then direct the patient toward a specialist healthcare provider for Type 1 imaging from an imaging system 100 and/or toward a non-specialist healthcare provider for Type 2 imaging from an imaging system 200.
  • Type 3 imaging to apply artificial intelligence 330 to provide outputs 340 of clinical answers facilitates the continuous health monitoring of patients at the point of care, at work, and/or at home, with inexpensive sensors of device 310.
  • the sensing system 300 includes a device 310 that is an article of smart clothing with wearable sensors that are configured to detect changes from a patient’s baseline health or to provide a warning sign of a disease.
  • the device 310 may be smart underwear, which is underwear that includes sensors configured to generate sets of data 322 continuously.
  • the smart underwear may use the historical data 312 and/or representations that are derived from Type 1 imaging systems 100 and/or Type 2 imaging systems 200 to provide an output 340 that is a clinical answer on the presence or absence of an early stage of prostate cancer in the patient.
  • the device 310 may be a smart bra, which is a bra that includes sensors configured to generate sets of data 322 continuously.
  • the smart bra may use the historical data 312 and/or representations that are derived from Type 1 imaging systems 100 and/or Type 2 imaging systems 200 to provide an output 340 that is a clinical answer on the presence or absence of an early stage of breast cancer in the patient.
  • the device 310 may be a smart hat, which is a hat that includes sensors configured to generate sets of data 322 continuously.
  • the smart hat may use historical data 312 and/or representations that are derived from Type 1 imaging systems 100 and/or Type 2 imaging systems 200 to provide an output 340 that is a clinical answer of an assessment of the brain health of the patient.
  • FIG. 4 is a method 400 for processing monitoring device data, according to an embodiment.
  • the method 400 can be executed by at least one of the imaging system 100 of FIG. 1, the imaging system 200 of FIG. 2, and/or the sensing system 300 of FIG. 3.
  • the method 400 includes receiving, from a first monitoring device configured to generate the first data set associated with a patient, the first data set at 401, generating, by an artificial intelligence algorithm using the first data set, a first representation at 402, processing, by the artificial intelligence algorithm using historical data, the first data set to define a first processed data set at 403, the historical data corresponding to previous patient data sets, and generating, by the artificial intelligence algorithm, a first output based on at least one of the first representation and the first processed data at 404.
  • the method 400 includes receiving, from a first monitoring device configured to generate a first data set associated with a patient, the first data set.
  • the first monitoring device is an imaging device, such as the imaging device 110 of FIG. 1.
  • the first data set includes a set of data and an image.
  • the method 400 includes generating, by an artificial intelligence algorithm using the first data set, a first representation.
  • the artificial intelligence algorithm is functionally and/or structurally similar to the artificial intelligence 130 of FIG. 1.
  • the artificial intelligence algorithm includes representation learning.
  • the method 400 includes processing, by the artificial intelligence algorithm using the historical data, the first data set to define a first processed data set.
  • the historical data corresponds to previous patient data sets, which may include images, sets of data, representations, and/or outputs.
  • the method 400 includes generating a first output based on at least one of the first representation and the first processed data.
  • the output can include at least one of a clinical answer or a change map.
  • the method 400 includes repeating 401, 402, and 403, with a second monitoring device (e.g., the imaging device 210), as described in reference to FIG. 2.
  • the resulting second data set, second historical data, and second output are then utilized by the artificial intelligence.
  • the method 400 includes additionally repeating 401, 402, and 403 with a third monitoring device (e.g., the sensing device 310), as described in reference to FIG. 3.
  • the resulting third data set, third historical data, and third output are then utilized by the artificial intelligence.
  • FIG. 5 illustrates an embodiment of a health diagnostic system wherein a Type 1, Type 2, and/or Type 3 system may provide a data set to a first artificial intelligence 530.
  • the artificial intelligence 530 generates a representation of that data set corresponding to the one (or more) data sets from the respective imaging or sensing device at that respective time, such as a particular visit to a hospital.
  • the representation 514 is analyzed by a second artificial intelligence 535, which may be different than the first artificial intelligence 530, such as a different algorithm or an entirely different neural network.
  • the analysis of the representation 514 may utilize historical data from the Type 1, Type 2, and/or Type 3 systems to generate a new output 540.
  • the output 540 may be an indication of a health diagnosis, such as but not limited to an indication of a risk of a condition and/or the change in the risk of a condition.
  • the hardware and data processing components used to implement the various processes, operations, illustrative logics, logical blocks, modules and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose single- or multi-chip processor, a graphical processing unit (GPU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein.
  • a general purpose processor may be a microprocessor, or, any conventional processor, controller, microcontroller, or state machine.
  • a processor also may be implemented as a combination of computing devices, such as a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
  • particular processes and methods may be performed by circuitry that is specific to a given function.
  • the memory (e.g., memory unit, storage device) may be or include volatile memory or non-volatile memory, and may include database components, object code components, script components, or any other type of information structure for supporting the various activities and information structures described in the present disclosure.
  • the memory is communicably connected to the processor via a processing circuit and includes computer code for executing (e.g., by the processing circuit or the processor) the one or more processes described herein.
  • the present disclosure contemplates systems on any machine-readable media for accomplishing various operations.
  • the embodiments of the present disclosure may be implemented using existing computer processors, or by a special purpose computer processor for an appropriate system, incorporated for this or another purpose, or by a hardwired system.
  • Embodiments within the scope of the present disclosure include program products comprising machine-readable media for carrying or having machine-executable instructions or data structures stored thereon.
  • Such machine-readable media can be any available media that can be accessed by a general purpose or special purpose computer or other machine with a processor.
  • machine-readable media can comprise RAM, ROM, erasable programmable read-only memory (EPROM), electronically erasable programmable read-only memory (EEPROM), CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code in the form of machine-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer or other machine with a processor. Combinations of the above are also included within the scope of machine-readable media.
  • Machine-executable instructions include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing machines to perform a certain function or group of functions.
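The receive–represent–process–output loop of method 400 summarized above can be sketched in miniature. This is a hypothetical illustration only: the summary-statistic "encoder" and absolute-difference comparison below stand in for the trained networks and historical databases (e.g., historical data 312) that the disclosure actually contemplates.

```python
from dataclasses import dataclass, field

@dataclass
class MonitoringPipeline:
    """Toy sketch of method 400: data set -> representation -> comparison
    against history -> clinical-style output."""
    history: list = field(default_factory=list)  # prior representations (cf. historical data 312)

    def represent(self, data_set):
        # Step 402: distill the raw data set into a low-dimensional feature
        # vector. Crude summary statistics stand in for a learned encoder.
        n = len(data_set)
        mean = sum(data_set) / n
        var = sum((x - mean) ** 2 for x in data_set) / n
        return (mean, var)

    def process(self, representation):
        # Step 403: compare against the most recent historical representation.
        if not self.history:
            return 0.0
        prev = self.history[-1]
        return sum(abs(a - b) for a, b in zip(representation, prev))

    def step(self, data_set, threshold=1.0):
        # Steps 401 and 404: receive a data set, emit an output, and feed the
        # new representation back into the history for future comparisons.
        rep = self.represent(data_set)
        change = self.process(rep)
        self.history.append(rep)
        return "concerning change" if change > threshold else "no change detected"
```

Calling `step` repeatedly with similar data sets yields "no change detected"; a markedly shifted data set trips the threshold, mirroring the index-of-suspicion output described above.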


Abstract

At least one embodiment relates to a method that includes receiving, from a first monitoring device configured to generate a first data set associated with a patient, the first data set, and generating, by an artificial intelligence algorithm using the first data set, a first representation. The method further includes processing, by the artificial intelligence algorithm using historical data, the first data set to define a first processed data set, the historical data corresponding to previous patient data, and generating, by the artificial intelligence algorithm, a first output based on at least one of the first representation and the first processed data.

Description

AI-POWERED DEVICES AND METHODS TO PROVIDE IMAGE AND SENSOR INFORMED EARLY WARNING OF CHANGES IN HEALTH
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present application claims the benefit of priority to U.S. Provisional Application No. 63/310,975, filed February 16, 2022, the entire contents of which are incorporated herein by reference.
BACKGROUND
[0002] The present application relates generally to an imaging or sensing system. More specifically, the present application relates to an imaging or sensing system using artificial intelligence to process imaging and sensing data and create early warning of changes in health.
[0003] Imaging systems are used to acquire images of patients. Many different modes of imaging may be used to acquire images of patients. Medical imaging is used routinely to diagnose disease, to guide therapy, and/or to monitor the progress of disease in high-risk individuals. For example, imaging systems may include Magnetic Resonance Imaging (MRI), Computed Tomography (CT), Positron Emission Tomography (PET), Nuclear Imaging, ultrasound, and X-ray, among others. Imaging systems are highly valuable tools in research and clinical care.
[0004] Image quality may vary depending on the imaging system. Particularly for cross- sectional imaging devices like MRI, CT, and PET, there is generally a trade-off with cost increasing, mobility decreasing, and system size increasing as image quality increases. Further, image quality generally increases as imaging “power,” such as magnetic field strength in MRI, increases, and also as imaging time, such as duration of patient exposure to the imaging system for image capture, increases.
[0005] Because of a focus on the need to acquire high quality images directly from imaging a patient with a device, many imaging systems are expensive, slow, difficult to access, and/or require a bulky apparatus. For example, MRI imaging systems may cost millions of dollars, require minutes to hours to acquire images, occupy dozens of feet of floor space, and/or only be available at hospitals, outpatient centers, or research facilities.
[0006] As a result of this focus on acquiring high-quality images driven by large, stationary imaging systems, such imaging systems tend to be immobile and located at hospital or research facilities. As the infrastructure to support such large imaging systems is extensive, and the capital cost of such systems is high, imaging systems are spatially sparse, that is, there is a low density of imaging systems relative to either population or land mass. In addition, this focus on generating high quality images results in the collection of imaging data from patients tending to occur infrequently, such as only upon visits to doctors. Thus, overall imaging data for a given patient tends to be spatially and temporally sparse.
[0007] A need exists for improved technology, namely, for affordable, faster, more frequent, higher-quality, and/or physically easily-accessible imaging systems that can be applied to a wide range of contexts for imaging patients. More accessible imaging would also be of value for preventative maintenance of devices and monitoring of the function of other complex systems over time.
[0008] Sensing systems are used to acquire data relating to a patient. Various modes of sensing exist, and sensing systems may include chemical sensors, sensors of physical properties such as pressure, temperature, impedance, or mechanical strain, and sensors of various other properties that may characterize bodies. Some examples include, but are not limited to, bioimpedance sensors, skin conductance sensors, electrocardiograms (EKGs), electromyograms (EMGs), electroencephalograms (EEGs), radar sensors, near infrared (NIR) sensors, and accelerometers. As compared with imaging systems, which may be composed of arrays of carefully-coordinated sensors, individual sensor systems do not generally probe the spatial organization of bodies or systems with high resolution.
[0009] Therefore, generally, imaging data from high-resolution imaging systems and sensing data have different characteristics, and are difficult to combine into a characterization of health over time. SUMMARY
[0010] At least one embodiment relates to a system for processing data. The system includes a first monitoring device configured to generate a first data set associated with a patient, a database having historical data corresponding to previous patient data sets, and one or more processors operatively coupled to the first monitoring device. The one or more processors is configured to receive the first data set from the first monitoring device, generate, by an artificial intelligence algorithm using the first data set, a first representation, process, by the artificial intelligence algorithm using the historical data, the first data set to define a first processed data set, and generate, by the artificial intelligence algorithm, a first output based on at least one of the first representation and the first processed data.
[0011] At least one embodiment relates to a non-transitory processor-readable medium storing code representing instructions to be executed by one or more processors. The instructions include code to cause the one or more processors to receive, from a first monitoring device configured to generate a first data set associated with a patient, the first data set, generate, by an artificial intelligence algorithm using the first data set, a first representation, process, by the artificial intelligence algorithm using historical data from a database, the first data set to define a first processed data set, the historical data corresponding to previous patient data sets, and generate, by the artificial intelligence algorithm, a first output based on at least one of the first representation and the first processed data.
[0012] At least one embodiment relates to a method that includes receiving, from a first monitoring device configured to generate a first data set associated with a patient, the first data set, and generating, by an artificial intelligence algorithm using the first data set, a first representation. The method further includes processing, by the artificial intelligence algorithm using historical data, the first data set to define a first processed data set, and generating, by the artificial intelligence algorithm, a first output based on at least one of the first representation and the first processed data. This summary is illustrative only and is not intended to be in any way limiting. BRIEF DESCRIPTION OF THE DRAWINGS
[0013] A clear conception of the advantages and features constituting the present disclosure, and of the construction and operation of typical mechanisms provided with the present disclosure, will become more readily apparent by referring to the exemplary, and therefore non-limiting, embodiments illustrated in the drawings accompanying and forming a part of this specification, wherein like reference numerals designate the same elements in the several views, and in which:
[0014] FIG. 1 is a schematic illustration of one embodiment of a Type 1 imaging system.
[0015] FIG. 2 is a schematic illustration of one embodiment of a Type 2 imaging system.
[0016] FIG. 3 is a schematic illustration of one embodiment of a Type 3 sensing system.
[0017] FIG. 4 is a method for processing monitoring device data, according to an embodiment.
[0018] FIG. 5 is a schematic illustration of one embodiment of a health diagnostic system configured to communicate with a Type 1, Type 2 and/or Type 3 imaging/sensing system.
[0019] The foregoing and other features of the present disclosure will become apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. Understanding that these drawings depict only several embodiments in accordance with the disclosure and are, therefore, not to be considered limiting of its scope, the disclosure will be described with additional specificity and detail through use of the accompanying drawings.
DETAILED DESCRIPTION
[0020] In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the figures, can be arranged, substituted, combined, and designed in a wide variety of different configurations, all of which are explicitly contemplated and made part of this disclosure.
[0021] Referring to the figures in general, the present disclosure relates to an imaging and/or sensing system and methods for detecting disease or changes in health. The imaging and/or sensing systems is configured to interact with a patient to generate a patient data set for the sensing system, where the generated patient data set may include a temporal component, for example, an MRI data set generated at a particular visit to a hospital. The system includes imaging and/or sensing devices, artificial intelligences applied to images and the sets of data acquired from the imaging and/or sensing devices, and representations and outputs resulting from applying the artificial intelligence to the images and the sets of data.
[0022] Referring to FIG. 1, an imaging system 100 is depicted according to an example embodiment. The imaging system 100 is a Type 1 imaging system that provides improved imaging in traditional radiology settings. Traditional radiology settings may include hospitals, outpatient imaging facilities, and image-aware physicians’ offices. The imaging system 100 includes an imaging device 110, historical data 112, a representation 114, an image 120, a set of data 122, an artificial intelligence 130, and an output 140. The imaging system 100 is configured to use the artificial intelligence 130 to generate the representation 114 and the output 140, the output 140 providing an image-informed early warning of changes in the health of a patient. The imaging system 100 may be used repeatedly over time for an individual patient.
[0023] The imaging device 110 is a device that is configured to acquire images of a patient. In some examples, the imaging device 110 may be an MRI, CT, PET, Nuclear Imaging, ultrasound, X-ray machine, and/or a machine associated with another imaging modality. The image 120 is an image of a patient acquired via use of the imaging device 110. The set of data 122 may be information related to the image 120 or a separate image that shares properties with the image 120. The set of data 122 may be acquired by the imaging device 110 or a separate imaging device 110. [0024] The imaging system 100 further includes historical data 112. The historical data 112 includes previous information acquired by and/or created by the imaging system 100. For example, the historical data 112 may include previous images, sets of data, representations, and/or outputs. The artificial intelligence 130 is configured to use the historical data 112 to generate the representation 114, inform the acquisition of new images and/or sets of data, and to allow for the detection of changes between images, sets of data, representations, and/or sub-combinations thereof over time.
[0025] The representation 114 is a model of an individual patient’s baseline state of health. The representation 114 can take the form of a feature vector derived using the artificial intelligence 130, and generated from the image 120, and/or the set of data 122. The artificial intelligence 130 distills the image 120 and/or the set of data 122 into the representation 114. The representation 114 may be updated by the imaging system 100 when the patient undergoes a new imaging exam and a new image 120 and/or set of data 122 is acquired. An individual representation, or a cumulative set of representations generated over time, may be used to assess risk, characterize change, improve image quality, and/or establish trajectories of health for a patient over time.
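As a toy illustration of how an image might be distilled into a compact feature vector, the sketch below average-pools a small 2D image into block means. The pooling scheme is purely a placeholder: the disclosure contemplates a learned encoder (the artificial intelligence 130), not fixed pooling.

```python
def distill(image, block=2):
    """Toy stand-in for deriving representation 114: average-pool a 2D
    image (a list of rows) into a flat feature vector of block means."""
    rows, cols = len(image), len(image[0])
    features = []
    for r in range(0, rows, block):
        for c in range(0, cols, block):
            # Average all pixels in one block, clipping at the image edge.
            patch = [image[i][j]
                     for i in range(r, min(r + block, rows))
                     for j in range(c, min(c + block, cols))]
            features.append(sum(patch) / len(patch))
    return features
```

A 4×4 image thus collapses to a 4-element vector, illustrating the dimensionality reduction a representation provides, though a trained network would learn far more discriminative features.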
[0026] Multiple representations for an individual subject that vary over time are included in the historical data 112. As the historical data 112 allows for the usage of and comparisons to previous images, sets of data, and/or representations, updating a representation 114 with information from a new image 120 allows for the imaging system 100 to acquire a smaller quantity of time-consuming imaging data that makes up the image 120 than would otherwise be needed to generate a representation in the absence of previous images from the historical data 112. The representations of the patient in the imaging system 100 may also inform how the imaging system 100 is configured to acquire new images 120 and/or sets of data 122, and/or how the imaging system 100 is configured to generate new outputs 140.
[0027] The artificial intelligence 130 is configured to receive the image 120 and/or the set of data 122 to generate the representation 114 and output 140. The artificial intelligence 130 may be stored as part of the imaging device 110 or on a separate computing device. The artificial intelligence 130 may be trained on the historical data 112 including previous images, sets of data, representations, and/or outputs from the imaging system 100. [0028] The artificial intelligence 130 may use various machine learning models to generate the representation 114 and output 140. In some embodiments, the machine learning model includes representation learning to reduce high-dimensional data to low-dimensional data, making it easier to discover patterns and anomalies. In some embodiments, representation learning is supervised. In some embodiments, representation learning is self-supervised. In some embodiments, the artificial intelligence 130 may use a convolutional neural network structure. In some embodiments, the artificial intelligence 130 may use an autoencoder network structure. In some embodiments, the artificial intelligence 130 may use a Siamese neural network structure.
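The Siamese structure mentioned above can be illustrated schematically: the same encoder (shared weights) processes both inputs, and similarity is judged between the resulting embeddings rather than between raw data. The linear "encoder" below is a hypothetical placeholder for a trained network branch.

```python
def make_encoder(weights):
    """Return a toy linear encoder; `weights` (a list of rows) stands in
    for the trained parameters shared by both Siamese branches."""
    def encode(x):
        return [sum(w * xi for w, xi in zip(row, x)) for row in weights]
    return encode

def siamese_distance(encode, x1, x2):
    """Apply the SAME encoder to both inputs and return the Euclidean
    distance between their embeddings; small distances mean similar inputs."""
    e1, e2 = encode(x1), encode(x2)
    return sum((a - b) ** 2 for a, b in zip(e1, e2)) ** 0.5
```

The weight-sharing is the design point: because one set of parameters embeds both a baseline exam and a new exam, the distance reflects change in the patient rather than differences between two separately-trained models.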
[0029] The artificial intelligence 130 provides the output 140. The output 140 may include automated image classification or interpretation, an image acquired with increased speed, and/or improved imaging quality, among other outputs 140. The output 140 may then be provided back to the historical data 112 of the imaging system 100.
[0030] In some embodiments, the artificial intelligence 130 uses the representation 114 and previous representations in historical data 112 to generate a configuration for data acquisition for the imaging device 110 as an output 140. For example, a neural network can be used to learn which data points are most important, and these data points can be used to choose what types of data to acquire.
[0031] The output 140 may be an assessment of change, in which the artificial intelligence 130 assesses a new image 120, set of data 122, and/or representation 114 in comparison to previous images, sets of data, and representations from the historical data 112 to flag changes in individual subjects and/or to identify early trajectories toward known diseases or development of risk indicators. In some examples, known diseases may include cancer or a neurodegenerative disease. This allows for automatic assessment of change over time, and detection of subtle shifts that may not be discernible by a radiologist.
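The change-assessment idea can be sketched simply, assuming representations are plain feature vectors: a new representation is compared against the average of the patient's prior representations, and exams drifting past a threshold are flagged. The Euclidean distance, the fixed threshold, and the function names are illustrative assumptions; a deployed system would learn its comparison from data.

```python
import numpy as np

def change_score(new_rep, baseline_reps):
    """Distance of a new representation from the patient's historical baseline."""
    baseline = np.mean(np.asarray(baseline_reps, dtype=float), axis=0)
    return float(np.linalg.norm(np.asarray(new_rep, dtype=float) - baseline))

def flag_change(new_rep, baseline_reps, threshold=1.0):
    """Flag the exam for review when the representation has drifted too far."""
    return change_score(new_rep, baseline_reps) > threshold

baseline = [[0.0, 0.0], [0.1, -0.1]]  # feature vectors from prior visits
stable = flag_change([0.05, -0.05], baseline)   # near baseline: not flagged
drifted = flag_change([3.0, 3.0], baseline)     # far from baseline: flagged
```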
[0032] In another example, the artificial intelligence 130 may be used to compare an image 120 to previous images from the historical data 112 to generate an output 140 of a change map. A change map shows the differences between previous images from the historical data 112 and the image 120 that is in the process of being acquired by the imaging device 110. In some examples, if the change map shows no evidence of changes between the previous images and image 120 that is in the process of being acquired by the imaging device 110, then the acquisition of image 120 that is in the process of being acquired by the imaging device 110 may be stopped in a short time. In some examples, if the change map shows evidence of differences between the previous images and the image 120 that is in the process of being acquired by the imaging device 110, then the imaging device 110 may continue with the acquisition of the image 120. In some examples, as a patient is imaged multiple times by an imaging system 100, multiple images are stored in the historical data 112, and the imaging system 100 may, for the creation of the output 140 of the change map, use any of the previous images from the historical data 112 to compare to the image 120 that is in the process of being acquired by the imaging device 110. Therefore, the artificial intelligence 130 allows for the imaging device 110 to gather only what is needed, reducing scanning time and computing resources. Change maps may also be used to assess the evolution of disease or response to therapy.
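A toy version of the change-map logic above: subtract a prior image from the one being acquired, suppress differences below a noise floor, and stop acquisition early if essentially nothing has changed. The noise floor, the changed-fraction criterion, and the function names are assumptions for illustration only.

```python
import numpy as np

def change_map(prior_image, current_image, noise_floor=0.1):
    """Voxelwise difference between a prior image and the image being acquired."""
    diff = np.abs(np.asarray(current_image, dtype=float)
                  - np.asarray(prior_image, dtype=float))
    diff[diff < noise_floor] = 0.0  # suppress sub-noise-floor differences
    return diff

def should_stop_early(cmap, changed_fraction=0.01):
    """Stop acquisition early when (almost) no voxels show change."""
    return float(np.count_nonzero(cmap)) / cmap.size < changed_fraction

prior = np.zeros((4, 4))
noisy = prior + 0.05                 # measurement noise only, no real change
lesion = prior.copy()
lesion[0, 0] = 1.0                   # a new feature has appeared
stop_a = should_stop_early(change_map(prior, noisy))   # no change: stop early
stop_b = should_stop_early(change_map(prior, lesion))  # change: keep scanning
```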
[0033] In another example, the artificial intelligence 130 may be used to generate an image with improved image quality as the output 140. To generate an improved image, the artificial intelligence 130 may use the historical data 112 and/or representation 114 to restore images that may be degraded, or of a lower initial quality, from less expensive and low-performing imaging systems to a higher quality expected from a more expensive, high-performance system. The artificial intelligence 130 may incorporate representations to fill in missing information, providing an improved image.
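The "fill in missing information" principle can be illustrated with a deliberately crude sketch: unsampled pixels of a degraded image are filled from the patient's prior image. A real system would use a learned restoration network rather than direct substitution; the mask convention and function name are assumptions.

```python
import numpy as np

def restore_with_prior(degraded, sampled_mask, prior):
    """Fill unsampled pixels of a degraded image from the patient's prior image.

    `sampled_mask` is True where the new acquisition actually measured data;
    everywhere else, prior information stands in for the missing pixels.
    """
    out = np.asarray(degraded, dtype=float).copy()
    missing = ~np.asarray(sampled_mask, dtype=bool)
    out[missing] = np.asarray(prior, dtype=float)[missing]
    return out

degraded = np.array([[5.0, 0.0], [0.0, 5.0]])
mask = np.array([[True, False], [False, True]])
prior = np.full((2, 2), 9.0)
restored = restore_with_prior(degraded, mask, prior)
```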
[0034] In a specific disease example of how an imaging system may be used, the imaging system 100 is tailored for an output 140 of an early detection of pancreatic cancer. Pancreatic cancer is an illness that is often detected too late in the progression of the illness for therapy to be effective. In the imaging system 100, the previous images from historical data 112 may be used to tailor acquisition by the imaging device 110 of new images 120 and sets of data 122 to hunt for subtle changes in pancreas features indicative of encroaching cancer. The artificial intelligence 130 may be configured to have been trained to identify changes in pancreas features between image 120 and a previous image from the historical data 112.

[0035] Referring to FIG. 2, an imaging system 200 is depicted according to an example embodiment. The imaging system 200 is a Type 2 imaging system that enables rapid screening of patients in non-specialty settings. Non-specialty settings may include nontraditional settings for advanced imaging, such as commercial settings like pharmacies or supermarkets, places of work, personal homes, or primary care physicians' offices. A Type 2 imaging system is trained on high-end images from a Type 1 imaging system and provides outputs 240 of targeted answers for non-experts. The imaging system 200 includes an imaging device 210, historical data 212, a representation 214, an image 220, a set of data 222, an artificial intelligence 230, data from an imaging system 100, and an output 240. The imaging system 200 is configured to use the artificial intelligence 230 to provide an image-informed early warning of changes in the health of a patient. The imaging system 200 may be used repeatedly over time for an individual patient.
[0036] The imaging system 200 includes an imaging device 210 that is a stripped-down lower-performance scanner, such as a point-of-care scanner and/or a cheaper, more accessible medical imaging device. In some embodiments, the imaging device 210 is a low-field MRI machine, portable CT machine, and/or handheld ultrasound device. The imaging device 210 is a device that is configured to acquire images of a patient to be used in the imaging system 200. The image 220 is an image of a patient acquired via the imaging device 210.
[0037] The image 220, the set of data 222, the historical data 212, the artificial intelligence 230, the representation 214, and the output 240 of imaging system 200 are functionally and/or structurally similar to their respective components of imaging system 100 of FIG.1.
[0038] In some examples, the image 220 may be an image acquired via MRI, CT, PET, Nuclear Imaging, ultrasound, and/or X-ray imaging, among other imaging modalities. A set of data 222 may be information related to an image 220 or may be another, separate image that shares the properties of an image 220. The set of data 222 may be acquired by a device that is separate from the scanner 210 or the device may be the scanner 210 itself.
[0039] The imaging system 200 includes historical data 212. The historical data 212 includes previous information acquired by and/or created by the imaging system 200, such as by another Type 2 imaging system, and/or by a Type 1 imaging system 100, or Type 3 sensing system 300 (discussed further in reference in FIG.3). For example, the historical data 212 may include previous images, sets of data, representations, and/or outputs. The artificial intelligence 230 is configured to use the historical data 212 to generate the representation 214, inform the acquisition of new images and/or set of data, and to allow for the detection of changes between images, sets of data, representations, and/or sub-combinations thereof over time.
[0040] The representation 214 is a model of an individual patient’s baseline state of health. The representation 214 is a feature vector derived using the artificial intelligence 230, and generated from the image 220, and/or the set of data 222. The artificial intelligence 230 distills the image 220 and/or the set of data 222 into the representation 214. The representation 214 may be updated by the imaging system 200 when the patient undergoes a new imaging exam and a new image 220 and/or set of data 222 is acquired. An individual representation, or a cumulative set of representations generated over time may be used to assess risk, characterize change, improve image quality, and/or establish trajectories of health for a patient over time.
[0041] Multiple representations for an individual subject that vary over time are included in the historical data 212. As the historical data 212 allows for the usage of and comparisons to previous images, sets of data, and/or representations, updating a representation 214 with information from a new image 220 allows for the imaging system 200 to acquire a smaller quantity of time-consuming imaging data that makes up an image 220 than would otherwise be needed to generate a representation 214 in the absence of previous images from the historical data 212. The representations of the patient in the imaging system 200 may inform how the imaging system 200 is configured to acquire new images 220 and/or sets of data 222, and/or how the imaging system 200 is configured to generate new outputs 240.
[0042] The artificial intelligence 230 is configured to receive the image 220 or the set of data 222 to generate the representation 214 and output 240. The artificial intelligence is further configured to be trained on historical data 212, including images, sets of data, representations, and/or outputs from imaging system 200, as well as historical data 112 from imaging system 100. The artificial intelligence 230 may be stored as part of the imaging device 210 or on a separate computing device.

[0043] The artificial intelligence 230 may use various machine learning models to generate the representation 214 and output 240. In some embodiments, the machine learning model includes representation learning to reduce high-dimensional data to low-dimensional data, making it easier to discover patterns and anomalies. In some embodiments, representation learning is supervised, such as in a convolutional neural network. In some embodiments, representation learning is unsupervised, such as in an autoencoder network. In some embodiments, the artificial intelligence 230 may use a Siamese neural network structure. The artificial intelligence 230 provides the output 240. The output may include automated image interpretation, increased imaging speed, and/or improved imaging quality, among other outputs 240. The output 240 may then be provided back to the historical data 212 of the imaging system 200.
[0044] The output 240 may be change detection, in which the artificial intelligence 230 assesses the new image 220, the set of data 222, and/or the representation 214 in comparison to previous images, sets of data, and representations from the historical data 212 to flag changes in individual subjects and/or to identify early trajectories toward known diseases or development of risk indicators. In some examples, known diseases may include cancer or a neurodegenerative disease. This allows for automatic assessment of change over time, and detection of subtle shifts that may not be discernible by a radiologist.
[0045] In another example, the artificial intelligence 230 may be used to compare an image 220 to previous images from the historical data 212 to generate an output 240 of a change map. A change map shows the differences between previous images from the historical data 212 and the image 220 that is in the process of being acquired by the imaging device 210. In some examples, if the change map shows no evidence of changes between the previous images and image 220 that is in the process of being acquired by the imaging device 210, then the acquisition of image 220 that is in the process of being acquired by the imaging device 210 may be stopped in a short time. In some examples, if the change map shows evidence of differences between the previous images and the image 220 that is in the process of being acquired by the imaging device 210, then the imaging device 210 may continue with the acquisition of the image 220. In some examples, as a patient is imaged multiple times by an imaging system 200, multiple images are stored in the historical data 212, and the imaging system 200 may, for the creation of the output 240 of the change map, use any of the previous images from the historical data 212 to compare to the image 220 that is in the process of being acquired by the imaging device 210. Therefore, the artificial intelligence 230 allows for the imaging device 210 to gather only what is needed, reducing scanning time and computing resources. Change maps may also be used to assess the evolution of disease or response to therapy.
[0046] In another example, the artificial intelligence 230 may generate an output 240 of an image with improved image quality. To generate an improved image, the artificial intelligence 230 may use the historical data 212 and/or representations 214 to restore images 220 that may be degraded to the higher quality expected from the expensive, high-performance machines used in Type 1 imaging systems.
[0047] In another example, the artificial intelligence 230 may generate an output 240 of a clinical answer. The artificial intelligence 230 may be configured to have had end-to-end training of a single neural network to allow the imaging system 200 to go directly from raw data of an image 220 and/or a set of data 222 to an output 240 that is a clinical answer rather than an image. The clinical answer is an indicator of a concerning change and/or an index of suspicion. For example, a clinical answer may be a yes-or-no answer to indicate the presence or absence of prostate cancer when using the imaging device 210 for prostate imaging. In another example, a clinical answer may be a yes-or-no answer to indicate the presence or absence of cerebral bleeds when using the imaging device 210 for cerebral imaging. In some examples, in the event of a positive (e.g., “yes”) clinical answer, a healthcare provider may then direct the patient toward a specialist healthcare provider for Type 1 imaging from an imaging system 100. Therefore, non-imaging experts can provide effective, and more routine screening for diseases. The use of Type 2 imaging to apply artificial intelligence 230 to provide outputs 240 of clinical answers in place of images may facilitate the population-level screening of patients for the risk of known diseases such as prostate cancer or breast cancer, among others, with inexpensive imaging devices.
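The raw-data-to-clinical-answer idea can be sketched with a single logistic unit standing in for the end-to-end neural network described above: raw acquisition data is mapped directly to a yes/no answer plus an index of suspicion, with no intermediate image. The weights, bias, and threshold here are illustrative placeholders; in the described system they would come from end-to-end training against Type 1 ground truth.

```python
import numpy as np

def clinical_answer(raw_data, weights, bias=0.0, threshold=0.5):
    """Map raw acquisition data directly to a yes/no answer and a suspicion score.

    A single logistic unit stands in for the trained end-to-end network.
    """
    score = 1.0 / (1.0 + np.exp(-(np.dot(raw_data, weights) + bias)))
    return ("yes" if score > threshold else "no", float(score))

# Hypothetical trained weights for a two-feature acquisition.
w = [2.0, -1.0]
answer_pos, score_pos = clinical_answer([3.0, 1.0], w)   # suspicious exam
answer_neg, score_neg = clinical_answer([-3.0, 1.0], w)  # unremarkable exam
```

A positive answer would then trigger referral for Type 1 imaging, as described above.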
[0048] Referring to FIG. 3, a sensing system 300 is depicted according to an example embodiment. A sensing system 300 is a Type 3 imaging system that involves continuous health monitoring at the point of care, at work, and/or at home. A Type 3 sensing system 300 may be a stand-alone wearable and/or environmental sensor array and may be trained on information from Type 1 and/or Type 2 imaging systems and may then provide an early warning of changes. A Type 3 imaging system uses models of sensor data that are trained in the imaging settings of Type 1 and/or Type 2 imaging systems and correlated with the imaging results of Type 1 and/or Type 2 imaging systems, with the Type 3 imaging system standing in place of advanced imaging for the purpose of early warning of disease. The sensing system 300 includes a sensing device 310, a set of data 322, an artificial intelligence 330, data from an imaging system 100 and/or an imaging system 200, and an output 340. The sensing system 300 may further include historical data 312 and/or a representation 314. The sensing system 300 is configured to use the artificial intelligence 330 to provide an image-informed early warning of changes in the health of a patient. The sensing system 300 may be used repeatedly over time for an individual patient.
[0049] The sensing system 300 includes a sensing device 310 that is a sensor and/or an array of sensors that is configured to acquire a set of data 322 of a patient. In some examples, the device 310 may be a sensor for ultrasound, bioimpedance, electrocardiogram (EKG), electromyography (EMG), electroencephalography (EEG), radiofrequency (RF) pilot tone, ultra-wide band (UWB) radar, or near infrared (NIR), among other penetrating sensor modalities. In some examples, the device 310 may be an accelerometer, optical camera, three-dimensional/time-of-flight (3D/TOF) camera, or skin conductance sensor, among other sensor modalities.
[0050] The sensors may be wearable, and monitor health continuously or at regular intervals. In some embodiments, the sensors may be incorporated into clothing or attached to a body. For example, a sensing system 300 may have a sensing device 310 that is a smart underwear sensor for an output 340 of a detection of prostate cancer, a smart bra sensor for an output 340 of a detection of a breast cancer, or another wearable or environmental sensor to monitor health states as an output 340.
[0051] The set of data 322, historical data 312, the representation 314, the artificial intelligence 330, and the output 340 of sensing system 300 are functionally and/or structurally similar to their respective components of imaging system 100 of FIG. 1 and imaging system 200 of FIG. 2.

[0052] The sensing system 300 may include historical data 312. The historical data 312 includes previous information acquired by and/or created by the sensing system 300, and/or by a Type 1 imaging system 100 and/or by a Type 2 imaging system 200. For example, the historical data 312 may include prior sensing device 310 generated sets of data, representations, and outputs, or prior sets of data, representations, or outputs from other sensing or imaging systems. The artificial intelligence 330 is configured to use the historical data 312 to generate the representation 314, inform the acquisition of new sets of data, and to allow for the detection of changes between images, sets of data, representations, and/or subcombinations thereof over time.
[0053] The representation 314 is a model of an individual patient’s baseline state of health. The representation 314 is a feature vector derived using the artificial intelligence 330, and generated from the set of data 322. The artificial intelligence 330 distills the set of data 322 into the representation 314. The representation 314 may be updated by the sensing system 300 when a patient’s sensing device acquires a new set of data 322. An individual representation, or a cumulative set of representations generated over time may be used to assess risk, characterize change, improve image quality, and/or establish trajectories of health for a patient over time.
[0054] The artificial intelligence 330 is configured to receive the set of data 322 to generate the representation 314 and output 340. The artificial intelligence 330 is further configured to be trained on historical data 312, including images, sets of data, representations, and/or outputs from sensing system 300, as well as historical data 112 from imaging system 100 and historical data 212 from imaging system 200. The artificial intelligence 330 may be stored as part of the sensing device 310 or on a separate computing device.
[0055] The artificial intelligence 330 may use various machine learning models to generate the representation 314 and output 340. In some embodiments, the machine learning model includes representation learning to reduce high-dimensional data to low-dimensional data, making it easier to discover patterns and anomalies. In some embodiments, representation learning is supervised, such as in a convolutional neural network. In some embodiments, representation learning is unsupervised, such as in an autoencoder network. In some embodiments, the artificial intelligence 330 may use a Siamese neural network structure. The artificial intelligence 330 provides the output 340, and the artificial intelligence 330 may then provide the output 340 back to the historical data 312 of the sensing system 300 such that the sensing system 300 may use the historical data 312 to inform the acquisition of future sets of data 322 and/or to inform the future operation of an artificial intelligence 330.
[0056] The sensing system 300 provides indirect tomography through sensing device 310, trained through artificial intelligence 330 with high-end and/or low-end imaging machines from imaging systems 100 and/or imaging systems 200, to provide outputs 340 of spatially resolved information about changes in tissue anatomy or function and/or early warnings of concerning changes in representations of patients’ states of health.
[0057] For example, the artificial intelligence 330 may be used for an output 340 of change detection, in which representations 314 are used to flag changes in individual subjects and/or to identify early trajectories toward known diseases or development of risk indicators. In change detection, the artificial intelligence 330 may compare a new set of data 322 to previous sets of data from the historical data 312 and report an output 340 of whether any differences have been determined between the new set of data 322 and the previous sets of data from the historical data 312. Similarly, the artificial intelligence 330 may compare a new representation 314 to a previous representation from the historical data 312 and report an output 340 of whether any differences have been determined between the new representation 314 and the previous representation from the historical data 312.
[0058] In another example, the artificial intelligence 330 may be used to compare the set of data 322 to previous sets of data from historical data 312 to generate an output 340 of a change map. A change map shows the differences between the set of data 322 and previous sets of data. In some examples, as the sensing device 310 acquires multiple sets of data for a patient in a sensing system 300, multiple sets of data are stored in the historical data 312 and the sensing system 300 may, for the creation of the output 340 of the change map, use any of the previously acquired sets of data from the historical data 312 to compare to the set of data 322 that is in the process of being acquired by the sensing device 310. Change maps may also be used to assess the evolution of disease or response to therapy.

[0059] The artificial intelligence 330 may be configured to have had end-to-end training of a single neural network to allow the sensing system 300 to go directly from raw data of a set of data 322 to an output 340 that is a clinical answer rather than a set of data. The clinical answer is an indicator of a concerning change and/or an index of suspicion. In some examples, in the event of a positive (e.g., “yes”) clinical answer, the sensing system 300 could then direct the patient toward a specialist healthcare provider for Type 1 imaging from an imaging system 100 and/or toward a non-specialist healthcare provider for Type 2 imaging from an imaging system 200. The use of Type 3 imaging to apply artificial intelligence 330 to provide outputs 340 of clinical answers facilitates the continuous health monitoring of patients at the point of care, at work, and/or at home, with inexpensive sensors of device 310.
[0060] In another example, the sensing system 300 includes a device 310 that is an article of smart clothing with wearable sensors that are configured to detect changes from a patient’s baseline health or to provide a warning sign of a disease. For example, the device 310 may be smart underwear, which is underwear that includes sensors configured to generate sets of data 322 continuously. The smart underwear may use the historical data 312 and/or representations that are derived from Type 1 imaging systems 100 and/or Type 2 imaging systems 200 to provide an output 340 that is a clinical answer on the presence or absence of an early stage of prostate cancer in the patient. In another example, the device 310 may be a smart bra, which is a bra that includes sensors configured to generate sets of data 322 continuously. The smart bra may use the historical data 312 and/or representations that are derived from Type 1 imaging systems 100 and/or Type 2 imaging systems 200 to provide an output 340 that is a clinical answer on the presence or absence of an early stage of breast cancer in the patient. In another example, the device 310 may be a smart hat, which is a hat that includes sensors configured to generate sets of data 322 continuously. The smart hat may use historical data 312 and/or representations that are derived from Type 1 imaging systems 100 and/or Type 2 imaging systems 200 to provide an output 340 that is a clinical answer of an assessment of the brain health of the patient.
[0061] FIG. 4 is a method 400 for processing monitoring device data, according to an embodiment. The method 400 can be executed by at least one of the imaging system 100 of FIG. 1, the imaging system 200 of FIG. 2, and/or the sensing system 300 of FIG. 3. The method 400 includes receiving, from a first monitoring device configured to generate the first data set associated with a patient, the first data set at 401, generating, by an artificial intelligence algorithm using the first data set, a first representation at 402, processing, by the artificial intelligence algorithm using historical data, the first data set to define a first processed data set at 403, the historical data corresponding to previous patient data sets, and generating, by the artificial intelligence algorithm, a first output based on at least one of the first representation and the first processed data set at 404.
[0062] At 401, the method 400 includes receiving, from a first monitoring device configured to generate a first data set associated with a patient, the first data set. In some embodiments, the first monitoring device is an imaging device, such as the imaging device 110 of FIG. 1. In some embodiments, the first data set includes a set of data and an image.
[0063] At 402, the method 400 includes generating, by an artificial intelligence algorithm using the first data set, a first representation. In some embodiments, the artificial intelligence algorithm is functionally and/or structurally similar to the artificial intelligence 130 of FIG. 1. In some embodiments, the artificial intelligence algorithm includes representation learning. In some embodiments, the representation learning includes at least one of a convolutional neural network, a Siamese neural network, or an autoencoder network.
[0064] At 403, the method 400 includes processing, by the artificial intelligence algorithm using the historical data, the first data set to define a first processed data set. The historical data corresponds to previous patient data sets, which may include images, sets of data, representations, and/or outputs. At 404, the method 400 includes generating a first output based on at least one of the first representation and the first processed data set. In some embodiments, the output can include at least one of a clinical answer or a change map.
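The four steps of method 400 can be sketched as a small pipeline. The `StubAI` class below is a deliberately simplistic stand-in for the artificial intelligence algorithm (its mean/spread "representation", baseline-difference "processing", and threshold "output" are all illustrative assumptions); the pipeline function only shows how steps 401 through 404 chain together.

```python
import statistics

class StubAI:
    """Minimal stand-in for the artificial intelligence algorithm of FIG. 4."""

    def represent(self, data_set):
        # Step 402: a crude "representation" -- mean and spread of the data.
        return (statistics.mean(data_set), statistics.pstdev(data_set))

    def process(self, data_set, historical_data):
        # Step 403: point-by-point difference from the historical baseline.
        baseline = [statistics.mean(col) for col in zip(*historical_data)]
        return [x - b for x, b in zip(data_set, baseline)]

    def output(self, representation, processed):
        # Step 404: flag the exam when any processed difference is large.
        return "flag" if max(abs(d) for d in processed) > 1.0 else "stable"

def process_monitoring_data(first_data_set, historical_data, ai):
    # Step 401 is the receipt of first_data_set from the monitoring device.
    representation = ai.represent(first_data_set)            # step 402
    processed = ai.process(first_data_set, historical_data)  # step 403
    output = ai.output(representation, processed)            # step 404
    return representation, processed, output

history = [[1.0, 2.0, 3.0], [1.0, 2.0, 3.0]]
_, proc_a, out_a = process_monitoring_data([1.0, 2.0, 3.0], history, StubAI())
_, proc_b, out_b = process_monitoring_data([5.0, 2.0, 3.0], history, StubAI())
```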
[0065] In some embodiments, the method 400 includes repeating 401, 402, and 403, with a second monitoring device (e.g., the imaging device 210), as described in reference to FIG. 2. The resulting second data set, second historical data, and second output are then utilized by the artificial intelligence. In some embodiments, the method 400 includes additionally repeating 401, 402, and 403 with a third monitoring device (e.g., the sensing device 310), as described in reference to FIG. 3. The resulting third data set, third historical data, and third output are then utilized by the artificial intelligence.
[0066] FIG. 5 illustrates an embodiment of a health diagnostic system wherein a Type 1, Type 2, and/or Type 3 system may provide a data set to a first artificial intelligence 530. The artificial intelligence 530 generates a representation of that data set corresponding to the one (or more) data sets from the respective imaging or sensing device at that respective time, such as a particular visit to a hospital. The representation 514 is analyzed by a second artificial intelligence 535, which may be different than the first artificial intelligence 530, such as a different algorithm or an entirely different neural network. The analysis of the representation 514 may utilize historical data from the Type 1, Type 2, and/or Type 3 systems to generate a new output 540. The output 540 may be an indication of a health diagnosis, such as but not limited to an indication of a risk of a condition and/or the change in the risk of a condition.
[0067] It should be noted that the term “exemplary” and variations thereof, as used herein to describe various embodiments, are intended to indicate that such embodiments are possible examples, representations, or illustrations of possible embodiments (and such terms are not intended to connote that such embodiments are necessarily extraordinary or superlative examples).
[0068] The term “or,” as used herein, is used in its inclusive sense (and not in its exclusive sense) so that when used to connect a list of elements, the term “or” means one, some, or all of the elements in the list. Conjunctive language such as the phrase “at least one of X, Y, and Z,” unless specifically stated otherwise, is understood to convey that an element may be either X, Y, Z; X and Y; X and Z; Y and Z; or X, Y, and Z (i.e., any combination of X, Y, and Z). Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of X, at least one of Y, and at least one of Z to each be present, unless otherwise indicated.
[0069] The hardware and data processing components used to implement the various processes, operations, illustrative logics, logical blocks, modules and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general purpose single- or multi-chip processor, a graphical processing unit (GPU), a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, or, any conventional processor, controller, microcontroller, or state machine. A processor also may be implemented as a combination of computing devices, such as a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. In some embodiments, particular processes and methods may be performed by circuitry that is specific to a given function. The memory (e.g., memory, memory unit, storage device) may include one or more devices (e.g., RAM, ROM, Flash memory, hard disk storage) for storing data and/or computer code for completing or facilitating the various processes, layers and modules described in the present disclosure. The memory may be or include volatile memory or non-volatile memory, and may include database components, object code components, script components, or any other type of information structure for supporting the various activities and information structures described in the present disclosure. According to an exemplary embodiment, the memory is communicably connected to the processor via a processing circuit and includes computer code for executing (e.g., by the processing circuit or the processor) the one or more processes described herein.
[0070] The present disclosure contemplates systems on any machine-readable media for accomplishing various operations. The embodiments of the present disclosure may be implemented using existing computer processors, or by a special purpose computer processor for an appropriate system, incorporated for this or another purpose, or by a hardwired system. Embodiments within the scope of the present disclosure include program products comprising machine-readable media for carrying or having machine-executable instructions or data structures stored thereon. Such machine-readable media can be any available media that can be accessed by a general purpose or special purpose computer or other machine with a processor. By way of example, such machine-readable media can comprise RAM, ROM, erasable programmable read-only memory (EPROM), electronically erasable programmable read-only memory (EEPROM), or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code in the form of machine-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer or other machine with a processor. Combinations of the above are also included within the scope of machine-readable media. Machine-executable instructions include, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing machines to perform a certain function or group of functions.
[0071] It is important to note that the construction and arrangement of the imaging systems as shown in the various exemplary embodiments is illustrative only. Additionally, any element disclosed in one embodiment may be incorporated or utilized with any other embodiment disclosed herein.

Claims

WHAT IS CLAIMED IS:
1. A system for processing data, the system comprising: a first monitoring device configured to generate a first data set associated with a patient; a database having historical data corresponding to one or more previous patient data sets; and one or more processors operatively coupled to the first monitoring device, the one or more processors configured to: receive the first data set from the first monitoring device; generate, by an artificial intelligence algorithm using the first data set, a first representation; process, by the artificial intelligence algorithm using the historical data, the first data set to define a first processed data set; and generate, by the artificial intelligence algorithm, a first output based on at least one of the first representation or the first processed data set.
2. The system of claim 1, wherein the first monitoring device is an imaging device.
3. The system of claim 1, wherein the artificial intelligence algorithm includes representation learning.
4. The system of claim 3, wherein representation learning includes at least one of a convolutional neural network, a Siamese neural network, or an autoencoder network.
5. The system of claim 1, wherein the first output can include at least one of a clinical answer or a change map.
6. The system of claim 1, further comprising: a second monitoring device configured to generate a second data set associated with the patient, wherein the second data set is higher fidelity than the first data set, and wherein the one or more processors are further configured to: receive the second data set from the second monitoring device; generate, by the artificial intelligence algorithm using the second data set, a second representation; process, by the artificial intelligence algorithm using the historical data, the second data set to define a second processed data set; generate, by the artificial intelligence algorithm, a second output based on at least one of the second representation and the second processed data set; and update the artificial intelligence algorithm using at least one of the second output, the second representation, and the second processed data set.
7. The system of claim 6, wherein the second monitoring device is an imaging device.
8. The system of claim 6, further comprising: a third monitoring device configured to generate a third data set associated with the patient, wherein the third data set is higher fidelity than the second data set; and wherein the one or more processors are further configured to: receive the third data set from the third monitoring device; generate, by the artificial intelligence algorithm using the third data set, a third representation; process, by the artificial intelligence algorithm using the historical data, the third data set to define a third processed data set; generate, by the artificial intelligence algorithm, a third output based on at least one of the third representation and the third processed data set; and update the artificial intelligence algorithm using at least one of the third output, the third representation, and the third processed data set.
9. The system of claim 8, wherein the first monitoring device is a sensing device, the second monitoring device is an imaging device, and the third monitoring device is an imaging device.
10. The system of claim 9, wherein the sensing device is configured to continuously monitor the patient.
11. A non-transitory processor-readable medium storing code representing instructions to be executed by one or more processors, the instructions comprising code to cause the one or more processors to: receive, from a first monitoring device configured to generate a first data set associated with a patient, the first data set; generate, by an artificial intelligence algorithm using the first data set, a first representation; process, by the artificial intelligence algorithm using historical data from a database, the first data set to define a first processed data set, the historical data corresponding to previous patient data sets; and generate, by the artificial intelligence algorithm, a first output based on at least one of the first representation and the first processed data set.
12. The non-transitory processor-readable medium of claim 11, further comprising code to cause the one or more processors to: receive, from a second monitoring device configured to generate a second data set associated with the patient, the second data set from the second monitoring device; generate, by the artificial intelligence algorithm using the second data set, a second representation; process, by the artificial intelligence algorithm using the historical data, the second data set to define a second processed data set; generate, by the artificial intelligence algorithm, a second output based on at least one of the second representation and the second processed data set; and update the artificial intelligence algorithm using at least one of the second output, the second representation, and the second processed data set.
13. The non-transitory processor-readable medium of claim 12, further comprising code to cause the one or more processors to: receive, from a third monitoring device configured to generate a third data set associated with the patient, the third data set from the third monitoring device; generate, by the artificial intelligence algorithm using the third data set, a third representation; process, by the artificial intelligence algorithm using the historical data, the third data set to define a third processed data set; generate a third output based on at least one of the third representation and the third processed data set; and update the artificial intelligence algorithm using at least one of the third output, the third representation, and the third processed data set.
14. The non-transitory processor-readable medium of claim 13, wherein the first monitoring device is a sensing device, the second monitoring device is an imaging device, and the third monitoring device is an imaging device.
15. The non-transitory processor-readable medium of claim 13, wherein the output includes at least one of indirect tomography and a clinical answer.
16. A method comprising: receiving, from a first monitoring device configured to generate a first data set associated with a patient, the first data set; generating, by an artificial intelligence algorithm using the first data set, a first representation; processing, by the artificial intelligence algorithm using historical data, the first data set to define a first processed data set, the historical data corresponding to previous patient data sets; and generating, by the artificial intelligence algorithm, a first output based on at least one of the first representation and the first processed data set.
17. The method of claim 16, wherein the artificial intelligence algorithm includes representation learning.
18. The method of claim 16, further comprising: receiving, from a second monitoring device configured to generate a second data set associated with the patient, the second data set from the second monitoring device; generating, by the artificial intelligence algorithm using the second data set, a second representation; processing, by the artificial intelligence algorithm using the historical data, the second data set to define a second processed data set; generating, by the artificial intelligence algorithm, a second output based on at least one of the second representation and the second processed data set; and updating the artificial intelligence algorithm using at least one of the second output, the second representation, and the second processed data set.
19. The method of claim 18, further comprising: receiving, from a third monitoring device configured to generate a third data set associated with the patient, the third data set from the third monitoring device; generating, by the artificial intelligence algorithm using the third data set, a third representation; processing, by the artificial intelligence algorithm using the historical data, the third data set to define a third processed data set; generating, by the artificial intelligence algorithm, a third output based on at least one of the third representation and the third processed data set; and updating the artificial intelligence algorithm using at least one of the third output, the third representation, and the third processed data set.
20. The method of claim 19, wherein the first monitoring device is a sensing device, the second monitoring device is an imaging device, and the third monitoring device is an imaging device.
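The tiered pipeline recited in claims 1-10 (receive a data set, derive a learned representation, process the data against historical patient data, and emit an output such as a change map) can be sketched roughly as below. This is a minimal illustration only, not the claimed implementation: the function names, the fixed random projection standing in for a trained representation-learning network (claim 4), and the mean-deviation "change map" are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "encoder": a fixed random projection standing in for a trained
# representation-learning network (e.g., an autoencoder per claim 4).
W_enc = rng.standard_normal((8, 64)) / 8.0

def encode(data_set: np.ndarray) -> np.ndarray:
    """Map a 64-sample data set to an 8-dimensional representation."""
    return np.tanh(W_enc @ data_set)

def process_with_history(data_set: np.ndarray, history: np.ndarray) -> np.ndarray:
    """Define a 'processed data set' by removing the historical mean."""
    return data_set - history.mean(axis=0)

def change_map(representation: np.ndarray, historical_reps: np.ndarray) -> np.ndarray:
    """Per-dimension deviation of the current representation from history."""
    return np.abs(representation - historical_reps.mean(axis=0))

# Database of previous patient data sets (claim 1's historical data).
history = rng.standard_normal((10, 64))
historical_reps = np.stack([encode(h) for h in history])

# A first monitoring device produces a first data set.
first_data_set = rng.standard_normal(64)

first_rep = encode(first_data_set)                          # first representation
first_processed = process_with_history(first_data_set, history)  # first processed data set
first_output = change_map(first_rep, historical_reps)       # first output (change map)

print(first_output.shape)
```

Higher-fidelity second and third data sets (claims 6 and 8) would pass through the same encode/process/output steps, with the resulting outputs and representations fed back to update the model.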
PCT/US2023/013257 2022-02-16 2023-02-16 Ai-powered devices and methods to provide image and sensor informed early warning of changes in health WO2023158762A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202263310975P 2022-02-16 2022-02-16
US63/310,975 2022-02-16

Publications (1)

Publication Number Publication Date
WO2023158762A1 (en) 2023-08-24

Family

ID=87579065

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2023/013257 WO2023158762A1 (en) 2022-02-16 2023-02-16 Ai-powered devices and methods to provide image and sensor informed early warning of changes in health

Country Status (1)

Country Link
WO (1) WO2023158762A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110004110A1 (en) * 2000-05-30 2011-01-06 Vladimir Shusterman Personalized Monitoring and Healthcare Information Management Using Physiological Basis Functions
US20180303413A1 (en) * 2015-10-20 2018-10-25 Healthymize Ltd System and method for monitoring and determining a medical condition of a user
US20180322253A1 (en) * 2017-05-05 2018-11-08 International Business Machines Corporation Sensor Based Monitoring
US20190307335A1 (en) * 2012-12-03 2019-10-10 Ben F. Bruce Medical analysis and diagnostic system
US20190340470A1 (en) * 2016-11-23 2019-11-07 General Electric Company Deep learning medical systems and methods for image reconstruction and quality evaluation



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23756891

Country of ref document: EP

Kind code of ref document: A1