WO2025042892A1 - Angiographically derived microvascular obstruction determination


Info

Publication number: WO2025042892A1
Application number: PCT/US2024/043041
Authority: WIPO (PCT)
Other languages: French (fr)
Prior art keywords: mvo, data, angiography, indication, processing circuitry
Inventors: Stephen G. NASH, James Delahunty, Brian J. Kelly
Original assignee: Medtronic Vascular, Inc.
Application filed by Medtronic Vascular, Inc.
Publication of WO2025042892A1

Abstract

An example medical system includes memory configured to store angiography data of a patient and processing circuitry communicatively coupled to the memory. The processing circuitry is configured to obtain the angiography data. The processing circuitry is configured to execute at least one machine learning model to determine, based at least in part on the angiography data, an indication of microvascular obstruction (MVO). The indication of MVO is indicative of at least one of a likelihood of the patient having MVO or a severity of MVO. The processing circuitry is configured to output the indication of MVO.

Description

ANGIOGRAPHICALLY DERIVED MICROVASCULAR OBSTRUCTION DETERMINATION
[0001] This application claims the benefit of U.S. Provisional Application No. 63/578,020, filed August 22, 2023, the entire content of which is hereby incorporated by reference.
TECHNICAL FIELD
[0002] This disclosure relates to imaging, such as imaging used during a medical procedure.
BACKGROUND
[0003] During a medical procedure, a clinician may use an imaging system to be able to visualize internal anatomy of a patient. Such an imaging system may display anatomy, medical instruments, or the like, and may be used to diagnose a patient condition or assist in guiding a clinician in navigating a device inside a patient, such as moving a medical instrument to an intended location inside the patient. Imaging systems may use sensors to capture image data which may be displayed during the medical procedure. Imaging systems include angiography systems, computed tomography (CT) scan systems (including coronary computed tomography angiography (CCTA) systems), fluoroscopic systems (e.g., isocentric C-arm fluoroscopic systems), intravascular ultrasound (IVUS) systems, other ultrasound imaging systems, optical coherence tomography (OCT) fractional flow reserve (FFR) systems, magnetic resonance imaging (MRI) systems, positron emission tomography (PET) systems, as well as other imaging systems.
SUMMARY
[0004] Microvascular obstruction (MVO) is a coronary condition which may cause tissue death, increasing infarct size over time after an ST-segment elevation myocardial infarction (STEMI). There currently is not an objective way of identifying MVO in a patient in a catheterization laboratory (cath lab). Most measures for identifying MVO are subjectively performed, for example by “eyeballing,” and/or using angiographic indices, such as thrombolysis in myocardial infarction (TIMI) flow, myocardial blush score, and electrocardiogram (ECG) data. While prompt treatment for MVO may limit tissue death, clinicians may be reluctant to employ such treatments in all but the most severe cases, because such treatments may be relatively invasive or involved and clinicians may not know whether a patient without a most severe case of MVO would receive any benefit from such treatment. One such treatment may involve removing blood from the body of the patient and mixing the blood with a super-oxygenated saline before reintroducing the blood to the body of the patient. This treatment may last for an hour or so and provide little, if any, benefit to patients not having MVO.
[0005] As such, there may be a desire for an objective measure of MVO, which may lead to more frequent, earlier, and better use of available treatments across patients, because patients with similar objective MVO measures may receive treatments shown to be effective in other patients having such measures.
[0006] Currently, a clinician may subjectively categorize MVO on a scale of 0-3, based on personally viewed angiographic data. A 3 generally indicates that the blood flow appears to be relatively brisk (which may be good for the patient). A 0 generally indicates that blood flow appears to be sluggish (which may be bad for the patient). However, such ratings are visually subjective and a given patient may be rated differently by different clinicians viewing the same angiographic data. For example, one clinician may categorize MVO of the patient as a 0, while another clinician may categorize the MVO of the patient as a 1. This may make the difference between the patient receiving treatment for MVO or not receiving treatment for MVO. This may also make the difference between which type of treatment the patient may receive, if they receive any treatment at all.
[0007] This disclosure describes a system and techniques for objectively (e.g., quantitatively) determining an indication of MVO in a patient, thus improving a clinician's ability to make informed treatment decisions earlier during the patient timeline. In some examples, the techniques involve using a machine learning model or artificial intelligence model to determine an indication of the likelihood or severity of MVO, such as an MVO score. While either a machine learning model or an artificial intelligence model may be utilized, for simplicity, this disclosure describes the use of a machine learning model hereinafter. It should be understood that the techniques of this disclosure may be performed using machine learning model(s) and/or artificial intelligence model(s). For example, the machine learning model may determine an indication of MVO based on angiography data and/or non-angiography data associated with the patient. In some examples, the indication of MVO is associated with a location within the anatomy of the patient. In some examples, the techniques include determining a recommended treatment, such as a pharmaceutical treatment and/or coronary interventional procedure, based on the MVO score and providing such recommendation to the clinician.
[0008] In one example, the disclosure describes a medical system comprising: memory configured to store angiography data of a patient; and processing circuitry communicatively coupled to the memory, the processing circuitry being configured to: obtain the angiography data; execute at least one machine learning model to determine, based at least in part on the angiography data, an indication of microvascular obstruction (MVO), the indication of MVO being indicative of at least one of a likelihood of the patient having MVO or a severity of MVO; and output the indication of MVO.
[0009] In another example, the disclosure describes a method comprising: obtaining, by processing circuitry, angiography data; executing, by the processing circuitry, at least one machine learning model to determine, based at least in part on the angiography data, an indication of microvascular obstruction (MVO), the indication of MVO being indicative of at least one of a likelihood of the patient having MVO or a severity of MVO; and outputting, by the processing circuitry, the indication of MVO.
[0010] In yet another example, the disclosure describes a non-transitory computer readable medium comprising instructions, which, when executed, cause processing circuitry to: obtain angiography data of a patient; execute at least one machine learning model to determine, based at least in part on the angiography data, an indication of microvascular obstruction (MVO), the indication of MVO being indicative of at least one of a likelihood of the patient having MVO or a severity of MVO; and output the indication of MVO.
[0011] These and other aspects of the present disclosure will be apparent from the detailed description below. In no event, however, should the above summaries be construed as limitations on the claimed subject matter, which subject matter is defined solely by the attached claims.
[0012] This summary is intended to provide an overview of the subject matter described in this disclosure. It is not intended to provide an exclusive or exhaustive explanation of the apparatus and methods described in detail within the accompanying drawings and description below. Further details of one or more examples are set forth in the accompanying drawings and the description below.
BRIEF DESCRIPTION OF DRAWINGS
[0013] FIG. 1 is a schematic perspective view of one example of a system for determining an indication of MVO according to one or more aspects of this disclosure.
[0014] FIG. 2 is a schematic view of one example of a computing system of the system of FIG. 1.
[0015] FIG. 3 is a flow diagram illustrating example machine learning model verification techniques according to one or more aspects of this disclosure.
[0016] FIG. 4 is a conceptual diagram illustrating an example machine learning model according to one or more aspects of this disclosure.
[0017] FIG. 5 is a conceptual diagram illustrating an example training process for a machine learning model according to one or more aspects of this disclosure.
[0018] FIG. 6 is a conceptual diagram illustrating another example training process for a machine learning model according to one or more aspects of this disclosure.
DETAILED DESCRIPTION
[0019] During a cath lab procedure, such as a diagnostic procedure or a percutaneous coronary intervention (PCI), a clinician may utilize an angiographic imager, such as a fluoroscopic imager using contrast, to visualize coronary vasculature and/or other anatomy of a patient. Other equipment may also be employed during the diagnostic procedure or PCI which may output data. A system may utilize such angiography data and/or non-angiography data to objectively determine an indication of MVO, such as an MVO score. The system and/or a clinician may determine a treatment for the patient based, at least in part, on the indication of MVO.
[0020] The techniques of this disclosure may utilize image processing to analyze angiography data, and may also analyze non-angiography data, to determine an indication of MVO which may be associated with a degree of MVO or a likelihood of MVO in the patient. In some examples, the techniques may include determining an indication of MVO for a particular location within the anatomy of the patient. As such, the techniques of this disclosure may include determining a plurality of indications of MVO, such as MVO scores, for a particular patient, each of the plurality of MVO scores being associated with a respective location within the anatomy of the patient.
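To make the idea of location-specific MVO indications concrete, the following is a minimal sketch in Python of one possible data structure; the field names and scoring scale are illustrative assumptions and are not part of the disclosure:

    from dataclasses import dataclass

    @dataclass
    class MvoIndication:
        """One objectively determined indication of MVO for one anatomic location."""
        location: str          # e.g., a coronary segment label (assumed naming scheme)
        mvo_score: float       # e.g., 0.0 (no MVO) to 1.0 (severe MVO); scale is illustrative
        likelihood: float      # probability that MVO is present at this location

    # A patient may have several indications, one per location of interest.
    patient_mvo = [
        MvoIndication(location="LAD-distal", mvo_score=0.7, likelihood=0.85),
        MvoIndication(location="LCx-mid", mvo_score=0.1, likelihood=0.20),
    ]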
[0021] FIG. 1 is a schematic perspective view of one example of a system for determining an indication of MVO according to one or more aspects of this disclosure. System 100 includes a display device 110, a table 120, an imager 140, and a computing device 150. System 100 may be an example of a system for use in an emergency room or a cath lab. In some examples, system 100 may include other devices, not shown for simplicity purposes. In some examples, system 100 may also include server 160, which may be co-located with the other devices of system 100 or may be located elsewhere. System 100 may be used during a medical procedure, such as an interventional medical procedure like a PCI and/or a diagnostic medical procedure.
[0022] Computing device 150 may include, for example, an off-the-shelf device such as a laptop computer, desktop computer, tablet computer, smart phone, or other similar device or may include a specific purpose device. Computing device 150 may perform various control functions with respect to imager 140. In some examples, computing device 150 may include a guidance workstation. Computing device 150 may control the operation of imager 140 and receive the output of imager 140 and may receive angiography data from imager 140. Computing device 150 may execute a machine learning model and determine an indication of MVO for anatomy of the patient.
[0023] Display device 110 may be configured to output instructions, images, and messages relating to the medical procedure(s). For example, display device 110 may display angiography data obtained through imager 140 and/or a representation of the MVO score associated with the anatomy being displayed. Table 120 may be, for example, an operating table or other table suitable for use during a medical procedure.
[0024] In the example of FIG. 1, imager 140, such as an angiography imager (or other imaging device), may be used to image relevant portions of the patient’s anatomy during a medical procedure to visualize the anatomy, characteristics and locations of lesions or other issues inside the patient’s body through the generation of imaging data. As such, imager 140 may capture angiography data. While described herein primarily as an angiography imager, imager 140 may be any type of imaging device, such as an angiography device, a fluoroscopy device, a CT device, a CCTA device, an IVUS device, an OCT-FFR device, an MRI device, a PET device, an ultrasound device, or the like. In some examples, imager 140 may represent more than one imaging device, such as a plurality of any of the aforementioned devices.
[0025] Imager 140 may image a region of interest in the patient’s body. The particular region of interest may be dependent on anatomy, the medical procedure, patient symptoms, and/or the like. For example, when performing a cardiac medical procedure, a portion of the vasculature and/or the heart may be within the region of interest.
[0026] Additional equipment 170 may include equipment utilized during a medical procedure, for example, to monitor patient parameters. For example, additional equipment 170 may include an electrocardiogram (ECG) device for monitoring an ECG of the patient during the medical procedure. Additionally, or alternatively, additional equipment 170 may include a fractional flow reserve (FFR) device, a coronary flow reserve (CFR) device, an index of microvascular resistance (IMR) device, a hemodynamics measurement device, or the like.
[0027] Computing device 150 may be communicatively coupled to imager 140, additional equipment 170, display device 110 and/or server 160, for example, by wired, optical, or wireless communications. Server 160 may be a hospital server which may or may not be located in an emergency room or cath lab of a hospital, a cloud-based server, or the like. Server 160 may be configured to store patient imaging data (such as angiography data), electronic healthcare or medical records, or the like. In some examples, server 160 may be configured to execute the machine learning model(s) and/or perform one or more of, or a portion of one or more of, the determinations associated therewith.
[0028] Any of, or any combination of, computing device 150, imager 140, and/or server 160 may include one or more machine learning model(s). For example, computing device 150, imager 140, and/or server 160 may obtain angiography data, e.g., via imager 140. Computing device 150, imager 140, and/or server 160 may execute at least one machine learning model to determine, based at least in part on the angiography data, an indication of MVO. The indication of MVO may be indicative of a likelihood of the patient having MVO and/or a severity of MVO of the patient. For example, the indication of MVO may be an MVO score that may be objectively determined via the execution of the at least one machine learning model. Computing device 150, imager 140, and/or server 160 may output the indication of MVO. For example, computing device 150, imager 140, and/or server 160 may output the indication of MVO for display on display device 110. The indication of MVO may include a numerical score, a color, a fill pattern, or other visual means of indicating a likelihood the patient has MVO. In some examples, the indication may not be visual or may include visual, as well as other elements, such as auditory, tactile, or the like. For example, computing device 150, imager 140, and/or server 160 may output a representation of an MVO score for display, for example, to display device 110. For example, display device 110 may overlay a representation of the MVO score on live angiography images from imager 140. In some examples, computing device 150, imager 140, and/or server 160 may determine a location of the anatomy of the patient associated with the indication of MVO and output an indication of the location (e.g., by color, flashing, or other visual identification technique) associated with the indication of MVO in the anatomy of the patient.
[0029] By determining and outputting a representation of the indication of MVO, system 100 may assist clinicians in more effectively determining whether to treat, and/or how to treat, potential MVO in a patient. The techniques of this disclosure may utilize data already being collected during a medical procedure, such that additional diagnostic procedures are not necessary to determine whether treatment is appropriate or to determine which treatment option should be pursued for the patient. As such, the techniques of this disclosure may improve patient outcomes, as patients may be treated better and in a more timely manner, and/or improve medical facility efficiency, as a follow-up diagnostic procedure may not be necessary to identify candidate patients for treatment.
[0030] FIG. 2 is a schematic view of one example of a computing device 150 of system 100 of FIG. 1. Computing device 150 may include a workstation, a desktop computer, a laptop computer, a smart phone, a tablet, a dedicated computing device, or any other computing device capable of performing the techniques of this disclosure.
[0031] Computing device 150 may be configured to perform processing, control and other functions associated with imager 140. As shown in FIG. 2, computing device 150 may represent multiple instances of computing devices, each of which may be associated with imager 140. Computing device 150 may include, for example, memory 202, processing circuitry 204, a display 206, a network interface 208, input device(s) 210, and/or output device(s) 212, each of which may represent any of multiple instances of such a device within the computing system, for ease of description.
[0032] While processing circuitry 204 appears in computing device 150 in FIG. 2, in some examples, features attributed to processing circuitry 204 may be performed by processing circuitry of any of computing device 150, imager 140, or server 160, or combinations thereof. In some examples, one or more processors associated with processing circuitry 204 in the computing system may be distributed and shared across any combination of computing device 150, imager 140, and server 160. Computing device 150 may be used to perform any of the techniques described in this disclosure, and may form all or part of devices or systems configured to perform such techniques, alone or in conjunction with other components, such as components of computing device 150, imager 140, server 160, or a system including any or all of such systems/devices.
[0033] Memory 202 of computing device 150 includes any non-transitory computer-readable storage media for storing data or software that is executable by processing circuitry 204 and that controls the operation of computing device 150 and/or imager 140, as applicable. It should be noted that memory 202 may include one or more memory devices. In one or more examples, memory 202 may include one or more solid-state storage devices such as flash memory chips. In one or more examples, memory 202 may include one or more mass storage devices connected to the processing circuitry 204 through a mass storage controller (not shown) and a communications bus (not shown).
[0034] Although the description of computer-readable media herein refers to solid-state storage, it should be appreciated by those skilled in the art that computer-readable storage media may include any available media that may be accessed by the processing circuitry 204. That is, computer-readable storage media includes non-transitory, volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. For example, computer-readable storage media includes RAM, ROM, EPROM, EEPROM, flash memory and/or other solid state memory technology, CD-ROM, DVD, Blu-Ray and/or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage and/or other magnetic storage devices, and/or any other medium that may be used to store the desired information and that may be accessed by computing device 150. In one or more examples, computer-readable storage media may be stored in the cloud or remote storage and accessed using any suitable technique or techniques through at least one of a wired or wireless connection.
[0035] Memory 202 may store angiography data 214, non-angiography data 216, 3D model 228, and/or MVO data 226. Angiography data 214 may include a plurality of angiography images obtained, for example, from imager 140 during the medical procedure. In some examples, angiography data 214 may also include images obtained during a prior diagnostic angiography medical procedure. While the medical procedure is proceeding in time, additional angiography data may be obtained from imager 140 and stored in angiography data 214. Such angiography data may be displayed via display 206 and/or display device 110 and may be used by a clinician when navigating a medical instrument through anatomy of a patient.
[0036] Angiography data 214 may be generated by imager 140 of anatomy of the patient and obtained by computing device 150 via network interface 208, which may be communicatively coupled to imager 140. In some examples, imager 140 may generate other types of imaging data, such as when imager 140 represents more than one imaging device. For example, imager 140 may generate cardiac magnetic resonance imaging (CMR) data and/or CT angiography (CTA) data, which may be examples of non-angiography data 216 which may be used to determine an indication of MVO.
[0037] For example, angiography data 214 may be captured by imager 140 (FIG. 1). Processing circuitry 204 may obtain angiography images of angiography data 214 from imager 140 and store the angiography images in angiography data 214 in memory 202. Non-angiography data 216 may be captured by imager 140 and/or additional equipment 170 (FIG. 1). Processing circuitry 204 may obtain non-angiography data 216 and store non-angiography data 216 in memory 202. Processing circuitry 204 may execute user interface 218 so as to cause display 206 (and/or display device 110 of FIG. 1) to present user interface 218 to one or more clinicians performing the medical procedure. User interface 218 may display angiography data 214 and/or non-angiography data 216.
[0038] Memory 202 may also store one or more machine learning model(s) 222 and user interface 218. Machine learning model(s) 222 may be configured to, when executed by processing circuitry 204, determine an indication of MVO based on angiography data 214 and/or non-angiography data 216 and may store the indication of MVO in MVO data 226. The indication of MVO may be indicative of a likelihood and/or severity of MVO in a patient. In some examples, the indication of MVO may take the form of an MVO score. Machine learning model(s) 222 may also be configured, when executed by processing circuitry 204, to determine a location associated with the indication of MVO, a probability of MVO deterioration, and/or a probability of heart failure, which processing circuitry 204 may store in MVO data 226.
[0039] In some examples, angiography data 214 includes live angiography data, i.e., angiography data captured during the current medical procedure. In some examples, non-angiography data 216 includes live non-angiography data captured during the current medical procedure. In some examples, processing circuitry 204 may use angiography data 214 and/or non-angiography data 216 to build 3D model 228 of anatomy of the patient.
[0040] Processing circuitry 204 may use angiography data 214 and/or non- angiography data 216 to quantitatively assess the possibility of the presence of and/or severity of MVO in a patient. Angiography data 214 may include data that is determined based on angiography images captured by imager 140. For example, angiography data 214 that may be used by processing circuitry 204 to quantitatively assess the possibility of presence and/or severity of MVO includes TIMI flow, TIMI epicardial flow grade data, TIMI frame count (TFR) data, TIMI myocardial perfusion grade (TMP) data, and/or myocardial blush grade (MBG) data. For example, processing circuitry 204 may determine one or more of TIMI flow data, TIMI epicardial flow grade data, TFR data, TMP data, or MBG data based on angiography image(s) in angiography data 214. In some examples, processing circuitry 204 may execute machine learning model(s) 222 to determine one or more of TIMI flow data, TIMI epicardial flow grade data, TFR data, TMP data, or MBG data.
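As one illustration of how an index such as a TIMI-style frame count might be derived from an angiographic cine run, the following Python sketch counts the frames between contrast arriving at a proximal region of interest and a distal landmark region, using mean pixel intensity (contrast darkens the vessel in X-ray angiography). The region definitions and threshold are simplifying assumptions for illustration; they are not the method described in the disclosure:

    import numpy as np

    def frame_count(cine, proximal_roi, distal_roi, drop_fraction=0.85):
        """Estimate a TIMI-style frame count from a cine run.

        cine: array of shape (num_frames, height, width), grayscale angiography frames.
        proximal_roi, distal_roi: (row_slice, col_slice) tuples defining regions of interest.
        drop_fraction: a region counts as opacified once its mean intensity falls below
                       this fraction of its baseline (first-frame) intensity.
        """
        def arrival_frame(roi):
            rows, cols = roi
            means = cine[:, rows, cols].mean(axis=(1, 2))
            baseline = means[0]
            opacified = np.where(means < drop_fraction * baseline)[0]
            return int(opacified[0]) if opacified.size else None

        start = arrival_frame(proximal_roi)
        end = arrival_frame(distal_roi)
        if start is None or end is None or end < start:
            return None  # contrast never reached one of the regions in this run
        return end - start

    # Example with synthetic data: 60 frames of 256x256 pixels.
    cine = np.full((60, 256, 256), 200.0)
    cine[5:, 100:110, 40:50] -= 80.0     # contrast reaches the proximal ROI at frame 5
    cine[23:, 100:110, 200:210] -= 80.0  # and the distal ROI at frame 23
    print(frame_count(cine, (slice(100, 110), slice(40, 50)),
                      (slice(100, 110), slice(200, 210))))  # -> 18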
[0041] Non-angiography data may include data that is not determined based on angiography images captured by imager 140. For example, non-angiography data 216 that may be used by processing circuitry 204 to quantitatively assess the possibility of presence and/or severity of MVO includes fractional flow reserve (FFR) data, coronary flow reserve (CFR) data, index of microvascular resistance (IMR) data, cardiac magnetic resonance imaging (CMR) data, ECG data, hemodynamics data, and/or CT angiography (CTA) data. In some examples, processing circuitry 204 may determine FFR data, CFR data, IMR data, CMR data, ECG data, hemodynamics data, and/or CTA data from data captured by additional equipment 170. In the event that imager 140 represents a plurality of imagers, processing circuitry 204 may determine FFR data, CFR data, IMR data, CMR data, ECG data, hemodynamics data, and/or CTA data from non-angiography imagers of imager 140. In some examples, processing circuitry 204 may determine non-angiography data 216 from a combination of imager 140 and additional equipment 170.
[0042] For example, system 100 may utilize angiography data 214 and/or non- angiography data 216 which may already be collected as part of a typical workflow for a cath lab medical procedure.
[0043] For example, processing circuitry 204 may execute machine learning model(s) 222, such as a neural network, to more objectively assess MVO. In some examples, processing circuitry 204 may use such an assessment to determine and output a suggested treatment for an affected patient, such as a pharmaceutical and/or coronary interventional procedure.
[0044] In some examples, machine learning model(s) 222 may be trained using angiography images, such as fluoroscopy with contrast images, hemodynamic readings, ECG data, or the like. For example, machine learning model(s) 222 may be trained using angiography images that have been annotated by a person trained to identify and score indices such as TIMI, TFR, TMP, MBG, and/or ECG. These may later be validated and/or further trained using CMR and/or CFR and IMR data. Machine learning model(s) 222 may include naive Bayes, k-nearest neighbors, random forest, support vector machines, neural networks, linear regression, logistic regression, etc.
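As a minimal sketch of how one of the classical model types listed above could be fit to annotated index data, the following Python example trains a random forest on per-case feature vectors (e.g., TIMI flow grade, frame count, blush grade, and an ECG summary feature) against expert MVO labels. The feature layout and labels here are synthetic stand-ins; a clinical model would be trained and validated on real annotated data:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical features: [TIMI flow grade, TIMI frame count, blush grade, ST-resolution %]
    X = rng.uniform([0, 10, 0, 0], [3, 100, 3, 100], size=(200, 4))
    # Hypothetical expert annotation: 1 = MVO present on follow-up, 0 = absent (noisy stand-in).
    y = (X[:, 1] > 55).astype(int) ^ (rng.random(200) < 0.1).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

    # The predicted probability of MVO for a new case can serve as one component of an MVO indication.
    print(model.predict_proba(X_test[:1]))
    print("held-out accuracy:", model.score(X_test, y_test))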
[0045] Training output data may include raw MVO severity and/or MVO location indications. Such output data may be reviewed and annotated or corrected and used to further train machine learning model(s) 222. For example, a raw CMR image may be translated into a classified region via a manual evaluation and/or an automated pixel-based algorithm. Such a pixel-based algorithm may allow for a higher spatial resolution, such as thousands of segments in 3D-space, rather than the traditional 17-segment model in current use.
[0046] Processing circuitry 204 may execute machine learning model(s) 222 to generate an indication of MVO and/or a location of the anatomy of the patient associated with the indication of MVO. Processing circuitry 204 may control display 206, network interface 208, and/or output device(s) 212 to output the indication of MVO and/or the location of the anatomy of the patient associated with the indication of MVO. For example, display 206 and/or display device 110 may overlay the output on live angiography data being displayed. If the pixel-based algorithm is used, this may provide finer resolution of the overlaid data, therefore providing more accurate targeting of potentially ischemic regions.
[0047] In some examples, anatomy (e.g., cardiac tissue) within the training angiography images may be divided or classified into a plurality of regions.
[0048] Angularity of angiography data 214 may cause imaging data captured at one angle to vary from imaging data captured at another angle. Therefore, machine learning model(s) 222 may be trained to work with a specific set of angles of angiography images. In some examples, machine learning model(s) 222 may include a plurality of machine learning models that each may be trained to work with a different angle, or set of angles, of angiography images. The set of angles may be initially limited and may be expanded as more training data (e.g., data from angles outside of the initial set of angles) is collected. In some examples, angulation metadata associated with angiography images captured by imager 140 may be fed into machine learning model(s) 222 at a different level of a convolutional neural network (CNN) to allow the CNN to account for the angularity of the input data.
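One way to let a single network account for acquisition angle, as described above, is to inject the angulation metadata after the convolutional feature extractor rather than at the image input. The following PyTorch sketch is a hypothetical minimal architecture under that assumption; the layer sizes and the two-value angle encoding (e.g., LAO/RAO and cranial/caudal degrees) are illustrative only:

    import torch
    import torch.nn as nn

    class AngleAwareMvoNet(nn.Module):
        """Toy CNN that fuses C-arm angulation metadata with image features."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),  # -> (batch, 16, 1, 1) regardless of image size
            )
            self.head = nn.Sequential(
                nn.Linear(16 + 2, 32), nn.ReLU(),  # 16 image features + 2 angle values
                nn.Linear(32, 1), nn.Sigmoid(),    # likelihood-of-MVO style output
            )

        def forward(self, image, angles):
            feats = self.features(image).flatten(1)    # (batch, 16)
            fused = torch.cat([feats, angles], dim=1)  # append angulation metadata at a later level
            return self.head(fused)

    net = AngleAwareMvoNet()
    frame = torch.randn(4, 1, 128, 128)               # batch of grayscale angiography frames
    angulation = torch.tensor([[30.0, -10.0]] * 4)    # [LAO/RAO deg, cranial/caudal deg], assumed encoding
    print(net(frame, angulation).shape)               # -> torch.Size([4, 1])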
[0049] In some examples, in order to provide more, or more varied, angiography data 214 from which machine learning model(s) 222 may determine the likelihood and/or severity of MVO, processing circuitry 204 may output (e.g., via display 206, display device 110, network interface 208, and/or output device(s) 212) a request that a clinician provide multiple fluoroscopy angles (or a rotational c-arm sweep) using imager 140. In some examples, processing circuitry 204 may control imager 140 (e.g., via network interface 208) to capture imaging data from a plurality of angles, in an effort to improve the accuracy of any prediction made by machine learning model(s) 222.
[0050] In some examples, the system may generate a 3D model 228 of anatomy of the patient based on the angiography images of angiography data 214. Various techniques for generating 3D models of patient anatomy exist and may be known to those skilled in the art. In some examples, processing circuitry 204 may map features onto 3D model 228 to account for the angularity of angiography data 214.
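A common building block for generating a 3D model from 2D angiographic projections is triangulating matched vessel points from two views with known projection geometry. The following numpy sketch uses standard direct linear transform (DLT) triangulation; the projection matrices and matched points are placeholders and do not represent calibration data from any particular imaging system:

    import numpy as np

    def triangulate(P1, P2, pt1, pt2):
        """Recover a 3D point from its projections in two views.

        P1, P2: 3x4 projection matrices of the two C-arm views (assumed known/calibrated).
        pt1, pt2: (x, y) pixel coordinates of the same vessel point in each view.
        """
        x1, y1 = pt1
        x2, y2 = pt2
        A = np.vstack([
            x1 * P1[2] - P1[0],
            y1 * P1[2] - P1[1],
            x2 * P2[2] - P2[0],
            y2 * P2[2] - P2[1],
        ])
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]  # dehomogenize

    # Synthetic check: project a known 3D point through two toy cameras, then recover it.
    X_true = np.array([10.0, -5.0, 80.0, 1.0])
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                   # reference view
    P2 = np.hstack([np.eye(3), np.array([[-20.0], [0.0], [0.0]])])  # translated second view
    proj = lambda P, X: (P @ X)[:2] / (P @ X)[2]
    print(triangulate(P1, P2, proj(P1, X_true), proj(P2, X_true)))  # ~ [10., -5., 80.]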
[0051] Machine learning model(s) 222 may be trained to assess the possibility of the presence of and/or severity of MVO based on 3D model 228. In some examples, machine learning model(s) 222 may be trained to identify abruptly cut-off or narrowing vasculature at a very small scale which might be missed by a human viewer without lengthy inspection, for example, using a blush-based model. For example, processing circuitry 204 may identify microvasculature which may be input into machine learning model(s) 222 to provide contextual information as to where contrast may be expected to perfuse well into the myocardium (and vice versa).
[0052] In some examples, machine learning model(s) 222 may be trained to perform flow and/or volume modeling where an estimate of volume at various points of the patient anatomy may be determined and analyzed to detect anomalies. For example, processing circuitry 204 executing machine learning model(s) 222 may estimate the volume and location/morphology of contrast as it flows through vasculature of the patient. In some examples, machine learning model(s) 222 may include a pixel-based algorithm to estimate the volume on a pixel-by-pixel basis in angiography data 214 and then feed the estimates into 3D model 228. Processing circuitry 204 executing machine learning model(s) 222 may then detect anomalies between expected flow throughout that vasculature and the observed (e.g., estimated) contrast path. Such anomalies may be indicative of a likelihood and/or severity of MVO in the patient.
[0053] When processing circuitry 204 determines a likelihood of the presence of MVO (for example, when the likelihood of the presence of MVO meets a predetermined threshold), processing circuitry 204 may also determine the extent of the likely MVO. This extent of the likely MVO may include a probability of MVO deterioration and/or a probability of heart failure. Processing circuitry 204 may also determine a location associated with the likelihood of the MVO. Processing circuitry 204 may control display 206 and/or display device 110 to overlay such information on a currently displayed angiography image, for example, highlighting the risk area and/or displaying a representation (e.g., a graphical representation) of whether treatment is recommended.
[0054] An example patient scenario is now described. The patient is identified as having a STEMI. A clinician may perform a PCI to attempt to unblock the infarct vessel. For example, the clinician may perform the PCI in a cath lab using a system such as system 100 of FIG. 1.
[0055] During the PCI, processing circuitry 204 may obtain angiography image data (e.g., of angiography data 214) from imager 140, e.g., via network interface 208. Processing circuitry 204 may execute machine learning model(s) 222 and determine one or more of TIMI flow, TFR, TIMI myocardial perfusion grade, MBG, or the like. Processing circuitry 204 executing machine learning model(s) 222 may quantify these indices more objectively, accurately, and precisely than if a clinician were to assess the information in the angiography image data by eye. In some examples, processing circuitry 204 may also process ECG data, including any ECG data captured of the patient prior to the STEMI, as well as ECG data captured after the STEMI (e.g., ECG data captured during the PCI).
[0056] The following data may also be obtained by processing circuitry 204, either automatically (e.g., determined by processing circuitry 204 or retrieved from another device (imager 140, additional equipment 170, etc.) by processing circuitry 204) or otherwise input to computing device 150, such as manually by a clinician: an identification of an infarct vessel; an area at risk; a time to balloon (e.g., door to balloon time) for STEMI; and/or an aortic pressure. For example, processing circuitry 204 may determine an area at risk based on angiography images, e.g., from both the left side and right side of the heart of the patient pre- and post-PCI.
[0057] Processing circuitry 204 executing machine learning model(s) 222 may determine a likelihood of the presence (or lack) of MVO. Processing circuitry 204 executing machine learning model(s) 222 may also determine a likely extent of the MVO. The extent of MVO may take a form of an MVO score. The MVO score may include a percentage, a numerical score (e.g., on a scale of 0-3, 1-10, or the like), a descriptive assessment of the MVO, or other technique used to stratify different extents of MVO.
[0058] In some examples, processing circuitry 204 executing machine learning model(s) 222 may also determine prediction or probability metrics for MVO deterioration and heart failure in the patient. In some examples, processing circuitry 204 executing machine learning model(s) 222 may determine a location associated with the MVO. Processing circuitry 204 may control display 206 and/or display device 110 to overlay an indication of the MVO or MVO score on the location associated with the MVO as represented on live angiography images displayed on display 206 and/or display device 110. In this manner, processing circuitry 204 may highlight an at-risk area for the clinician.
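As a minimal illustration of overlaying an MVO score on a displayed angiography frame, the following matplotlib sketch draws a highlighted box and the score at an assumed at-risk location; the coordinates, colors, and score value are placeholders rather than output of any actual model:

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.patches import Rectangle

    frame = np.random.default_rng(0).uniform(80, 200, size=(512, 512))  # stand-in angiography frame
    mvo_score = 0.72                  # hypothetical model output
    x, y, w, h = 300, 180, 90, 70     # hypothetical at-risk region in pixel coordinates

    fig, ax = plt.subplots()
    ax.imshow(frame, cmap="gray")
    ax.add_patch(Rectangle((x, y), w, h, fill=False, edgecolor="red", linewidth=2))
    ax.text(x, y - 8, f"MVO score: {mvo_score:.2f}", color="red", fontsize=10)
    ax.set_axis_off()
    plt.show()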
[0059] In some examples, processing circuitry 204 may simulate a treatment of the patient for the MVO and control display 206 and/or display device 110 to display the results of the simulation. In some examples, the results of the simulation may include simulated MVO score(s), other simulated measures, such as TIMI flow data, TIMI epicardial flow grade data, TFR data, TMP data, MBG data, FFR data, CFR data, IMR data, CMR data, ECG data, hemodynamics data, and/or CTA data, and/or a visual indication of the anatomy of the patient based on the simulation.
[0060] In some examples, processing circuitry 204 may simulate non-treatment of the patient for the MVO and control display 206 and/or display device 110 to display the results of the simulation. In some examples, the results of the simulation may include simulated MVO score(s), other simulated measures, such as TIMI flow data, TIMI epicardial flow grade data, TFR data, TMP data, MBG data, FFR data, CFR data, IMR data, CMR data, ECG data, hemodynamics data, and/or CTA data, and/or a visual indication of the anatomy of the patient based on the simulation. For example, such a simulation may be based on the determination of the prediction or probability metrics for MVO deterioration and heart failure in the patient.
[0061] In some examples, processing circuitry 204 may control display 206 and/or display device 110 to display any combination of the results of the simulation that includes treatment, the results of the simulation that does not include treatment, and/or the live angiography data with or without any overlaid information.
[0062] As more patients undergo medical procedures using the techniques of this disclosure and/or more patients receive treatment for MVO, subsequent outcome data may be used to further train machine learning model(s) 222 to provide a recommendation for treatment(s) or rank potential treatment(s) based on the input data for a current patient. Such techniques may be beneficial to patients because some therapies are costly and/or may increase procedure length. However, if previous outcome data indicates that there is a high probability of a reduction in infarct size for the current patient, then processing circuitry 204 executing machine learning model(s) 222 may recommend such a treatment and thereby make the decision to employ the treatment easier and more justifiable for the clinician.
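The following sketch shows one simple way outcome data could be used to support a treatment recommendation: a logistic regression estimating, from the kinds of inputs discussed above, the probability that a treatment would meaningfully reduce infarct size. The features, label definition, and decision threshold are all illustrative assumptions, not the recommendation logic of the disclosure:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    # Hypothetical historical cases: [MVO score, area at risk (%), door-to-balloon time (min)]
    X_hist = rng.uniform([0, 5, 30], [1, 50, 240], size=(300, 3))
    # Hypothetical outcome label: 1 if the treatment reduced infarct size, else 0.
    y_hist = (X_hist[:, 0] > 0.5).astype(int)

    outcome_model = LogisticRegression().fit(X_hist, y_hist)

    def recommend_treatment(mvo_score, area_at_risk, door_to_balloon, threshold=0.6):
        """Return a recommendation string based on predicted benefit probability."""
        p = outcome_model.predict_proba([[mvo_score, area_at_risk, door_to_balloon]])[0, 1]
        return ("recommend treatment" if p >= threshold else "treatment benefit uncertain", p)

    print(recommend_treatment(mvo_score=0.72, area_at_risk=28.0, door_to_balloon=95.0))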
[0063] Processing circuitry 204 may be implemented by one or more processors, which may include any number of fixed-function circuits, programmable circuits, or a combination thereof. In various examples, control of any function by processing circuitry 204 may be implemented directly or in conjunction with any suitable electronic circuitry appropriate for the specified function. Fixed-function circuits refer to circuits that provide particular functionality and are preset on the operations that may be performed. Programmable circuits refer to circuits that may be programmed to perform various tasks and provide flexible functionality in the operations that may be performed. For instance, programmable circuits may execute software or firmware that cause the programmable circuits to operate in the manner defined by instructions of the software or firmware. Fixed-function circuits may execute software instructions (e.g., to receive parameters or output parameters), but the types of operations that the fixed-function circuits perform are generally immutable. In some examples, one or more of the units may be distinct circuit blocks (fixed-function or programmable), and in some examples, the one or more units may be integrated circuits.
[0064] Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), graphics processing units (GPUs) or other equivalent integrated or discrete logic circuitry. Accordingly, the term processing circuitry 204 as used herein may refer to one or more processors having any of the foregoing processor or processing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques could be fully implemented in one or more circuits or logic elements.
[0065] Display 206 may be touch sensitive or voice activated, enabling display 206 to serve as both an input and output device. Alternatively, a keyboard (not shown), mouse (not shown), or other data input devices (e.g., input device(s) 210) may be employed.
[0066] Network interface 208 may be adapted to connect to a network such as a local area network (LAN) that includes a wired network or a wireless network, a wide area network (WAN), a wireless mobile network, a Bluetooth network, or the internet. For example, computing device 150 may obtain angiography data 214 from imager 140 during a medical procedure and/or non-angiography data 216 from imager 140 and/or additional equipment 170 during a medical procedure. Computing device 150 may receive updates to its software, for example, application(s) 217, via network interface 208. Computing device 150 may also display notifications on display 206 that a software update is available.
[0067] Input device(s) 210 may include any device that enables a user to interact with computing device 150, such as, for example, a mouse, keyboard, foot pedal, touch screen, augmented-reality input device receiving inputs such as hand gestures or body movements, or voice interface.
[0068] Output device(s) 212 may include any connectivity port or bus, such as, for example, parallel ports, serial ports, universal serial busses (USB), or any other similar connectivity port known to those skilled in the art.
[0069] Application(s) 217 may be one or more software programs stored in memory 202 and executed by processing circuitry 204 of computing device 150. Processing circuitry 204 may execute user interface 218, which may display angiography data 214, 3D model 228, and/or MVO data 226 on display 206 and/or display device 110. A clinician may use the displayed data, for example, when navigating a medical instrument through anatomy of the patient and when determining whether and how to treat MVO.
[0070] FIG. 3 is a flow diagram of example techniques for determining MVO according to one or more aspects of this disclosure. The techniques of FIG. 3 are described below with respect to processing circuitry 204, but such techniques may be performed by any of, or any combination of, processing circuitry of devices depicted in FIG. 1 or capable of performing such techniques.
[0071] Processing circuitry 204 may obtain angiography data 214 (300). For example, processing circuitry 204 may obtain angiography images from imager 140 via network interface 208.
[0072] Processing circuitry 204 may execute at least one machine learning model (e.g., of machine learning model(s) 222) to determine, based at least in part on angiography data 214, an indication of MVO (e.g., of MVO data 226), the indication of MVO being indicative of at least one of a likelihood of the patient having MVO or a severity of MVO (302). For example, processing circuitry 204 may execute machine learning model(s) 222 to determine an MVO score based, at least in part, on one or more of TIMI flow, TIMI epicardial flow grade, TFR, TMP, or MBG of angiography data 214. In some examples, processing circuitry 204 may determine the one or more of TIMI flow, TIMI epicardial flow grade, TFR, TMP, or MBG from angiography images of angiography data 214. In some examples, imager 140 may determine one or more of TIMI flow, TIMI epicardial flow grade, TFR, TMP, or MBG from captured angiography images and processing circuitry 204 may obtain the one or more of TIMI flow, TIMI epicardial flow grade, TFR, TMP, or MBG from imager 140.
[0073] Processing circuitry 204 may output the indication of MVO (304). For example, processing circuitry 204 may control display 206 and/or display device 110 to display the indication of MVO for a clinician to view.
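Read as pseudocode, the flow of FIG. 3 (obtain angiography data (300), execute the model (302), output the indication (304)) might be glued together as in the sketch below. The helper names are hypothetical placeholders standing in for imager access, model inference, and display output; none of them are part of an actual API described in this disclosure:

    def determine_and_output_mvo(imager, mvo_model, display):
        """Hypothetical orchestration of steps 300, 302, and 304 of FIG. 3."""
        # (300) Obtain angiography data, e.g., frames streamed from the imager.
        angiography_frames = imager.acquire_frames()        # assumed imager interface

        # (302) Execute the machine learning model to determine the indication of MVO.
        indication = mvo_model.predict(angiography_frames)  # e.g., {"score": 0.72, "location": ...}

        # (304) Output the indication of MVO, e.g., for display to the clinician.
        display.show_overlay(indication)                    # assumed display interface
        return indication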
[0074] In some examples, as part of determining the indication of MVO, processing circuitry 204 is configured to determine, based on angiography data 214, at least one of TIMI flow, TIMI epicardial flow grade, TFR, TMP, or MBG. In some examples, processing circuitry 204 is configured to determine the indication of MVO further based on non-angiography data 216. In some examples, the non-angiography data 216 includes at least one of FFR data, CFR data, IMR data, CMR data, ECG data, hemodynamics data, or CTA data. For example, processing circuitry 204 may execute machine learning model(s) 222, which may be trained on previously captured angiography data 214 and/or previously captured non-angiography data 216 (e.g., training data). Processing circuitry 204 executing machine learning model(s) 222 may analyze angiography data 214 and/or non-angiography data 216 captured during a current medical procedure (e.g., input data), and, in this manner, determine the indication of MVO based on the training data and the input data.
[0075] In some examples, the indication of MVO includes at least one of an MVO score, a probability of MVO deterioration, or a probability of heart failure. In some examples, the indication of MVO includes the MVO score and processing circuitry 204 is further configured to determine, based on the MVO score, a recommended treatment (e.g., of MVO data 226) and output an indication of the recommended treatment (e.g., to display 206 or display device 110). In some examples, processing circuitry 204 is further configured to determine an MVO location (e.g., of MVO data 226) associated with the indication of MVO and output an overlay indicative of the MVO location and the indication of MVO for display (e.g., on display 206 or display device 110) with angiography images (e.g., of angiography data 214).
[0076] In some examples, at least one of machine learning model(s) 222 is trained on at least one of angiography images, hemodynamics data, or electrocardiogram data. In some examples, at least one of machine learning model(s) 222 is trained on annotated angiography images. In some examples, machine learning model(s) 222 includes a plurality of machine learning models, each of the plurality of machine learning models being associated with a respective angiography angle. In some examples, at least one of machine learning model(s) 222 is trained on angiography data from a plurality of angiography angles.
[0077] In some examples, processing circuitry 204 is further configured to generate 3D model 228 based on angiography data 214. In some examples, as part of executing the at least one machine learning model(s) 222 to determine the indication of MVO, processing circuitry 204 is configured to perform at least one of flow or volume modeling using 3D model 228 and determine the indication of MVO based on the at least one of flow or volume modeling. In some examples, processing circuitry 204 is further configured to at least one of output a request for the capture of a plurality of angles of angiography data 214 or control imager 140 to capture the plurality of angles of angiography data 214, wherein angiography data 214 comprises angularity information indicative of a respective angle at which an angiography datum is captured.
[0078] FIG. 4 is a conceptual diagram illustrating an example machine learning model according to one or more aspects of this disclosure. Machine learning model 400 may be an example of machine learning model(s) 222. Machine learning model 400 may be an example of a neural network, such as a convolutional neural network, or other machine learning model, trained to determine an indication of MVO, for example, based on angiography data 214 and/or non-angiography data 216. One or more of computing device 150 and/or server 160 may train, store, and/or utilize machine learning model 400, but other devices of system 100 may apply inputs to machine learning model 400 in some examples. In some examples, various types of machine learning and deep learning models or algorithms may be utilized. For example, a convolutional neural network model, e.g., ResNet-18, may be used. Some non-limiting examples of models that may be used for transfer learning include AlexNet, VGGNet, GoogleNet, ResNet50, or DenseNet, etc. Some non-limiting examples of machine learning techniques include support vector machines, naive Bayes, k-nearest neighbor, multi-layer perceptron, random forest, neural networks, convolutional neural networks, recurrent neural networks, ensemble networks, decision trees, linear regression, logistic regression, long short-term memory, etc.
[0079] As shown in the example of FIG. 4, machine learning model 400 may include three types of layers. These three types of layers include input layer 402, hidden layers 404, and output layer 406. Output layer 406 comprises the output from the transfer function 405 of output layer 406. Input layer 402 represents each of the input values X1 through X4 provided to machine learning model 400. In some examples, the input values may include any of the values input into the machine learning model, as described above. For example, the input values may include angiography data 214 and/or non-angiography data 216, as described above.
[0080] Each of the input values for each node in the input layer 402 is provided to each node of a first layer of hidden layers 404. In the example of FIG. 4, hidden layers 404 include two layers, one layer having four nodes and the other layer having three nodes, but a fewer or greater number of nodes may be used in other examples. Each input from input layer 402 is multiplied by a weight and then summed at each node of hidden layers 404. During training of machine learning model 400, the weights for each input are adjusted to establish a relationship between angiography data 214 and/or non-angiography data 216, and the indication of MVO. In some examples, one hidden layer may be incorporated into machine learning model 400, or three or more hidden layers may be incorporated into machine learning model 400, where each layer includes the same or different number of nodes.
[0081] The result of each node within hidden layers 404 is applied to the transfer function of output layer 406. The transfer function may be linear or non-linear, depending on the number of layers within machine learning model 400. Example non-linear transfer functions may be a sigmoid function or a rectifier function. The output 407 of the transfer function may be a classification that angiography data 214 and/or non-angiography data 216 is indicative of a particular likelihood and/or severity of MVO, a location of the likely MVO, a probability of MVO deterioration, and/or a probability of heart failure.
[0082] As shown in the example above, by applying machine learning model 400 to input data such as angiography data 214 and/or non-angiography data 216, processing circuitry 204 is able to determine a likelihood and/or severity of MVO in a patient. This may improve the ability of a clinician to determine whether or not to treat a patient for MVO and/or how to treat a patient for MVO without requiring a patient to come back for a future medical procedure at a later date, thereby decreasing the risk of further tissue death in the patient.
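For concreteness, the topology described for FIG. 4 (four inputs, a four-node and a three-node hidden layer, and a sigmoid transfer function at the output) can be written as a forward pass in a few lines of numpy. The weights below are random placeholders; in practice they would be learned during training as described above:

    import numpy as np

    rng = np.random.default_rng(42)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    # Random placeholder weights/biases matching the FIG. 4 topology: 4 -> 4 -> 3 -> 1.
    W1, b1 = rng.normal(size=(4, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 3)), np.zeros(3)
    W3, b3 = rng.normal(size=(3, 1)), np.zeros(1)

    def forward(x):
        """Forward pass: weighted sums at each hidden node, sigmoid transfer at the output."""
        h1 = np.maximum(0.0, x @ W1 + b1)   # first hidden layer (4 nodes), rectifier transfer
        h2 = np.maximum(0.0, h1 @ W2 + b2)  # second hidden layer (3 nodes)
        return sigmoid(h2 @ W3 + b3)        # output 407: e.g., likelihood-of-MVO style value

    x = np.array([2.0, 35.0, 1.0, 60.0])    # placeholder inputs X1..X4 (e.g., normalized indices)
    print(forward(x))                       # single value in (0, 1)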
[0083] FIG. 5 is a conceptual diagram illustrating an example training process for a machine learning model according to one or more aspects of this disclosure. Process 570 may be used to train machine learning model(s) 222 or machine learning model 400. A machine learning model 574 (which may be an example of machine learning model 400 and/or machine learning model(s) 222) may be implemented using any number of models for supervised and/or reinforcement learning, such as but not limited to, an artificial neural network, convolutional neural network, recurrent neural network, a decision tree, naive Bayes network, support vector machine, k-nearest neighbor model, ensemble network, to name only a few examples.
[0084] In some examples, one or more of computing device 150 and/or server 160 initially trains machine learning model 574 based on a corpus of training data 572. Training data 572 may include, for example, angiography images, such as fluoroscopy with contrast images, hemodynamic readings, and/or ECG data. In some examples, training data 572 may include annotations scoring indices such as TIMI, TFR, TMP, MBG, and/or ECG.
[0085] In some examples, training data 572 may include angular metadata associated with particular angiography images, indicative of the angle at which the angiography images were captured.
[0086] In some examples, with the treatment of patients for MVO, training data 572 may include treatment and outcome data so as to train machine learning model 574 to recommend medical treatment be provided to the patient or a type of medical treatment to be provided to the patient.
[0087] While training machine learning model 574, processing circuitry of system 100 may compare 576 a prediction or classification with a target output 578. Processing circuitry 204 may utilize an error signal from the comparison to train (learning/training 580) machine learning model 574. Processing circuitry 204 may generate machine learning model weights or other modifications which processing circuitry 204 may use to modify machine learning model 574. For example, processing circuitry 204 may modify the weights of machine learning model 574 based on the learning/training 580. For example, one or more of computing device 150 and/or server 160 may, for each training instance in training data 572, modify, based on training data 572, the manner in which the indication of MVO, the location of MVO, the probability of MVO deterioration, the probability of heart failure, and/or the recommendation of treatment is determined.
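The compare/learn cycle of FIG. 5 (compare 576 a prediction with target output 578, then adjust weights in learning/training 580) corresponds to an ordinary gradient-descent loop. The sketch below does this for a single-layer logistic model on placeholder data; it is a schematic of that loop under those assumptions, not the training procedure actually used for machine learning model 574:

    import numpy as np

    rng = np.random.default_rng(7)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    X = rng.normal(size=(100, 4))                          # placeholder training inputs (e.g., indices)
    target = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # placeholder target output 578
    w, b, lr = np.zeros(4), 0.0, 0.1

    for epoch in range(200):
        prediction = sigmoid(X @ w + b)   # model output, compared against the target (576)
        error = prediction - target       # error signal from the comparison
        w -= lr * X.T @ error / len(X)    # learning/training (580): adjust weights from the error
        b -= lr * error.mean()

    accuracy = ((sigmoid(X @ w + b) > 0.5) == target.astype(bool)).mean()
    print(f"training accuracy after weight updates: {accuracy:.2f}")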
[0088] FIG. 6 is a conceptual diagram illustrating another example training process for a machine learning model according to one or more aspects of this disclosure. Process 600 may be used to train machine learning model(s) 222 or machine learning model 400. A machine learning model 608 (which may be an example of machine learning model 400 and/or machine learning model(s) 222) may include a neural network or other type of machine learning model, such as those mentioned in this disclosure.
[0089] Training data 602 may include patient data. In some examples, training data 602 includes angiography data, such as angiography data from pre-PCI and/or post-PCI procedures. Such angiography data may include TFR, TMP, and/or MBG data. Training data 602 may also include ECG data, for example, from pre-PCI and/or post-PCI procedures. Training data 602 may also include CMR data. In some examples, this CMR data may be captured outside of the cath lab environment, such as around 3 days after a STEMI.
[0090] The input training data 602 may be pre-processed 604, for example, by a neural network. During this pre-processing 604, the machine learning model may determine FFR, CFR, IMR, TIMI frame count, TMP, and/or MBG. ECG data and/or CMR data may be used to confirm MVO presence, severity, and/or location.
[0091] The pre-processed data may be modeled 606 to generate machine learning model 608. For example, modeling 606 may generate a transfer function (Fx) that may be applied to input data to make a prediction regarding the indication of MVO. For example, modeling 606 may select the input data that best predicts or estimates the indication of MVO, the location of MVO, the probability of MVO deterioration, the probability of heart failure, and/or the recommendation of treatment. It should be noted that output of machine learning model 608 may be used for further modeling 606. Machine learning model 608 may thereby be trained to estimate a likelihood and/or severity of MVO, the location of MVO, the probability of MVO deterioration, the probability of heart failure, and/or the recommendation of treatment for MVO for a particular patient.
[0092] The techniques described in this disclosure may be implemented, at least in part, in hardware, software, firmware or any combination thereof. For example, various aspects of the described techniques may be implemented within one or more processors or processing circuitry, including one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or any other equivalent integrated or discrete logic circuitry, as well as any combinations of such components. The terms “controller”, “processor”, or “processing circuitry” may generally refer to any of the foregoing logic circuitry, alone or in combination with other logic circuitry, or any other equivalent circuitry. A control unit comprising hardware may also perform one or more of the techniques of this disclosure. Such hardware, software, and firmware may be implemented within the same device or within separate devices to support the various operations and functions described in this disclosure. In addition, any of the described units, circuits or components may be implemented together or separately as discrete but interoperable logic devices. Depiction of different features as circuits or units is intended to highlight different functional aspects and does not necessarily imply that such circuits or units must be realized by separate hardware or software components. Rather, functionality associated with one or more circuits or units may be performed by separate hardware or software components or integrated within common or separate hardware or software components.
[0093] The techniques described in this disclosure may also be embodied or encoded in a computer-readable medium, such as a computer-readable storage medium, containing instructions. Instructions embedded or encoded in a computer-readable storage medium may cause a programmable processor, or other processor, to perform the method, e.g., when the instructions are executed. Computer readable storage media may include random access memory (RAM), read only memory (ROM), programmable read only memory (PROM), erasable programmable read only memory (EPROM), or electronically erasable programmable read only memory (EEPROM), or other computer readable media.
[0094] This disclosure includes the following non-limiting examples.
[0095] Example 1. A medical system comprising: memory configured to store angiography data of a patient; and processing circuitry communicatively coupled to the memory, the processing circuitry being configured to: obtain the angiography data; execute at least one machine learning model to determine, based at least in part on the angiography data, an indication of microvascular obstruction (MVO), the indication of MVO being indicative of at least one of a likelihood of the patient having MVO or a severity of MVO; and output the indication of MVO.
[0096] Example 2. The medical system of example 1, wherein as part of determining the indication of MVO, the processing circuitry is configured to determine, based on the angiography data, at least one of thrombolysis in myocardial infarction (TIMI) flow and TIMI epicardial flow grade, TIMI frame count (TFR), TIMI myocardial perfusion grade (TMP), or myocardial blush grade (MBG).
[0097] Example 3. The medical system of example 1 or example 2, wherein the processing circuitry is configured to determine the indication of MVO further based on non-angiography data, wherein the non-angiography data comprises at least one of fractional flow reserve (FFR) data, coronary flow reserve (CFR) data, index of microvascular resistance (IMR) data, cardiac magnetic resonance imaging (CMR) data, electrocardiogram (ECG) data, hemodynamics data, or computed tomography angiography (CTA) data.
[0098] Example 4. The medical system of any of examples 1-3, wherein the indication of MVO comprises at least one of an MVO score, a probability of MVO deterioration, or a probability of heart failure.
[0099] Example 5. The medical system of example 4, wherein the indication of MVO comprises the MVO score, and wherein the processing circuitry is further configured to: determine, based on the MVO score, a recommended treatment; and output an indication of the recommended treatment.
[0100] Example 6. The medical system of any of examples 1-5, wherein the processing circuitry is further configured to: determine an MVO location associated with the indication of MVO; and output an overlay indicative of the MVO location and the indication of MVO for display with the angiography images.
[0101] Example 7. The medical system of any of examples 1-6, wherein the at least one machine learning model is trained on at least one of angiography images, hemodynamics data, or electrocardiogram data.
[0102] Example 8. The medical system of example 7, wherein the at least one machine learning model is trained on annotated angiography images.
[0103] Example 9. The medical system of any of examples 1-8, wherein the at least one machine learning model comprises a plurality of machine learning models, each of the plurality of machine learning models being associated with a respective angiography angle.

[0104] Example 10. The medical system of any of examples 1-9, wherein the at least one machine learning model is trained on angiography data from a plurality of angiography angles.
[0105] Example 11. The medical system of any of examples 1-10, wherein the processing circuitry is further configured to: generate a 3D model based on the angiography data.
[0106] Example 12. The medical system of example 11, wherein as part of executing the at least one machine learning model to determine the indication of MVO, the processing circuitry is configured to: perform at least one of flow or volume modeling using the 3D model; and determine the indication of MVO based on the at least one of flow or volume modeling.
[0107] Example 13. The medical system of any of examples 1-12, wherein the processing circuitry is further configured to at least one of output a request for the capture of a plurality of angles of the angiography data or control an imager to capture the plurality of angles of the angiography data, wherein the angiography data comprises angularity information indicative of a respective angle at which an angiography datum is captured.
[0108] Example 14. A method comprising: obtaining, by processing circuitry, angiography data; executing, by the processing circuitry, at least one machine learning model to determine, based at least in part on the angiography data, an indication of microvascular obstruction (MVO), the indication of MVO being indicative of at least one of a likelihood of the patient having MVO or a severity of MVO; and outputting, by the processing circuitry, the indication of MVO.
[0109] Example 15. The method of example 14, wherein determining the indication of MVO comprises determining, based on the angiography data, at least one of thrombolysis in myocardial infarction (TIMI) flow and TIMI epicardial flow grade, TIMI frame count (TFR), TIMI myocardial perfusion grade (TMP), or myocardial blush grade (MBG).
[0110] Example 16. The method of example 14 or example 15, wherein determining the indication of MVO is further based on non-angiography data, wherein the non-angiography data comprises at least one of fractional flow reserve (FFR) data, coronary flow reserve (CFR) data, index of microvascular resistance (IMR) data, cardiac magnetic resonance imaging (CMR) data, electrocardiogram (ECG) data, hemodynamics data, or computed tomography angiography (CTA) data.

[0111] Example 17. The method of any of examples 14-16, wherein the indication of MVO comprises at least one of an MVO score, a probability of MVO deterioration, or a probability of heart failure.
[0112] Example 18. The method of example 17, wherein the indication of MVO comprises the MVO score, and wherein the method further comprises: determining, by the processing circuitry and based on the MVO score, a recommended treatment; and outputting, by the processing circuitry, an indication of the recommended treatment.
[0113] Example 19. The method of any of examples 14-18, further comprising: determining, by the processing circuitry, an MVO location associated with the indication of MVO; and outputting, by the processing circuitry, an overlay indicative of the MVO location and the indication of MVO for display with the angiography images.
[0114] Example 20. A non-transitory computer-readable storage medium storing instructions, which when executed cause processing circuitry to: obtain angiography data of a patient; execute at least one machine learning model to determine, based at least in part on the angiography data, an indication of microvascular obstruction (MVO), the indication of MVO being indicative of at least one of a likelihood of the patient having MVO or a severity of MVO; and output the indication of MVO.

Claims

What is claimed is:
1. A medical system comprising: memory configured to store angiography data of a patient; and processing circuitry communicatively coupled to the memory, the processing circuitry being configured to: obtain the angiography data; execute at least one machine learning model to determine, based at least in part on the angiography data, an indication of microvascular obstruction (MVO), the indication of MVO being indicative of at least one of a likelihood of the patient having MVO or a severity of MVO; and output the indication of MVO.
2. The medical system of claim 1, wherein as part of determining the indication of MVO, the processing circuitry is configured to determine, based on the angiography data, at least one of thrombolysis in myocardial infarction (TIMI) flow and TIMI epicardial flow grade, TIMI frame count (TFR), TIMI myocardial perfusion grade (TMP), or myocardial blush grade (MBG).
3. The medical system of claim 1 or claim 2, wherein the processing circuitry is configured to determine the indication of MVO further based on non-angiography data, wherein the non-angiography data comprises at least one of fractional flow reserve (FFR) data, coronary flow reserve (CFR) data, index of microvascular resistance (IMR) data, cardiac magnetic resonance imaging (CMR) data, electrocardiogram (ECG) data, hemodynamics data, or computed tomography angiography (CTA) data.
4. The medical system of any of claims 1-3, wherein the indication of MVO comprises at least one of an MVO score, a probability of MVO deterioration, or a probability of heart failure.
5. The medical system of claim 4, wherein the indication of MVO comprises the MVO score, and wherein the processing circuitry is further configured to: determine, based on the MVO score, a recommended treatment; and output an indication of the recommended treatment.
6. The medical system of any of claims 1-5, wherein the processing circuitry is further configured to: determine an MVO location associated with the indication of MVO; and output an overlay indicative of the MVO location and the indication of MVO for display with the angiography images.
7. The medical system of any of claims 1-6, wherein the at least one machine learning model is trained on at least one of angiography images, hemodynamics data, or electrocardiogram data.
8. The medical system of claim 7, wherein the at least one machine learning model is trained on annotated angiography images.
9. The medical system of any of claims 1-8, wherein the at least one machine learning model comprises a plurality of machine learning models, each of the plurality of machine learning models being associated with a respective angiography angle.
10. The medical system of any of claims 1-9, wherein the at least one machine learning model is trained on angiography data from a plurality of angiography angles.
11. The medical system of any of claims 1-10, wherein the processing circuitry is further configured to: generate a 3D model based on the angiography data.
12. The medical system of claim 11, wherein as part of executing the at least one machine learning model to determine the indication of MVO, the processing circuitry is configured to: perform at least one of flow or volume modeling using the 3D model; and determine the indication of MVO based on the at least one of flow or volume modeling.
13. The medical system of any of claims 1-12, wherein the processing circuitry is further configured to at least one of output a request for the capture of a plurality of angles of the angiography data or control an imager to capture the plurality of angles of the angiography data, wherein the angiography data comprises angularity information indicative of a respective angle at which an angiography datum is captured.
14. A method comprising: obtaining, by processing circuitry, angiography data; executing, by the processing circuitry, at least one machine learning model to determine, based at least in part on the angiography data, an indication of microvascular obstruction (MVO), the indication of MVO being indicative of at least one of a likelihood of the patient having MVO or a severity of MVO; and outputting, by the processing circuitry, the indication of MVO.
15. A non-transitory computer-readable storage medium storing instructions, which when executed cause processing circuitry to: obtain angiography data of a patient; execute at least one machine learning model to determine, based at least in part on the angiography data, an indication of microvascular obstruction (MVO), the indication of MVO being indicative of at least one of a likelihood of the patient having MVO or a severity of MVO; and output the indication of MVO.
PCT/US2024/043041 (priority date 2023-08-22, filed 2024-08-20): Angiographically derived microvascular obstruction determination, WO2025042892A1 (en)

Applications Claiming Priority (1)

Application Number: US63/578,020; Priority Date: 2023-08-22

Publications (1)

Publication Number: WO2025042892A1; Publication Date: 2025-02-27

Similar Documents

Publication Number and Title
US11854704B2 (en) Systems and methods for anatomical modeling using information obtained from a medical procedure
US11660143B2 (en) Systems and methods for diagnosis and assessment of cardiovascular disease by comparing arterial supply capacity to end-organ demand
US11826175B2 (en) Machine-based risk prediction for peri-procedural myocardial infarction or complication from medical data
US20210338333A1 (en) Systems and methods for treatment planning based on plaque progression and regression curves
US11389130B2 (en) System and methods for fast computation of computed tomography based fractional flow reserve
JP6626498B2 (en) System and method for determining blood flow characteristics and pathology via modeling myocardial blood supply
US10262101B2 (en) Systems and methods for predicting perfusion deficits from physiological, anatomical, and patient characteristics
US10758125B2 (en) Enhanced personalized evaluation of coronary artery disease using an integration of multiple medical imaging techniques
US20200226749A1 (en) Inflammation estimation from x-ray image data
US11694330B2 (en) Medical image processing apparatus, system, and method
CN110444275B (en) System and method for rapid calculation of fractional flow reserve
CN112446499A (en) Improving performance of machine learning models for automated quantification of coronary artery disease
WO2021041074A1 (en) Methods and systems for computer-aided diagnosis with deep learning models
WO2023239743A1 (en) Use of cath lab images for procedure and device evaluation
WO2025042892A1 (en) Angiographically derived microvascular obstruction determination
US20230386113A1 (en) Medical image processing apparatus and medical image processing method
WO2024233090A1 (en) Identification of arterial disease patients for follow-up
WO2024259118A1 (en) True lumen detection and navigation using imaging
CN119072758A (en) Methods and systems for predicting coronary artery disease based on echocardiography
WO2023239741A1 (en) Use of cath lab images for treatment planning
EP4505478A1 (en) Use of cath lab images for physician training and communication
WO2023239742A1 (en) Use of cath lab images for prediction and control of contrast usage
JP2024530417A (en) Modeling the Subject's Heart