CN112567378A - Method and system for utilizing quantitative imaging - Google Patents

Method and system for utilizing quantitative imaging

Info

Publication number
CN112567378A
CN112567378A (application CN201980049912.9A)
Authority
CN
China
Prior art keywords
data
pathology
analyte
imaging
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201980049912.9A
Other languages
Chinese (zh)
Inventor
Mark A. Buckler
David S. Paik
Vladimir Valtchinov
Andrew J. Buckler
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Elucid Bioimaging Inc.
Original Assignee
Elucid Bioimaging Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Elucid Bioimaging Inc.
Publication of CN112567378A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/217 Validation; Performance evaluation; Active pattern learning techniques
    • G06F18/2193 Validation; Performance evaluation; Active pattern learning techniques based on specific statistical tests
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133 Distances to prototypes
    • G06F18/24137 Distances to cluster centroïds
    • G06F18/2414 Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10116 X-ray image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30096 Tumor; Lesion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30101 Blood vessel; Artery; Vein; Vascular
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03 Recognition of patterns in medical or anatomical images
    • G06V2201/032 Recognition of patterns in medical or anatomical images of protuberances, polyps, nodules, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Biology (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Databases & Information Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Radiology & Medical Imaging (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Investigating Or Analysing Biological Materials (AREA)
  • Image Processing (AREA)

Abstract

Presented herein are systems and methods for analyzing pathology using quantitative imaging. Advantageously, the systems and methods of the present disclosure utilize a hierarchical analysis framework that identifies and quantifies biological properties/analytes from imaging data, and then identifies and characterizes one or more pathologies based on the quantified biological properties/analytes. This hierarchical approach of using imaging to examine basic biology as an intermediary for assessing pathology offers a number of analysis and processing advantages over systems and methods configured to determine and characterize pathology directly from basic imaging data.

Description

Method and system for utilizing quantitative imaging
Background
The present disclosure relates to quantitative imaging and analysis. More particularly, the present disclosure relates to systems and methods for analyzing pathology using quantitative imaging.
Imaging, particularly with safe and non-invasive methods, is among the most powerful tools for locating the origin of disease, capturing its detailed pathology, guiding therapy, and monitoring progression. Imaging is also an extremely valuable and low-cost means of mitigating the human and financial costs of disease by enabling appropriate early intervention that is both inexpensive and non-destructive.
Enhanced imaging techniques have made medical imaging an essential component of patient care. Imaging is particularly valuable because it provides anatomical and functional information localized in space and time using non-invasive or minimally invasive methods. However, there is a need for techniques that effectively utilize the increased temporal and spatial resolution, both to take advantage of patterns or features in the data that are not readily accessible to evaluation by the human eye and to manage the resulting large volumes of data so that they can be integrated effectively into the clinical workflow. Without assistance, clinicians have neither the time nor the ability to extract the available information content efficiently, and in any event generally interpret the information subjectively and qualitatively. Integrating quantitative imaging into individual patient management and into clinical trials for therapy development requires a new class of decision-support informatics tools to enable the medical community to leverage the capabilities of evolving and growing imaging modalities within the realities of existing workflow and reimbursement constraints.
Quantitative results from imaging methods have the potential to be used as biomarkers in both routine clinical care and clinical trials, for example in accordance with the widely accepted definition of a biomarker from the NIH Consensus Conference. In clinical practice, quantitative imaging is intended to (a) detect and characterize disease before, during or after a course of therapy, and (b) predict the course of disease with or without therapy. In clinical studies, imaging biomarkers may be used to define the endpoints of clinical trials.
Quantification builds on developments in imaging physics that have improved spatial, temporal and contrast resolution, as well as the ability to excite tissue with multiple energies/sequences to produce distinct tissue-specific responses. These improvements allow tissue differentiation and functional assessment, and are evident, for example, in Computed Tomography (CT), Dual Energy Computed Tomography (DECT), spectral computed tomography (spectral CT), Computed Tomography Angiography (CTA), Cardiac Computed Tomography Angiography (CCTA), Magnetic Resonance Imaging (MRI), multi-contrast magnetic resonance imaging (multi-contrast MRI), Ultrasound (US), and targeted or conventional contrast-agent approaches with various imaging modalities. Quantitative imaging measures specific biological characteristics that indicate the effectiveness of one treatment relative to another, how effective the current treatment is, or what risk the patient faces if left untreated. As a measurement device, a scanner combined with image processing of the formed images has the ability to measure tissue properties, and how different tissues respond, based on physical principles relevant to a given imaging method. Although the image formation process varies widely across modalities, some generalizations help frame the overall assessment; nevertheless, exceptions, nuances and subtleties drive the real conclusions, and some of the greatest opportunities are missed unless they are taken into account.
Imaging in the early stages of clinical testing of novel therapeutics helps to understand the underlying biological pathways and pharmacological effects. It can also reduce the cost and time required to develop new drugs and therapeutics. In later stages of development, imaging biomarkers may serve as important endpoints for clinical benefit and/or serve as a companion diagnostic to aid prescription and/or follow-up for a particular patient condition for personalized therapy. In all stages, imaging biomarkers can be used to select or stratify patients based on disease status in order to better demonstrate therapeutic efficacy.
a. Quantitative medical imaging:
Enhanced imaging techniques have made medical imaging an essential component of patient care. Imaging is particularly valuable because it provides anatomical and functional information localized in space and time using non-invasive or minimally invasive methods. However, there is a need for techniques to handle the increased resolution, in order to take advantage of patterns or features in the data that are not readily accessible to evaluation by the human eye, and to manage the large volumes of data so that they can be integrated effectively into clinical workflows. With newer high-resolution imaging techniques, radiologists would "drown" in the data without assistance. Integrating quantitative imaging into individual patient management will require a new class of decision-support informatics tools to enable the medical community to leverage the capabilities of these new tools within the realities of existing workflows and reimbursement constraints.
Furthermore, quantitative imaging methods are increasingly important for (i) preclinical studies, (ii) clinical studies, (iii) clinical trials, and (iv) clinical practice. Imaging in the early stages of clinical testing of novel therapeutics helps to understand the underlying biological pathways and pharmacological effects. It can also reduce the cost and time required to develop new drugs and therapeutics. Imaging biomarkers can serve as important endpoints of clinical benefit at later stages of development. In all stages, imaging biomarkers can be used to select or stratify patients based on disease status in order to better demonstrate therapeutic efficacy.
Improving patient selection through the use of quantitative imaging reduces the sample size required for a given trial (by increasing the number of evaluable patients and/or reducing the impact of unwanted variables) and helps identify the subpopulations that may benefit most from the proposed treatment. This will reduce the development time and cost of new drugs, but may also result in a corresponding reduction in the size of the "target" population.
Disease is not simple: although it usually manifests locally, it is often systemic. Multifactorial assessment of objectively validated tissue properties, presented as panels or "spectra" of continuous indices that may ultimately prove to be surrogates for difficult-to-measure but accepted endpoints, has proven to be an effective approach across medicine, and the same holds here. Computer-aided measurement of lesion and/or organ biology, and quantification of tissue composition, in a first- or second-reader paradigm enables a cross-disciplinary convergence between next-generation computational methods for personalized diagnosis based on quantitative imaging assessment of phenotype, implemented in an architecture that actively optimizes interoperability with modern clinical IT systems. This gives clinicians strength in managing their patients across the continuum of disease severity, for improved triage across surgical, medical and surveillance pathways. More timely and accurate assessment will yield improved outcomes and more efficient use of healthcare resources, with gains far exceeding the cost of the tools, at a level of granularity and sophistication closer to the complexity of the disease itself, rather than insisting on the assumption that it can be reduced to a level that hides the underlying biology.
b. Phenotyping:
Radiological imaging of medical conditions is generally interpreted subjectively and qualitatively. The term phenotype is used in the medical literature to denote the set of observable characteristics of an individual resulting from the interaction of its genotype with the environment. Phenotype generally implies objectivity, i.e., a phenotype may be regarded as a truth rather than a subjective judgment. Radiology is well known for its ability to visualize features, and increasingly these can be validated against objective truth standards (U.S. application serial No. 14/959,732; U.S. application serial No. 15/237,249; U.S. application serial No. 15/874,474; and U.S. application serial No. 16/102,042). Thus, radiological images can be used to determine a phenotype, but sufficient methods for doing so have generally been lacking.
Advantageously, phenotyping has a basis in ground truth and can therefore be evaluated independently and objectively. In addition, phenotyping is an accepted basis within the healthcare system for managing patient treatment decisions; the phenotype is therefore highly clinically relevant. Finally, phenotyping is relevant to the consumer: it enables self-advocacy and serves as an incentive for lifestyle change.
Early identification of phenotypes, based on composite panels of continuous indicators rather than mere detection of individual features, would enable rapid intervention to prevent irreversible damage and death. Solutions are crucial for preempting adverse events entirely, or at least for improving diagnostic accuracy once signs and/or symptoms are experienced. Patients at higher risk can be identified using an efficient workflow solution that automatically measures structure and quantifies tissue composition and/or hemodynamics, and such patients will be treated differently from patients who are not at higher risk. Linking plaque morphological properties to embolic potential would be of great clinical significance.
The imaging phenotype may be correlated with gene expression patterns in association studies. This may have clinical implications, as imaging is often used in clinical practice, providing unprecedented opportunities for improving decision support in personalized therapy at low cost. Correlating specific imaging phenotypes with large-scale genomic, proteomic, transcriptomic, and/or metabolomic analysis has the potential to influence treatment strategies by producing a more definitive and patient-specific prognosis and measurement of response to drugs or other therapies. However, the methods used to date to extract imaging phenotypes are mostly empirical in nature and are based primarily on human observations, albeit expert observations. These methods have embedded human variability and are clearly not scalable to support high throughput analysis.
At the same time, unprecedented pressure has arisen from converging unmet needs: to enable more personalized medicine without increasing cost, and indeed to better control costs and avoid adverse events rather than merely react to them, by working proactively on preventive medicine, comparative outcomes, reimbursement methods and the like, together with technological advances that can deliver such capabilities while simultaneously reducing cost.
In addition to the problem of phenotype classification (classifying unordered classes), there is also the problem of outcome prediction/risk stratification (predicting an ordered level of risk). Both problems have clinical utility, but they draw on different technical characteristics; in particular, one problem is not strictly dependent on the other.
Without limiting generality, examples of clinically relevant phenotypic classifications are provided below:
An exemplary manifestation of the "stable plaque" phenotype of atherosclerosis can be described as follows:
"Healed" or healing disease, with low response to intensified statin regimens
Fewer balloon/stent complications
Stenosis sometimes greater than 50%
Calcification (Ca) sometimes higher and deeper
Minimal or no lipid, hemorrhage and/or ulceration
Smooth appearance
Such plaques typically have a lower rate of adverse events than the "unstable plaque" phenotype, an exemplary manifestation of which can be described as follows:
Active disease, with possibly high response to intensified lipid-lowering and/or anti-inflammatory regimens
Higher rate of balloon/stent complications
Stenosis sometimes lower (< 50%)
Low or diffuse calcification (Ca), indications of Ca approaching the lumen, napkin-ring sign and/or microcalcifications
Sometimes evidence of higher lipid content and a thin cap
Sometimes signs of intraplaque hemorrhage (IPH) and/or ulceration
The rate of adverse events for such plaques has been reported to be 3-fold to 4-fold greater than for the stable phenotype. These two examples can be evaluated at a single patient encounter, but other phenotypes, such as "rapid progression," are determined by comparing the rate of change of characteristics over time; that is, in contrast to phenotypes present statically at a given point in time, they are informed by dynamics, i.e., by how measurements change across timepoints.
c. Machine learning and deep learning techniques:
Deep Learning (DL) methods have been applied very successfully to many difficult Machine Learning (ML) and classification tasks arising from complex real-world problems. Notable recent applications include computer vision (e.g., optical character recognition, facial recognition, satellite image interpretation, etc.), speech recognition, natural language processing, medical image analysis (image segmentation, feature extraction and classification), and biomarker discovery and validation from clinical and molecular data. An attractive feature of these methods is that they can be applied to both unsupervised and supervised learning tasks.
Neural Networks (NN) and deep NN methods, which broadly comprise Convolutional Neural Networks (CNN), Recurrent Convolutional Neural Networks (RCNN), and the like, rest on a sound theoretical foundation and are broadly modeled after principles thought to underlie the higher-level cognitive functions of the human brain. For example, in the neocortex, the brain region associated with many cognitive abilities, sensory signals propagate through a complex hierarchy of local modules that learn to represent observations made over time, a design principle that has led to the general definition and construction of CNNs for, e.g., image classification and feature extraction. However, the more fundamental reason why DL networks and methods outperform frameworks with the same number of fitting parameters but without a deep hierarchical architecture remains, to some extent, an open question.
Conventional ML methods for image feature extraction and image-based learning from raw data have many limitations. Notably, it is often difficult to retain spatial context in ML methods that use feature extraction when the features are at an aggregate level rather than the 3D level of the original image, or, when they are at the 3D level, to the extent that they are not tied to biological realities that can be objectively verified but are only mathematical operators. Raw data sets that do contain spatial context typically lack objective ground-truth labels for the extracted features, and the raw image data itself contains many variations that are "noise" relative to the classification problem at hand. In applications other than imaging, this problem is typically alleviated by very large training sets, such that the network learns to ignore this variation and focus only on the salient information; however, training sets of this scale, and especially data sets annotated with ground truth, are not typically available in medical applications due to cost, logistics and privacy issues. The systems and methods of the present disclosure help overcome these limitations.
d. An example application: cardiovascular measures such as Fractional Flow Reserve (FFR) and plaque stability or high-risk plaque (HRP):
Over the past 30 years, new treatments have revolutionized care and improved outcomes, but cardiovascular disease still imposes a $320 billion annual burden on the U.S. economy. There is a large patient population that could benefit from better characterization of the risk of a major adverse coronary or brain event. The American Heart Association (AHA) applies ASCVD (atherosclerotic cardiovascular disease) risk scores to population projections: 9.4% of all adults (age > 20) have a greater than 20% risk of an adverse event within the next 10 years, and 26% have a risk between 7.5% and 20%. Applying these rates to the population yields 23 million high-risk and 57 million intermediate-risk patients. The 80 million at risk can be compared with the 30 million U.S. patients currently on statin therapy in an attempt to avoid a new or recurrent event, and the 16.5 million diagnosed with CVD. Among statin patients, some will nonetheless develop occlusive disease and acute coronary syndrome (ACS); the vast majority are unaware of their disease progression before the onset of chest pain. Further improvements in outcomes and costs for coronary artery disease will be driven by improved non-invasive diagnostics that identify which patients have progressed on first-line therapy.
Heart health may be significantly affected by degeneration of arteries surrounding the heart. Factors (e.g., tissue characteristics such as angiogenesis, neovascularization, inflammation, calcification, lipid deposition, necrosis, hemorrhage, rigidity, density, stenosis, dilation, remodeling rates, ulcers, blood flow (e.g., blood flow of blood in a channel), pressure (e.g., pressure of blood in a channel or pressure of one tissue pressing against another tissue), cell type (e.g., macrophages), cell arrangement (e.g., cell arrangement of smooth muscle cells) or shear stress (e.g., shear stress of blood in a channel), cap thickness, and/or tortuosity (e.g., entrance and exit angles)) may cause these arteries to reduce their effectiveness in transmitting oxygen-filled blood to surrounding tissues (fig. 35).
Functional testing of the coronary arteries, mainly stress echocardiography and single-photon emission computed tomography myocardial perfusion imaging (SPECT MPI), constitutes the major non-invasive approach currently used to diagnose obstructive coronary artery disease. Over ten million functional tests are performed each year in the United States, with positive results driving 2.6 million visits to catheterization laboratories for invasive angiography to confirm the finding of coronary artery disease.
Another method of assessing perfusion is to determine the ability of the vasculature to deliver oxygen. In particular, reduced capacity can be quantified as fractional flow reserve, or FFR. FFR is not a direct measure of ischemia; rather, it measures the pressure drop across a lesion. Impaired local vasodilation at maximal hyperemia, relative to other segments of the same vessel, can produce significant hemodynamic effects, resulting in abnormal FFR measurements. During physical FFR measurement, infusion of adenosine reduces downstream resistance to allow increased blood flow in a hyperemic state. Physical measurement of FFR requires an invasive procedure involving a physical pressure sensor within the artery. Since this level of invasiveness carries its own risk and inconvenience, there is a need for a method of estimating FFR with high accuracy without physical measurement. The ability to perform this assessment non-invasively would also reduce a significant "treatment bias": stent placement is relatively easy to perform once the patient is in the catheterization laboratory, and many have noted that over-treatment occurs; if flow reserve could be assessed non-invasively, the decision as to whether or not to place a stent could be improved. Likewise, flow reserve also applies to the perfusion of brain tissue (e.g., in association with hyperemia in the brain).
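For reference (a standard clinical convention, not specific to this disclosure), FFR is expressed as a pressure ratio measured at maximal hyperemia,

$$\mathrm{FFR} = \frac{P_{\mathrm{distal}}}{P_{\mathrm{proximal}}},$$

where the pressures are the mean pressures distal and proximal (aortic) to the lesion; values at or below approximately 0.80 are commonly treated as hemodynamically significant.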
Known problems with functional testing are its sensitivity and specificity. It is estimated that 30-50% of patients with cardiovascular disease are misclassified and are over-treated or under-treated, with significant monetary and quality-of-life costs. Functional testing is also expensive, time consuming, and of limited use for patients with pre-obstructive disease. False positives from non-invasive functional tests are an important driver of the overuse of both invasive coronary angiography and percutaneous coronary intervention in stable patients, which are major policy issues in the United States, the United Kingdom and China. Studies estimating the impact of false negatives suggest that, of the 3.8 million MPI tests given each year to U.S. patients with suspected coronary artery disease (CAD), nearly 600,000 will report false-negative results, leading to 13,700 acute coronary events, many of which would be preventable by the introduction of appropriate drug therapy. Another drawback of functional testing is its temporal nature: ischemia is a lagging indicator that follows the anatomical changes brought about by disease progression. Patients at high risk of ACS would be better served if future culprit lesions could be detected and treated with intensive drug therapy before the onset of ischemia.
Coronary Computed Tomography Angiography (CCTA), particularly when used in tandem with quantitative analysis software, is evolving into an ideal testing modality to fill the gap in understanding the degree and rate of progression of coronary artery disease. Over the past 10 years, the installed base of CT scanners in most countries has been upgraded to faster, higher-detector-count machines capable of excellent spatial resolution without requiring heart-rate slowing or extensive breath-holds. The radiation dose has been greatly reduced, to a point equal to or below that of SPECT MPI and invasive angiography.
Recent analyses of data from trials such as SCOT-HEART, PREDICT and PROMISE, among other landmark studies, have shown the value of using CCTA to detect non-obstructive disease, sometimes referred to as high-risk plaque (HRP) or vulnerable plaque, by identifying patients at increased risk of future adverse events. The study designs varied and included nested case-control cohorts comparing CCTA-enrolled patients who went on to have cardiovascular (CV) events with controls having similar risk factors/demographics, comparisons to FFR, and multi-year follow-up of a large "test and treat" study. The recent favorable determination by NICE positioning CCTA as a front-line diagnostic is based on a significant reduction in CV events in the CCTA arm of the SCOT-HEART study, attributed to changes in the initiation of drug treatment following plaque findings on CCTA.
Based on the PROMISE findings indicating where plaque assessment is most needed, important target patient groups are those with stable chest pain and no prior history of CAD presenting with typical or atypical angina symptoms (based on SCOT-HEART data), as well as those with non-obstructive disease (< 70% stenosis) and younger patients (e.g., the 50-65 year-old group). CCTA-based analysis identifying high-risk plaque profiles in patients with non-obstructive CAD may support assignment of the most appropriate high-intensity statin therapy (especially when considering decisions on very expensive new lipid-lowering therapies such as PCSK9 inhibitors or anti-inflammatory drugs such as canakinumab), or of newer antiplatelet agents to reduce the risk of coronary thrombosis in patients at increased risk, and/or longitudinal follow-up for possible intensification or de-escalation of therapy. CCTA is an ideal diagnostic tool because it is non-invasive and requires less radiation than cardiac catheterization.
Pathological literature on the culprit lesions implicated in fatal heart attacks indicates that clinically non-obstructive CAD is more likely to harbor most high-risk plaques than more occlusive plaque, which tends to be more stable. These findings have been confirmed by recent studies that identified culprit lesions in ACS patients undergoing invasive angiography and compared them to the precursor plaques in baseline CCTA. In one cohort receiving clinically indicated CCTA, patients found to have non-obstructive CAD (38% of those tested) still carried a significant medium- to long-term risk of major adverse cardiovascular and/or cerebrovascular events (MACCE). A hazard ratio based on the number of diseased segments, independent of obstruction, was found to be a significant long-term predictor of MACCE in this group. One factor contributing to the predictive value of clinically non-obstructive CAD is that such lesions are more likely to harbor high-risk plaque than more occlusive plaques, which tend to be more stable.
Further confirmation of the potential utility of CCTA in the detection and management of obstructive and pre-obstructive atherosclerotic lesions is seen in several recently published longitudinal studies of the effects of statins and anti-inflammatory drugs, in which plaque regression and remodeling toward a more stable presentation were observed in the treatment arms. This corroborates the themes of earlier intravascular ultrasound (IVUS, sometimes with "virtual histology", VH), near-infrared spectroscopy (NIRS) and optical coherence tomography (OCT) studies exploring disease progression and therapeutic efficacy under various lipid-lowering drug regimens. Recent drug trials provide potential plaque biomarkers for demonstrating the efficacy of new medical therapies. The Integrated Biomarker and Imaging Study-4 (IBIS-4) found progression of calcification, a potentially protective effect of statins. Other studies found that lipid-rich necrotic core (LRNC) was reduced under statin treatment. In these studies, clinical variables used alone discriminated poorly in identifying high-risk plaque characteristics. These studies underscore the importance of complete characterization and assessment of the entire coronary tree, rather than only culprit lesions, to allow more accurate risk stratification when CCTA can be properly analyzed. In meta-analyses, CCTA has good diagnostic accuracy for detecting coronary plaque compared to IVUS, with small differences in the assessment of plaque area and volume and percent area stenosis, and a slight overestimation of lumen area. The rate of change of LRNC and its distance from the lumen have also been found to have high prognostic value. In addition, results of the ROMICAT II trial showed that, for stable patients presenting with acute chest pain but negative initial ECG and troponin, identification of high-risk plaque on CCTA increased the likelihood of ACS independent of significant CAD and clinical risk assessment. Examination by CCTA is well established for the evaluation of coronary atherosclerotic plaque. For patients in whom the necessity of invasive procedures is uncertain, non-invasive prediction of MACCE with CCTA, which gives an overall estimate of disease burden and the risk of future events, would be both beneficial and feasible.
The prevalence of carotid artery disease and that of CAD are closely related. Carotid atherosclerosis has been shown to be an independent predictor of MACCE, even in patients without pre-existing CAD. Such findings suggest a common underlying pathogenesis of the two conditions, which is further supported by the Multi-Ethnic Study of Atherosclerosis (MESA). Atherosclerosis develops through the progression of lesions in the arterial wall, leading to the accumulation of cholesterol-rich lipids and an accompanying inflammatory response. These processes proceed similarly (if not identically) in the coronary, carotid, aortic and peripheral arteries. Certain plaque properties, such as a large atheromatous core with lipid-rich content, a thin cap, outward remodeling, infiltration of the plaque by macrophages and lymphocytes, and thinning of the media, predispose to fragility and rupture.
e. Non-invasive determination of HRP and/or FFR:
Non-invasive assessment of the functional significance of stenosis using CCTA is of clinical and economic interest. The combination of the geometry or anatomy of a lesion or vessel with the properties or composition of the tissue comprising the wall and/or the plaque within it (collectively referred to as plaque morphology) may explain whether a lesion presents as higher- or lower-risk plaque (HRP), and/or the orthogonal consideration of normal versus abnormal flow reserve (FFR). Lesions with a large necrotic core may develop dynamic stenosis when outward remodeling during plaque formation results in greater tissue extension, tissue stiffening, or a smooth muscle layer stretched to the limit of the Glagov phenomenon, after which the lesion progressively encroaches on the lumen itself. Likewise, inflammatory injury and/or oxidative stress may lead to local endothelial dysfunction, manifested as impaired vasodilatory capacity.
If the tissue constituting the plaque is predominantly matrix or "fatty streak" that has not organized into a necrotic core, the plaque will expand sufficiently to keep up with demand. However, if the plaque has a larger necrotic core, it will not expand, and the blood supply will not be able to keep up with demand. Plaque morphology can improve accuracy by evaluating complex factors such as LRNC, calcification, hemorrhage and ulceration against objective ground truth that can be used to validate the underlying information, in a way that other methods cannot because they lack validation of intermediate measurement targets.
But that is not all that plaque can do. Often the plaque actually ruptures, causing a sudden clot which may then cause infarction of heart or brain tissue. Plaque morphology can also identify and quantify these high-risk plaques. For example, plaque morphology can be used to determine how close a necrotic core is to the lumen, a key determinant of infarct risk. Knowing whether a lesion restricts blood flow under stress does not indicate the risk of rupture, and vice versa. Other methods, such as Computational Fluid Dynamics (CFD), can simulate blood-flow restriction but not infarct risk, because they lack objectively validated plaque morphology. The fundamental advantage of plaque morphology is that its accuracy rests on the determination of vascular structure and tissue properties, which in turn allows the phenotype to be determined.
There is a growing body of clinical guidelines for the optimal management of patients with differing assessments of flow reserve. It is well known that obstructive lesions with high-risk characteristics (a large necrotic core and thin cap) predict the greatest likelihood of future events and, importantly, that the converse holds as well.
Methods for determining FFR using CFD, without an accurate assessment of plaque morphology, have been published. But CFD-based flow reserve considers only the lumen or, at most, changes in the luminal surface at different parts of the cardiac cycle. At best, only the luminal surface is considered, which would need to be assessed at both systole and diastole to derive motion vectors (which most available methods do not even do); and in any case nothing is actually measured under stress, because these analyses use computer simulations of what might happen under stress rather than measurements under actual stress, and are based only on the blood channel rather than on the actual wall properties that give rise to vasodilatory capacity. Some methods attempt to simulate forces and apply biomechanical models, but they rely on assumptions rather than validated measurements of the wall tissue; consequently, they cannot anticipate what may happen if stress actually causes a rupture beyond the underlying assumptions. Characterizing the tissue, by contrast, can address these problems. Wall properties, including their effect on the vasodilatory capacity of the vessel through wall distensibility, matter because the composition of the lesion determines flexibility and energy absorption; without them, stable lesions continue to be over-treated and the assessment of MACCE risk remains incomplete. Advantages of using morphology to assess FFR include the fact that morphology is a leading indicator whereas FFR is a lagging one, and that the presence and extent of HRP will better inform the treatment of borderline subjects. Research increasingly shows that FFR can be predicted from morphology, but not the converse, reinforcing the importance of accurate assessment through resolution of the morphology. That is, effectively evaluated morphology not only enables determination of FFR but can also capture the discrete changes in plaque that shift a patient from ischemia toward infarction, or toward HRP.
Disclosure of Invention
Systems and methods are provided herein that utilize a hierarchical analysis framework to identify and quantify biological properties/analytes from imaging data, and then identify and characterize one or more medical conditions based on the quantified biological properties/analytes. In some embodiments, the systems and methods incorporate computer image analysis and data fusion algorithms with patient clinical chemistry and blood biomarker data to provide a multi-factor panel that can be used to distinguish between different subtypes of disease. Thus, the systems and methods of the present disclosure may advantageously implement biological and clinical insights in advanced computational models. These models can then interface with complex image processing by specifying rich ontologies associated with an increasing understanding of pathogenesis, and take the form of a strict definition of what was measured, how it was measured and evaluated, and how it relates to clinically relevant subtypes and stages of disease that can be validated.
Human diseases exhibit strong phenotypic differences that can be understood by applying complex classifiers to extracted features that capture spatial, temporal and spectral results that can be measured by imaging but are difficult to understand without assistance. Conventional computer-aided diagnosis is inferred from image features in a single step only. In contrast, the systems and methods of the present disclosure employ a hierarchical inference scheme that includes intermediate steps of determining spatially resolved image features and temporally resolved kinetics at multiple levels of a biological target component of morphology, composition, and structure, which are then used to derive clinical inferences. Advantageously, the hierarchy inference scheme ensures that the clinical inferences can be understood, validated, and interpreted at each level of the hierarchy.
Thus, in example embodiments, the systems and methods of the present disclosure utilize a hierarchical analysis framework that includes a first level of algorithms that measure biological properties which can be objectively validated against ground truth independent of imaging, and a second, subsequent set of algorithms for determining a medical or clinical condition based on the measured biological properties. Such frameworks may be adapted, in an "and/or" manner, i.e., individually or in combination, to a variety of different biological properties, such as angiogenesis, neovascularization, inflammation, calcification, lipid deposition, necrosis, hemorrhage, rigidity, density, stenosis, dilation, remodeling ratio, ulceration, blood flow (e.g., of blood in a channel), pressure (e.g., of blood in a channel, or of one tissue pressing against another), cell type (e.g., macrophages), cell arrangement (e.g., of smooth muscle cells), shear stress (e.g., of blood in a channel), cap thickness, and/or tortuosity (e.g., entrance and exit angles). For each of these properties, a measurand such as the quantity, degree and/or character of the property may be measured. Example conditions include perfusion/ischemia (e.g., restricted flow) (e.g., of brain or cardiac tissue), perfusion/infarction (e.g., cut-off flow) (e.g., of brain or cardiac tissue), oxygenation, metabolism, flow reserve (perfusability), malignancy, invasion, and/or risk stratification (whether as a probability of an event or as a time to event (TTE)), such as a major adverse cardiovascular or cerebrovascular event (MACCE). The ground truth may include, for example, biopsy, expert tissue annotation of excised tissue (e.g., endarterectomy or autopsy), expert phenotype annotation of excised tissue (e.g., endarterectomy or autopsy), physical pressure-wire measurements, other imaging modalities, physiological monitoring (e.g., ECG, SaO2, etc.), genomic and/or proteomic and/or metabolomic and/or transcriptomic assays, and/or clinical outcomes. These properties and/or conditions may be measured at a given point in time and/or as they change over time (longitudinally).
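As a purely illustrative sketch of this two-level structure (not the disclosed implementation), the following Python fragment separates a first stage that quantifies named analytes, each of which could in principle be validated against ground truth, from a second stage that infers a condition only from those quantities; the analyte names, thresholds and toy logic are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Dict
import numpy as np

@dataclass
class AnalytePanel:
    """Level 1 output: biological properties that could be validated against ground truth."""
    values: Dict[str, float]

def measure_analytes(image_volume: np.ndarray) -> AnalytePanel:
    """Level 1: quantify biological properties/analytes from imaging data.
    Toy thresholds stand in for validated segmentation/quantification algorithms."""
    lrnc = float((image_volume < -30).sum())   # stand-in for lipid-rich necrotic core voxels
    calc = float((image_volume > 300).sum())   # stand-in for calcified voxels
    return AnalytePanel({"lrnc_voxels": lrnc, "calc_voxels": calc})

def classify_condition(panel: AnalytePanel) -> str:
    """Level 2: characterize the condition from the quantified analytes, not from raw pixels."""
    score = panel.values["lrnc_voxels"] / (panel.values["calc_voxels"] + 1.0)
    return "unstable_plaque" if score > 1.0 else "stable_plaque"

volume = np.random.normal(50.0, 150.0, size=(32, 64, 64))  # stand-in for a CT sub-volume (HU)
print(classify_condition(measure_analytes(volume)))
```

Because the level-1 outputs are named, physically meaningful quantities, each stage can be validated independently before any clinical inference is drawn from it.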
In an example embodiment, the systems and methods of the present application advantageously relate to computer-assisted phenotyping (CAP) of a disease. CAP is a new and exciting complement to the field of computer-aided diagnosis (CAD). As disclosed herein, CAP can apply hierarchical inference incorporating computer image analysis and data fusion algorithms to patient clinical chemistry and blood biomarker data to provide measured multi-factor panels or "spectra" that can be used to distinguish between different subtypes of disease that are to be treated differently. Thus, CAP implements new approaches to robust feature extraction, data integration, and scalable computational strategies to implement clinical decision support. For example, the spatial temporal texture (SpTeT) method captures relevant statistical features for spatially and kinetically characterizing tissue. Spatial signatures map to characteristic patterns of lipids mixed with extracellular matrix fibers, necrotic tissue, and/or inflammatory cells, for example. The kinetic features map to e.g. endothelial permeability, neovascularization, necrosis and/or collagen breakdown.
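A minimal sketch of the data-fusion idea, assuming hypothetical measurand and biomarker names, is shown below; the point is only that imaging analytes and blood biomarkers are merged into one ordered, multi-factor feature panel for a downstream classifier:

```python
import numpy as np

def fuse_panels(imaging_analytes, blood_biomarkers, feature_order):
    """Merge quantified imaging analytes with clinical chemistry / blood biomarker values
    into one ordered feature vector; missing entries become NaN so that an explicit
    imputation step can handle them downstream."""
    merged = {**imaging_analytes, **blood_biomarkers}
    return np.array([merged.get(name, np.nan) for name in feature_order], dtype=float)

panel = fuse_panels(
    {"lrnc_mm3": 42.0, "cap_thickness_mm": 0.4},   # hypothetical imaging measurands
    {"ldl_mg_dl": 130.0, "crp_mg_l": 2.1},         # hypothetical blood biomarkers
    ["lrnc_mm3", "cap_thickness_mm", "ldl_mg_dl", "crp_mg_l"],
)
print(panel)
```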
In contrast to current CAD methods, which make clinical inferences in a single step of machine classification from image features, the systems and methods of the present application can advantageously utilize a hierarchical inference scheme that is applied not only to spatially resolved image features at the outset, but also to intermediate, temporally resolved dynamics at multiple levels of biologically targeted components of morphology, composition and structure, and only then to the final clinical inference. This results in a system that can be understood, validated and interpreted at each level of the hierarchy, from low-level image features at the bottom to biological and clinical features at the top.
The systems and methods of the present disclosure improve on phenotype classification and outcome prediction. Phenotype classification may occur at two levels: individual anatomical locations on the one hand, and more generally described body parts on the other. The input data for the former may be 2D data sets, and for the latter 3D data sets. For phenotype classification, however, the ground truth may be at either level, whereas for outcome prediction/risk stratification the ground truth typically exists at the patient level, although in some cases it may be more specific (e.g., on which side stroke symptoms manifest). The point is that the same input data can be used for both purposes, but the models will differ significantly depending on the level at which the input data is used and the basis of the ground-truth annotation.
While modeling with a vector of readings as the input data may be performed, performance is typically limited by the measurands implemented. The present application advantageously utilizes unique measurands (e.g., cap thickness, calcium depth and ulceration) to improve performance. Thus, a readings-vector approach may be applied in which the vector contains these measurands (e.g., in combination with conventional measurands). However, the systems and methods of the present disclosure may advantageously utilize a Deep Learning (DL) approach, which can provide an even richer data set. The systems and methods of the present application can also advantageously utilize unsupervised learning, providing better scalability across data domains (a highly desirable feature given the speed with which new biological data is generated).
In example embodiments presented herein, Convolutional Neural Networks (CNNs) may be used to build classifiers using an approach that may be characterized as transfer learning with a fine-tuning method. CNNs trained on large corpora of imaging data on powerful computing platforms can be used successfully to classify images that were not annotated in the network's training. This is intuitively understandable, because many common feature classes are helpful in identifying images of very different objects (i.e., shapes, boundaries, orientations in space, etc.). It is then conceivable that a CNN trained to recognize thousands of different objects using a pre-annotated data set of tens of millions of images will perform basic image recognition tasks better than chance and, after relatively modest adjustment of the last classification layer (sometimes referred to as the "softmax" layer), will have performance comparable to a CNN trained from scratch. Because these models are very large and have been trained on enormous numbers of pre-annotated images, they tend to "learn" very distinctive imaging features. Thus, the convolutional layers may be used as a feature extractor, or the trained convolutional layers may be adjusted slightly to accommodate the problem at hand. The first approach is called transfer learning, and the latter fine-tuning.
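A minimal sketch of this approach in Python, using TensorFlow/Keras, is given below. The disclosure names AlexNet, Inception and CaffeNet; MobileNetV2 is used here only as a readily available stand-in, and the two-class head (e.g., stable vs. unstable plaque) and hyperparameters are illustrative assumptions:

```python
import tensorflow as tf

def build_transfer_model(n_classes: int, input_shape=(224, 224, 3)) -> tf.keras.Model:
    """Transfer learning: reuse convolutional layers pre-trained on a large natural-image
    corpus as a fixed feature extractor and replace only the final classification layer."""
    base = tf.keras.applications.MobileNetV2(
        include_top=False, weights="imagenet", input_shape=input_shape, pooling="avg")
    base.trainable = False  # freeze the pre-trained feature extractor
    outputs = tf.keras.layers.Dense(n_classes, activation="softmax")(base.output)
    model = tf.keras.Model(base.input, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

model = build_transfer_model(n_classes=2)  # e.g. stable vs. unstable plaque phenotype
# Fine-tuning would subsequently unfreeze some of the pre-trained layers and re-compile
# the model with a much smaller learning rate before continuing training.
```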
CNNs are adept at many different computer vision tasks. However, CNNs have some disadvantages. Two drawbacks of particular importance for medical systems are (1) the need for extensive training and validation data sets, and (2) the fact that intermediate CNN computations do not represent any measurable property (a criticism sometimes expressed as the "black box" problem). The methods disclosed herein may advantageously utilize a pipeline consisting of one or more stages that each produce biologically meaningful, independently verifiable measurements, followed by a convolutional neural network that starts from these stages rather than from the raw image alone. Furthermore, transformations may be applied to reduce variation that is not relevant to the problem at hand, such as unfolding the cross-section of the donut-like vessel into a rectangular representation with a normalized coordinate system before feeding the network. These front-end pipeline stages simultaneously alleviate both of these disadvantages of using CNNs for medical imaging.
Typically, the early convolutional layers act as feature extractors of increasing specificity, and the last one or two fully connected layers act as the classifier (e.g., the "softmax" layer). Schematic representations of the sequence of layers and their functions in a typical CNN are available from many sources.
Advantageously, the systems and methods of the present disclosure utilize an enriched data set to enable non-invasive phenotyping of tissue determined by a radiological data set. One type of enrichment is pre-processing the data to perform tissue classification and using "pseudo-color" overlays to provide a data set that can be objectively validated (as opposed to using only the original image, which does not have this possibility). Another type of enrichment is the use of transformations on the coordinate system to emphasize biologically plausible spatial contexts while removing noise variations to improve classification accuracy, to allow for a smaller training set, or both.
In example embodiments, the systems and methods of the present application may employ a multi-stage pipeline: (i) semantic segmentation for identifying and classifying regions of interest (which may, for example, represent quantitative biological analytes); (ii) spatial unfolding for transforming a cross-section of a tubular structure (e.g., a vein/artery cross-section) into a rectangle; and (iii) application of a trained CNN to read the annotated rectangle and identify which phenotype (e.g., stable or unstable plaque, and/or normal or abnormal peak FFR) and/or predicted time to event (TTE) it belongs to. It should be noted that by training and testing CNNs on the unfolded data set (with unfolding) versus the donut-shaped data set (without unfolding), it can be demonstrated that unfolding can improve the validation accuracy for a particular embodiment. Thus, various embodiments for imaging tubular structures (e.g., plaque phenotyping) or other structures (e.g., lung-cancer mass subtyping), or other applications, may similarly benefit from performing similar steps (e.g., semantic segmentation followed by a spatial transformation such as unfolding, prior to applying the CNN). However, it is contemplated that in some alternative embodiments, untransformed data sets (e.g., data sets that are not spatially unfolded) may be used to determine phenotypes (e.g., in conjunction with or independently of transformed data sets).
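Structurally, the three stages can be expressed as a simple composition of functions; the sketch below is illustrative only, with the segmentation, unfolding and classification steps left as injected callables rather than the disclosed algorithms:

```python
import numpy as np

def phenotype_pipeline(cross_section, segment, unfold, cnn_classify):
    """(i) semantic segmentation into analyte labels, (ii) spatial unfolding of the tubular
    cross-section into a rectangle, (iii) CNN classification of the unfolded, annotated image."""
    label_map = segment(cross_section)   # (i) per-pixel analyte labels (objectively verifiable)
    rectangle = unfold(label_map)        # (ii) donut -> rectangle in a normalized coordinate system
    return cnn_classify(rectangle)       # (iii) e.g. "stable"/"unstable", or a predicted TTE bin

# Trivial usage with placeholder callables standing in for the real stages.
result = phenotype_pipeline(
    np.zeros((128, 128)),
    segment=lambda img: img,
    unfold=lambda lbl: lbl,
    cnn_classify=lambda rect: "stable")
```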
In an example embodiment, the semantic segmentation and spatial transformation may involve the following. The image volume may be pre-processed, including target initialization, normalization, and any other desired pre-processing such as deblurring or restoration, to form a region of interest containing the physiological target to be phenotyped. The region of interest may be a volume, made up of cross-sections through that volume, of a body part that may be determined automatically or provided explicitly by the user. A target that is tubular in nature may be accompanied by a centerline. The centerline may have branches, when present; branches may be labeled automatically or by the user. Note that a generalization of the centerline concept may represent anatomy that is not tubular but that still benefits from some structural directionality, such as regions of a tumor. In any case, a centroid may be determined for each cross-section in the volume. For a tubular structure, the centroid may be the center of the channel, e.g., the lumen of a blood vessel; for a lesion, the centroid may be the center of mass of the tumor. The (optionally deblurred or restored) image may be represented in a Cartesian data set in which x represents distance from the centroid, y represents the rotation angle θ, and z represents the cross-section. Each branch or region forms one such Cartesian set. When multiple sets are used, a "null" value may be used for overlapping regions; that is, each physical voxel is represented only once across the entire set, in a geometrically consistent manner. Each data set may be paired with an additional data set having sub-regions labeled by objectively verifiable tissue composition. Example labels for vascular tissue may be lumen, calcification, LRNC, IPH, etc.; example labels for lesions may be necrosis, neovascularization, and the like. These labels can be objectively validated, for example by histology. The paired data sets may be used as input to a training step for building a convolutional neural network. In an example embodiment, two levels of analysis may be supported, one at the level of individual cross-sections and a second at the volume level. The output labels represent phenotype or risk stratification.
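A minimal sketch of the unfolding of a single cross-section, assuming nothing beyond NumPy/SciPy and illustrative grid resolutions (rows = distance from the centroid, columns = rotation θ; the z/cross-section dimension is omitted), is:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def unfold_cross_section(img, centroid, n_r=64, n_theta=180, r_max=None):
    """Resample a 2D cross-section onto an (r, theta) rectangle centred on `centroid`:
    rows index distance from the centroid, columns index the rotation angle theta.
    order=0 (nearest neighbour) preserves discrete tissue labels during resampling."""
    cy, cx = centroid
    if r_max is None:
        r_max = min(img.shape) / 2.0
    r = np.linspace(0.0, r_max, n_r)
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, theta, indexing="ij")
    rows = cy + rr * np.sin(tt)
    cols = cx + rr * np.cos(tt)
    return map_coordinates(img, [rows, cols], order=0, mode="nearest")

# Example: unfold a (here empty) labelled vessel cross-section around its lumen centroid.
labels = np.zeros((128, 128), dtype=np.int32)
rectangle = unfold_cross_section(labels, centroid=(64.0, 64.0))  # shape (64, 180)
```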
Exemplary image pre-processing may include deblurring or restoration using, for example, a patient-specific point spread determination algorithm to mitigate artifacts or image limitations caused by the image formation process. These artifacts and image limitations may reduce the ability to determine the characteristics of the predicted phenotype. Deblurring or restoration may be achieved, for example, by iteratively fitting a physical model of the scanner point spread function with regularization assumptions about the true latent density of different regions of the image.
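As an assumed, generic stand-in (not the patient-specific algorithm referenced above), a Richardson-Lucy-style restoration loop against a known or estimated point spread function (PSF) can be sketched as follows.

import numpy as np
from scipy.signal import fftconvolve

def restore(blurred, psf, n_iter=25, eps=1e-7):
    """blurred: non-negative float array; psf: normalized 2D kernel."""
    estimate = np.full(blurred.shape, blurred.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)                    # compare data with re-blurred estimate
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate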
In an example embodiment, the CNN may be AlexNet, Inception, CaffeNet, or another network. In some embodiments, the CNN may be reconstructed, for example, with the same number and type of layers but with changed input and output dimensions (e.g., a changed aspect ratio). Various exemplary CNN implementations are available as open source, e.g., in TensorFlow and/or other frameworks, and may be configurable.
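A minimal sketch of such a reconfiguration is shown below, assuming an unfolded input whose height is the radial dimension and whose width is the angular dimension; the shapes, layer widths, and class count are illustrative assumptions only.

import tensorflow as tf

def build_cnn(input_shape=(32, 360, 3), n_classes=3):
    # conventional small CNN layer types, but with a non-square input aspect ratio
    return tf.keras.Sequential([
        tf.keras.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])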
In an example embodiment, the data set may be augmented. For example, in some embodiments, 2D or 3D rotation may be applied to the data set. Thus, in the case of an untransformed (e.g., donut-shaped) data set, the augmentation may involve, for example, a combination of random longitudinal flips and random rotations (e.g., by a random angle between 0 and 360 degrees). Similarly, in the case of a transformed (e.g., unwrapped) data set, the augmentation may involve, for example, a combination of random vertical flips and random "scrolling" of the image, e.g., by a random number of pixels ranging from 0 to the width of the image (where scrolling is analogous to rotation in θ).
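The two augmentation schemes can be sketched as follows; the 50% flip probability and the label-preserving (nearest-neighbor) interpolation are assumptions added for illustration.

import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(0)

def augment_donut(img):
    # untransformed ("donut") cross-section: random flip plus random rotation
    if rng.random() < 0.5:
        img = np.flipud(img)
    angle = rng.uniform(0.0, 360.0)
    # order=0 keeps discrete analyte labels intact; reshape=False preserves the shape
    return rotate(img, angle, reshape=False, order=0, mode="constant")

def augment_unrolled(img):
    # unrolled rectangle: random vertical flip plus random "scroll" in theta
    if rng.random() < 0.5:
        img = np.flipud(img)
    shift = int(rng.integers(0, img.shape[1]))
    return np.roll(img, shift, axis=1)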
In an example embodiment, the data set may be enriched by using different colors to represent different tissue analyte types. These colors may be selected to contrast visually with each other and with non-analyte regions (e.g., normal wall). In some embodiments, non-analyte regions may be depicted in gray. In example embodiments, data set enrichment may incorporate ground truth annotations of tissue characteristics (e.g., tissue characteristics indicative of plaque phenotype), and may provide spatial context for how such tissue characteristics appear in cross-section (e.g., taken orthogonal to the axis of the vessel). Such spatial context may include a coordinate system (e.g., based on polar coordinates relative to the centroid of the cross-section) that provides a common basis for analyzing the data set relative to the histological cross-section. Thus, the enriched data set may advantageously be superimposed on top of the color-coded pathologist annotation (or vice versa). Advantageously, the histology-annotated data set may then be used for training (e.g., training of CNNs) in conjunction with or independent of image feature analysis of the radiological data set. Notably, the histology-annotated data set may improve the efficiency of the DL method, because it uses relatively simple pseudo-color images instead of high-resolution full images, without loss of spatial context. In an example embodiment, the coordinate directions may be represented internally using unit phasors and phasor angles. In some embodiments, the coordinate system may be normalized, for example, by normalizing the radial coordinate with respect to wall thickness (e.g., to provide a common basis for comparing tubular structures/cross-sections of different diameters/thicknesses). For example, the normalized radial distance may have a value of 0 at the inner boundary (inner wall/lumen boundary) and a value of 1 at the outer boundary (outer wall boundary). Notably, this may apply to tubular structures associated with blood vessels as well as to other pathophysiology associated with tubular structures (e.g., the gastrointestinal tract).
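A small sketch of the normalized radial coordinate, together with an illustrative (assumed, not prescribed) pseudo-color assignment, follows.

def normalized_radial_distance(r, r_lumen, r_outer):
    """Radial position along a ray: 0 at the lumen (inner wall) boundary, 1 at the outer wall boundary."""
    return (r - r_lumen) / max(r_outer - r_lumen, 1e-6)

# Illustrative RGB pseudo-colors: chosen only to contrast with each other and
# with a gray non-analyte wall; the specific values are assumptions.
ANALYTE_COLORS = {
    "calcification": (0, 255, 255),
    "LRNC": (255, 255, 0),
    "IPH": (255, 0, 0),
    "normal_wall": (128, 128, 128),
}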
Advantageously, the enriched data set of the present application provides a non-invasive, image-based classification (e.g., where a tissue classification scheme may be used to non-invasively determine phenotype) that is based on a known ground truth. In some embodiments, the known ground truth may be non-radiological (e.g., histology or another ex vivo tissue analysis). Thus, for example, a radiological data set annotated with ex vivo ground truth data (such as histological information) may advantageously be used as input data for a classifier. In some embodiments, a plurality of different known ground truth references may be used in conjunction with each other or independently of each other in annotating an enriched data set.
As described herein, in some embodiments, the enriched data set may utilize a normalized coordinate system to avoid irrelevant variation associated with, for example, wall thickness and radial representation. Further, as described herein, in an example embodiment, a "donut"-shaped data set may be "unrolled," e.g., prior to classifier training (e.g., using a CNN) and/or prior to running a trained classifier on the data set. Notably, in such embodiments, the analyte annotation of the training data set may be performed prior to transformation, after transformation (e.g., after unfolding), or a combination of both. For example, in some embodiments, the untransformed data set may be annotated (e.g., classified ex vivo using, for example, histological information) and then transformed for classifier training. In such embodiments, the finer granularity of the ex vivo-based classification may be collapsed to match the lower expected granularity of the in vivo radiological analysis, which reduces computational complexity while addressing what would otherwise be open to the "black box" criticism.
In some embodiments, the colors and/or axes used to visualize the annotated radiological data set may be selected to correspond to the same colors/axes typically presented in the ex vivo ground truth-based classification (e.g., the same colors/axes as used in histology). In example embodiments, the transformed enriched data set may be presented (e.g., normalized for wall thickness) with each analyte visually represented in a different contrasting color relative to all non-analyte regions and background regions (e.g., black or gray). Notably, depending on the embodiment, the common background may or may not be annotated, and thus may or may not visually distinguish between non-analyte regions within and outside the vessel wall or between background features (e.g., luminal surface irregularities, varying wall thickness, etc.). Thus, in some embodiments, the annotated analyte regions may be visually depicted (e.g., color coded and normalized for wall thickness) against a uniform (e.g., completely black, completely gray, completely white, etc.) background. In other embodiments, the annotated analyte regions (e.g., color coded but not normalized for wall thickness) may be visually depicted relative to an annotated background (e.g., where different shading (gray, black, and/or white) may be used to distinguish between (i) the central lumen region inside the inner lumen of the tubular structure, (ii) non-analyte regions inside the wall, and/or (iii) regions outside the outer wall). This may enable analysis of changes in wall thickness (e.g., due to ulceration or thrombus). In further example embodiments, the annotated data set may contain an identification (and visualization) of regions such as intra-plaque hemorrhage and/or other morphological aspects. For example, areas of intra-plaque hemorrhage may be viewed in red, LRNC in yellow, etc.
One particular embodiment of the systems and methods of the present application is in guiding vascular therapy. Classifications may be established based on the likely dynamic behavior of plaque lesions (based on their physical characteristics or specific mechanisms, e.g., based on inflammation or cholesterol metabolism) and/or based on the progression of the disease (e.g., early versus late in its natural history). Such classifications can be used to guide patient treatment. In an example embodiment, the Stary plaque typing system adopted by the AHA may be used as a basis for the type determined in vivo and shown in a color overlay. In one example, the types ['I', 'II', 'III', 'IV', 'V', 'VI', 'VII', 'VIII'] are mapped through a class_map to [subclinical, unstable, stable]. However, the systems and methods of the present disclosure are not tied to Stary. As another example, the Virmani system ["calcified nodule", "CTO", "FA", "FCP", "healed plaque rupture", "PIT", "IPH", "rupture", "TCFA", "ULC"] has been used with a class_map of [stable, unstable], and other typing systems can yield similarly high performance. In example embodiments, the systems and methods of the present disclosure may incorporate disparate typing systems, may change class maps, or may make other variations. For FFR phenotypes, an equivalence to normal or abnormal and/or a quantitative value may be used to facilitate comparison with, for example, physically measured FFR.
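The configurable class_map idea can be illustrated with plain dictionaries; the specific groupings below are illustrative assumptions for demonstration, not a definitive clinical assignment of the typing systems mentioned above.

# Hypothetical class maps for illustration only
STARY_CLASS_MAP = {
    "I": "subclinical", "II": "subclinical", "III": "subclinical",
    "IV": "unstable", "V": "unstable", "VI": "unstable",
    "VII": "stable", "VIII": "stable",
}

VIRMANI_CLASS_MAP = {
    "calcified nodule": "unstable", "CTO": "stable", "FA": "stable",
    "FCP": "stable", "healed plaque rupture": "stable", "PIT": "stable",
    "IPH": "unstable", "rupture": "unstable", "TCFA": "unstable", "ULC": "unstable",
}

def to_phenotype(plaque_type, class_map=STARY_CLASS_MAP):
    # swapping class_map changes the typing system without changing the pipeline
    return class_map.get(plaque_type, "unknown")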
Thus, in example embodiments, the systems and methods of the present disclosure may provide phenotypic classification of plaques based on enriched radiological data sets. In particular, one or more phenotypic classifications may include distinguishing stable plaques from unstable plaques, for example, where the ground truth basis for the classification includes: (i) lumen narrowing (possibly enhanced by additional measures such as tortuosity and/or ulceration), (ii) calcium content (possibly enhanced by depth, shape, and/or other complex representations), (iii) lipid content (possibly enhanced by cap thickness and/or other complex representations), (iv) anatomy or geometry, and/or (v) IPH or other content. Notably, this classification has been shown to have high overall accuracy, sensitivity, and specificity and a high degree of clinical relevance (with the potential to alter existing standards of care for patients undergoing catheterization and cardiovascular care).
Another exemplary embodiment is lung cancer, where the subtype of a mass can be determined in order to guide the patient to the most likely beneficial treatment based on the apparent phenotype. In particular, pre-processing and data set enrichment can be used to separate solid versus semi-solid ("ground glass") sub-regions, which differ in malignancy and for which different optimal treatment regimens have been demonstrated.
In further example embodiments, the systems and methods of the present disclosure may provide image pre-processing, image de-noising, and a novel geometric representation (e.g., an affine transformation) of CT angiography (CTA) diagnostic images to facilitate and maximize the performance of CNN-based deep learning algorithms for developing classifiers of flow and of the risk of adverse cardiovascular events. Thus, as disclosed herein, image deblurring or restoration can be used to identify lesions of interest and quantitatively extract plaque components. Furthermore, transforming the segmented cross-sectional images (taken along the major axis of the vessel) into, for example, an "unfolded" rectangular reference frame that follows the established lumen along the X axis can be applied to provide a normalized frame that allows the DL method to best learn representative features.
Although the example embodiments herein utilize 2D annotated cross-sections for analysis and phenotyping, it should be noted that the present application is not limited to such embodiments. In particular, some embodiments may utilize enriched 3D data sets, for example instead of or in addition to processing 2D cross-sections separately. Thus, in an example embodiment, video interpretation methods from computer vision may be applied to the classifier input data set. Note that these methods can be generalized for tubular structures by processing multiple cross-sections in sequence, e.g., moving up and down the centerline as in a "movie" sequence, and/or by other 3D representations according to the aspects most appropriate for the anatomy.
In further example embodiments, the pseudo-color representations in the enriched data set may have continuous values across pixel or voxel locations. This can be used for "radiologic" features (with or without explicit verification) or for verified tissue types computed independently for each voxel. This set of values may exist in any number of pre-processed stacks and may be fed into a phenotype classifier. Notably, in some embodiments, each pixel/voxel may have values for any number of different features (e.g., may be represented in any number of different overlays for different analytes, which is sometimes referred to as "multi-occupancy"). Alternatively, each pixel/voxel may be assigned to only one analyte (e.g., to only a single analyte stack). Further, in some embodiments, the pixel/voxel values for a given analyte may be based on an all-or-nothing classification scheme (e.g., whether the pixel is calcium or not). Alternatively, the pixel/voxel values for a given analyte may be relative values, e.g., probability scores. In some embodiments, the relative values of pixels/voxels are normalized across the set of analytes (e.g., such that the total probability adds up to 100%).
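The two per-pixel conventions (normalized probabilities versus all-or-nothing assignment) can be sketched as follows; the stack layout is an illustrative assumption.

import numpy as np

def normalize_analyte_probabilities(prob_stack, eps=1e-8):
    """prob_stack: (n_analytes, rows, cols) array of non-negative scores per pixel."""
    total = prob_stack.sum(axis=0, keepdims=True)
    return prob_stack / np.maximum(total, eps)       # per-pixel probabilities sum to 1

def hard_assignment(prob_stack):
    """All-or-nothing alternative: each pixel is assigned the single most likely analyte."""
    return prob_stack.argmax(axis=0)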
In an example embodiment, the classification model may be trained by applying, in whole or in part, multi-scale modeling techniques, such as partial differential equations, to represent, for example, possible cell signaling pathways or plausible biological drivers.
Other alternative embodiments include using, for example, change data collected from multiple points in time, rather than (only) data from a single point in time. For example, if the amount or nature of a negative cell type increases, the phenotype may be referred to as a "progressor," while a "regressor" phenotype refers to a decrease. The regression may be, for example, due to a response to a drug. Alternatively, if, for example, the rate of change of LRNC is fast, this may imply a different phenotype, for example a "rapid progressor."
In some embodiments, non-spatial information, as derived from other assays (e.g., laboratory results) or demographic/risk factors, or other measurements extracted from radiological images, may be fed to the final layer of the CNN to combine spatial information with non-spatial information.
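One way such a combination might be wired, using the Keras functional API, is sketched below; the input sizes, layer widths, and two-class output are illustrative assumptions only.

import tensorflow as tf

image_in = tf.keras.Input(shape=(32, 360, 3), name="unfolded_image")
x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(image_in)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)

nonspatial_in = tf.keras.Input(shape=(8,), name="labs_and_demographics")
merged = tf.keras.layers.Concatenate()([x, nonspatial_in])   # join spatial and non-spatial at the final layers
phenotype = tf.keras.layers.Dense(2, activation="softmax", name="phenotype")(merged)

model = tf.keras.Model(inputs=[image_in, nonspatial_in], outputs=phenotype)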
Notably, while the systems and methods described herein focus on phenotypic classification, similar methods can be applied to outcome prediction. Such classification may be based on ground truth historical outcomes assigned to the training data set. For example, life expectancy, quality of life, treatment efficacy (including comparison of different treatment regimens), and other outcome predictions can be determined using the systems and methods of the present application.
Examples of the systems and methods of the present application are further illustrated in the various figures and detailed description that follows.
In further example embodiments, the systems and methods of the present disclosure provide for the determination of fractional flow reserve in myocardial and/or brain tissue by measurement of plaque morphology. The systems and methods of the present disclosure may use sophisticated methods to characterize the vasodilation capacity of blood vessels through objectively validated determination of the tissue types and characteristics that affect their expandability. In particular, from the perspective of flow reserve, plaque morphology can be used as an input to an analysis of the dynamic behavior of the vasculature (training the model with ground truth flow reserve data). It is thus possible to determine the dynamic behavior of the system rather than (only) a static description. Stenosis by itself is known to have low predictive power because it provides only a static description; accurate plaque morphology must be added for the most accurate imaging-based assessment of dynamic function. The present disclosure provides systems and methods that determine accurate plaque morphology and then process it to determine dynamic function.
In an example embodiment, deep learning is utilized in order to maintain the spatial context of tissue characteristics and vessel anatomy (collectively referred to as plaque morphology) at an optimal level of granularity, thereby avoiding excessive non-material variability in the training set while going beyond other, simpler uses of machine learning. Other alternatives use measurements of only the vascular structure, rather than a more complete treatment of tissue characteristics. Such methods can capture lesion length, stenosis, and possibly entrance and exit angles, but ignore determinants of vasodilation capacity. A priori assumptions about the flexibility of the arterial tree as a whole must be made to use these models, yet plaque and other tissue characteristics can cause the expandability of the coronary tree to be heterogeneous; different parts of the tree are more or less expandable. These methods are inadequate because expandability is a key factor in determining FFR. Other methods attempt tissue characterization, but do so without objective verification of accuracy and/or without the data enrichment needed to maintain the spatial context necessary for deep learning methods to be effective on medical images (e.g., transformations such as unfolding and overlays of verified pseudo-color tissue types). Some approaches attempt to increase the training set size by using synthetic data, but this is ultimately limited by the limited data on which the synthetic generation is based, and is more a data augmentation scheme than an actual expansion of the input training set. In addition, the systems and methods of the present disclosure are capable of producing continuous assessments across the length of a blood vessel.
The systems and methods of the present disclosure effectively utilize objective tissue characterization, validated by histology, across multiple arterial beds. Relevant to the example application of atherosclerosis, plaque composition is similar in the coronary and carotid arteries regardless of age, and this composition largely determines relative stability, indicating that the representation at CCTA is similar to that at CTA. Minor differences in the extent of various plaque features may include thicker caps and a higher prevalence of intra-plaque hemorrhage and calcified nodules in the carotid artery; however, there is no difference in the nature of the plaque components. In addition, the carotid and coronary arteries share many similarities in the physiology of vascular tone modulation, which has an effect on plaque progression. Myocardial perfusion is regulated by vasodilation of the epicardial coronary arteries in response to various stimuli, such as NO, resulting in dynamic changes in coronary artery tone that may produce multiple changes in blood flow. In a similar manner, the carotid artery is not just a simple conduit supporting the cerebral circulation; it exhibits vasoreactive properties in response to stimuli, including changes in shear stress. Endothelial shear stress contributes to endothelial health and a favorable vascular wall transcriptomic profile. Clinical studies have demonstrated that areas of low endothelial shear stress are associated with atherosclerotic development and high-risk plaque features. Similarly, in the carotid artery, lower wall shear stress is associated with plaque development and localization. (Endothelial shear stress is a useful measure by itself, but not in place of plaque morphology.) It is important to acknowledge that the technical challenges differ across beds (e.g., use of gating, vessel size, amount and nature of motion), but these effects can be mitigated by scanning protocols, which can result in in-plane voxel sizes of approximately 0.5-0.75 mm, and the in-plane resolution of the coronary arteries (smaller vessels) is actually better than, not inferior to, that of the carotid arteries (voxels being isotropic in the coronary arteries, but not in the carotid and peripheral vessels).
The present disclosure is based on solid mathematical principles that respect the Nyquist-Shannon sampling theorem, achieving effective resolution with conventionally acquired CTA in the same range as IVUS VH. IVUS imaging has excellent spatial resolution for overall structure (i.e., the lumen), but generally lacks the ability to characterize plaque components with high accuracy. The literature estimates that with typical transducers in the 20-40 MHz range, IVUS resolution is 70-200 μm axially and 200-400 μm laterally. IVUS VH is a spectral backscatter analysis method that enables plaque composition analysis (and hence measurement). The IVUS VH approach uses a relatively large (e.g., 480 μm) moving window in the axial direction, and the size of this moving window (and hence the accuracy of the composition analysis) is fundamentally limited by the bandwidth requirements of the spectral analysis. Although IVUS VH images are displayed at a smaller pixel size (e.g., 250 μm), each IVUS pixel is classified into a discrete class, which limits the accuracy of this analysis. 64-slice multi-detector CCTA scanning has been described as having a resolution in the range of 300-400 μm. Although this brings CCTA resolution very close to that of IVUS VH, other factors specific to the analysis of the present invention should be considered. Rather than discretely classifying CCTA pixels, the systems and methods of the present disclosure perform iterative deblurring or restoration modeling steps at sub-voxel precision, for example, using triangular tessellated surfaces to represent the true surfaces of lipid cores. The lipid core area was in the range of 6.5-14.3 mm² in 393 patients (corresponding to a radius of curvature of 1.4-2.1 mm). Using the chord length formula, with the chord spanning a single voxel diagonally, this yields an upper limit of 44 μm on the error of the tessellated surface representation of the lipid core. There are also additional factors associated with the deblurring or restoration analysis that may cause an error of approximately half a pixel, for a total accuracy in the range of 194-244 μm, which is generally equivalent to that of IVUS VH as used to measure cap thickness.
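The quoted 44 μm upper bound can be reproduced as the sagitta of a chord spanning one voxel diagonally, assuming a 0.5 mm in-plane voxel (an assumption consistent with the 0.5-0.75 mm protocol range mentioned above) and the smaller lipid core radius; the arithmetic below is illustrative only.

import math

area_mm2 = 6.5                            # lower end of the reported lipid core area
radius = math.sqrt(area_mm2 / math.pi)    # ~1.44 mm radius of curvature
chord = math.sqrt(2) * 0.5                # diagonal chord across an assumed 0.5 mm in-plane voxel
sagitta = radius - math.sqrt(radius ** 2 - (chord / 2) ** 2)
print(f"upper bound on tessellation error: {sagitta * 1000:.0f} um")   # approximately 44 um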
The present disclosure is also innovative in dealing with the fundamental limitations of applying artificial intelligence and deep learning to the analysis of atherosclerotic imaging data. Conventional competitive methods that lack a validated and objective ground truth are fundamentally limited in a number of ways. First, the use of arbitrary thresholds makes it impossible to assess accuracy, except in the form of weak correlations with other markers that themselves lack objective validation. Second, this lack of objectivity increases the need for large amounts of data in a context based only on correlation. This poses infeasible requirements for manual annotation of the radiological images, which are themselves the subject of the analysis (that is, as opposed to verification from an independent modality). Third, due to the limited interpretability of the generated models, these models must be presented to regulatory agencies such as the FDA as "black boxes" that lack scientifically rigorous descriptions of mechanisms of action that can be linked to traditional biological hypothesis testing. In particular, although CNNs have proven to be excellent at performing many different computer vision tasks, they have significant drawbacks when applied to radiological data sets: 1) large training and validation data sets are required, and 2) intermediate CNN calculations generally do not represent any measurable property, making regulatory approval difficult.
To address these challenges, a pipelined approach is utilized, consisting of stages whose outputs can individually be objectively verified at the biological level and which feed the CNN. The present invention overcomes the above disadvantages by using a pipeline consisting of one or more stages that are biologically measurable (i.e., objectively verifiable), followed by a smaller-scale convolutional neural network that processes these verified biological properties to output the desired condition, rather than relying on subjective or qualitative "image characteristics." These architectural capabilities reduce the shortcomings by increasing the efficiency with which available training data are used, and enable intermediate steps to be objectively validated. The systems and methods of the present disclosure use CNNs for medical imaging while reducing the disadvantages by: 1) reducing the complexity of the visual task to a level acceptable for training a CNN with a moderately sized data set; and 2) producing intermediate outputs that are objectively validated and easily interpreted by a user or regulatory agency. Intermediate CNN calculations generally do not represent any measurable property; spatial context is often difficult to retain in ML methods that use feature extraction; and raw data sets that do contain spatial context generally lack objective ground truth labels for the extracted features, whether processed using traditional machine learning or deep learning methods. Also, raw data sets contain variation that is "noise" relative to the classification problem at hand; outside of medicine, this is overcome in computer vision applications by very large training sets of a size not generally available in the medical field, particularly for data sets annotated with ground truth.
The quantitative capabilities of the systems and methods of the present disclosure make them ideal for analyzing more advanced imaging protocols (such as early/delayed phase alignment for tissue characterization studies, dual-energy and multi-spectral techniques, etc.).
While the systems and methods of the present disclosure have been particularly shown and described with reference to example embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the disclosure.
Drawings
The foregoing will be apparent from the following more particular description of example embodiments as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments of the disclosure.
Fig. 1 depicts a schematic diagram of an exemplary system for determining and characterizing a medical condition by implementing a hierarchical analysis framework in accordance with the present disclosure.
FIG. 2 outlines a resampling-based model building method that may be implemented by the systems and methods described herein, according to the present disclosure.
Fig. 3 depicts a sample patient report that may be output by the systems and methods described herein according to the present disclosure.
Fig. 4 depicts example segmentation levels for a multi-scale vessel wall analyte map according to the present disclosure.
FIG. 5 depicts an exemplary pixel-level probability mass function as a set of analyte probability vectors according to the present disclosure.
FIG. 6 illustrates a technique for calculating putative analyte spots according to the present disclosure.
Fig. 7 depicts normalized vessel wall coordinates for an exemplary vessel wall composition model according to the present disclosure.
Fig. 8 depicts an example edge between removed plaque and outer vessel wall of a histological sample according to the present disclosure.
Fig. 9 illustrates some complex vessel topologies that may be explained using the techniques described herein in accordance with the present disclosure.
FIG. 10 depicts an exemplary analyte spot representative of a normalized blood vessel wall coordinate distribution according to the present disclosure.
Figure 11 depicts an exemplary distribution of blob descriptors according to the present disclosure.
FIG. 12 depicts an exemplary model for imaging data correlating a hidden ground truth state with an observed state according to the present disclosure.
Fig. 13 depicts a diagram of an example Markov model/Viterbi algorithm for associating observed states with hidden states in an image model in accordance with the present disclosure.
Fig. 14 depicts an example frequency distribution of the total number of spots per set of histological slides of a plurality of histological slides according to the present disclosure.
Fig. 15 depicts an exemplary implementation of a 1D Markov chain in accordance with the present disclosure.
Fig. 16 depicts an example first-order Markov chain for a text probability table according to the present disclosure.
FIG. 17 depicts conditional dependencies of a first pixel based on its neighboring pixels according to the present disclosure.
Fig. 18 depicts a further exemplary hierarchical analysis framework in accordance with the present disclosure.
Fig. 19 depicts an example application of phenotyping objectives in guided vascular therapy according to the present disclosure. The depicted example uses the Stary plaque typing system employed by AHA as a basis, with the type of in vivo determination shown in color overlay.
Fig. 20 depicts an example application of phenotyping lung cancer according to the present disclosure.
FIG. 21 illustrates exemplary image pre-processing steps for deblurring or restoration according to the present disclosure. Deblurring or recovery uses patient-specific point spread determination algorithms to mitigate artifacts or image limitations caused by the image formation process that may reduce the ability to determine characteristics of the predicted phenotype. The depicted (bottom) deblurred or restored (processed) image is derived from the CT imaging of plaque (top) and is the result of iteratively fitting a physical model of the scanner point spread function with regularization assumptions about the true latent density of different regions of the image.
Fig. 22 depicts an exemplary application of an enriched data set demonstrating an atherosclerotic plaque phenotype in accordance with the present disclosure.
Fig. 23 illustrates tangential and radial direction variables using internal representations of unit phasors and phasor angles (shown coded in grey scale) illustrating the use of normalized axes for tubular structures associated with blood vessels and other pathophysiology associated with such structures (e.g., the gastrointestinal tract) in accordance with the present disclosure.
Fig. 24 illustrates an exemplary overlay of annotations produced from a CTA (contour region) radiology analysis application on top of annotations produced by a pathologist from histology (solid region) according to the present disclosure.
Fig. 25 illustrates an additional step of data enrichment utilizing, in particular, a normalized coordinate system to avoid irrelevant variations associated with wall thickness and radial representations in accordance with the present disclosure. Specifically, the "donut shape" is "unrolled" while maintaining pathologist notes.
Figure 26 represents additional steps of data enrichment associated with plaque phenotyping according to the present disclosure. Working from the unfolded form, the lumen is shown to be irregular (e.g., due to ulceration or thrombosis) and the wall thickening to vary locally.
Figure 27 represents data-enriched imaging (in both unfolded and non-unfolded forms) comprising intra-plaque hemorrhage and/or other morphology according to the present disclosure.
FIG. 28 shows validation results of an algorithm trained and validated with different variations of data-enriched imaging according to the present disclosure.
Fig. 29 provides an example application of phenotyping lung lesions according to the present disclosure.
Fig. 30 provides examples of biological properties (including, e.g., tissue characteristics, morphology, etc.) for phenotyping lung lesions in accordance with the present disclosure. Note that in an example embodiment, the pseudo-color may be represented as a continuous value rather than a discrete value with respect to one or more of these biological properties.
FIG. 31 illustrates a high-level view of an example method for user interaction with a computer-assisted phenotypic typing system according to the present disclosure.
FIG. 32 illustrates an example system architecture according to this disclosure.
FIG. 33 illustrates components of an example system architecture according to this disclosure.
FIG. 34 is a block diagram of an example data analysis in accordance with the present disclosure. Images of the patient are collected, raw slice data is used in a set of algorithms to measure biological properties that can be objectively validated, which are then formed into an enriched data set for feeding one of a plurality of CNNs, in this example where the results are propagated forward and backward using recursive CNNs to enforce constraints or to produce a continuous condition (such as a monotonically decreasing fractional flow reserve from proximal to distal throughout the vessel tree, or a constant HRP value in focal lesions or other constraints).
Figure 35 demonstrates the causal relationships and available diagnostics of coronary ischemia according to the present disclosure.
Fig. 36 depicts exemplary 3D segmentations of the lumen, vessel wall, and plaque components (LRNC and calcification) of two patients exhibiting chest pain, with similar risk factors and degree of stenosis, according to the present disclosure. Left: a 68-year-old male with NSTEMI at follow-up. Right: a 65-year-old male with no events at follow-up. The systems and methods of the present disclosure correctly predicted their corresponding outcomes.
FIG. 37 depicts example histological processing steps for objective verification of tissue composition according to the present disclosure.
Fig. 38 is a schematic depiction of multiple cross-sections that can be processed to provide dynamic analysis of a vessel tree across an entire scan, demonstrating the relationship between cross-sections, and where the processing of one cross-section depends on its neighbors, in accordance with the present disclosure.
Detailed Description
Presented herein are systems and methods for analyzing pathology using quantitative imaging. Advantageously, the systems and methods of the present disclosure utilize a hierarchical analysis framework that identifies and quantifies biological properties/analytes from imaging data, and then identifies and characterizes one or more pathologies based on the quantified biological properties/analytes. This hierarchical approach using imaging to examine basic biology as an intermediary for assessing pathology offers a number of analysis and processing advantages over systems and methods configured to directly determine and characterize a pathology from raw imaging data without performing a validation step and/or without performing the advantageous processing described herein.
For example, one advantage is the ability to utilize a training set from a non-radiological source, e.g., from a tissue sample source such as histological information, in conjunction with or independent of a training set from radiological sources, to correlate radiological imaging characteristics with biological properties/analytes and with pathology. For example, in some embodiments, histological information may be used in a training algorithm for identifying and characterizing one or more pathologies based on quantified biological properties/analytes. More specifically, biological properties/analytes that are identifiable/quantifiable in non-radiological data (as in a histological data set or as obtainable by gene expression profiling) may also be identified and quantified in radiological data (which are advantageously non-invasive). Information from non-radiological sources can then be used to correlate these biological properties/analytes with clinical findings about pathology, for example using histological information, gene expression profiles, or other clinically rich data sets. Such clinically relevant data sets may then serve as, or as part of, a training set for determining/tuning (e.g., using machine learning) algorithms that relate biological properties/analytes to pathologies having known relationships to clinical results. These algorithms, which relate biological properties/analytes to pathology using a training set from non-radiological sources, can then be applied to evaluate biological properties/analytes derived from radiological data. Thus, the systems and methods of the present disclosure may advantageously enable the use of radiological imaging (which advantageously may be cost-effective and non-invasive) to provide an alternative means for predicting clinical outcome or guiding treatment.
Notably, in some cases, the training data for non-radiological sources (e.g., histological information) may be more accurate/reliable than the training data for radiological sources. Further, in some embodiments, training data from a radiological source may be enhanced using training data from a non-radiological source. Thus, the disclosed hierarchical analysis framework advantageously improves the trainable performance and resulting reliability of the algorithms disclosed herein, as better data input may result in better data output. As described above, one major advantage is that once trained, the systems and methods of the present disclosure can enable derivation of clinical information comparable to existing histological and other non-radiological diagnostic tests without having to undergo invasive and/or expensive procedures.
Alternatively, in some embodiments, a training set from non-radiological sources (such as non-radiological imaging sources, e.g., histological sources, and/or non-imaging sources) may be utilized in conjunction with or independent of a training set from radiological sources, e.g., when correlating image features with biological properties/analytes. For example, in some embodiments, one or more biological models may be extrapolated and fitted to correlate radiological data with non-radiological data. For example, histological information can be correlated with radiological information based on an underlying biological model. This correlation may enable trained identification of biological properties/analytes in radiological data that makes use of non-radiological, e.g., histological, information.
In some embodiments, data extracted from complementary modalities may be used to correlate, for example, image features with biological properties/analytes from blood panels, physical FFRs, and/or other data sources.
In an example embodiment, one or more biological models may be extrapolated and fitted using imaging data extracted from one imaging modality that is correlated and/or fused with another imaging modality or with a non-imaging source (e.g., blood work). Data may advantageously be correlated across and between imaging and non-imaging data sets based on these biological models. Thus, these biological models may enable the hierarchical analysis framework to utilize data from one imaging modality together with another imaging modality or with non-imaging sources in identifying/quantifying one or more biological properties/analytes or in identifying/characterizing one or more medical conditions.
Another advantage of the hierarchical analysis framework disclosed herein is the ability to incorporate data from multiple data sources of the same or different types into a process for identifying and characterizing pathologies based on imaging data. For example, in some embodiments, one or more non-imaging data sources may be used in conjunction with one or more imaging data sources to identify and quantify a set of biological properties/analytes. Thus, in particular, the set of biological properties/analytes may comprise one or more biological properties/analytes identified and/or quantified based on one or more imaging data sources, one or more biological properties/analytes identified and/or quantified based on one or more non-imaging data sources, and/or one or more biological properties/analytes identified and/or quantified based on a combination of imaging and non-imaging data sources (note that for purposes of the quantitative imaging systems and methods of the present disclosure, the set of biological properties/analytes may generally comprise at least one or more biological properties/analytes identified and/or quantified based at least in part on imaging data). The ability to augment information from imaging data sources with information from other imaging and/or non-imaging data sources when identifying and quantifying a set of biological properties/analytes increases the robustness of the systems and methods presented herein and enables the utilization of any and all relevant information in identifying and characterizing a pathology.
Yet another advantage of the hierarchical analysis framework relates to, for example, the ability to adjust/fine-tune the data at each stage before or after evaluating subsequent stages with the data (note that in some embodiments this may be an iterative process). For example, in some embodiments, information relating to a set of identified and quantified biological properties/analytes may be adjusted in a post-empirical manner (e.g., after preliminary identification and/or quantification thereof). Similarly, in some embodiments, information relating to a set of identified and characterized pathologies may be adjusted in a post-empirical manner (e.g., after preliminary identification and/or characterization thereof). These adjustments may be automatic or user-based and may be objective or subjective. The ability to adjust/fine-tune the data at each level may advantageously improve data accountability and reliability.
In an example embodiment, the adjustment may be based on contextual information that may be used to update one or more probabilities that affect the determination or quantification of a biological property/analyte. In an example embodiment, the contextual information for adjusting, in a post-empirical manner, the information relating to the set of identified and quantified biological properties/analytes may comprise patient demographics, correlations between biological properties/analytes, or correlations between identified/characterized pathologies and biological properties/analytes. For example, in some cases, biological properties/analytes may be correlated in the sense that the identification/quantification of a first biological property/analyte may affect a probability related to the identification/quantification of a second biological property/analyte. In other cases, the identification/characterization of a first pathology, e.g., based on an initial set of identified/quantified biological properties/analytes, may affect a probability related to the identification/quantification of a biological property/analyte in said initial set, or even of a biological property/analyte not in the initial set. In still other cases, pathologies may be correlated, for example, where the identification/characterization of a first pathology may affect a probability associated with the identification/characterization of a second pathology. As described above, information relating to the identification and quantification of biological properties/analytes and/or information relating to the identification and characterization of pathologies may be updated in an iterative manner, for example until the data converge, a threshold/baseline is reached, or a selected number of cycles has been performed.
An additional advantage of the hierarchical analysis framework relates to the ability to provide information to users, e.g., physicians, relating to both pathology as well as basic biology. This increased context may facilitate clinical diagnosis/assessment as well as evaluation/determination of next steps, e.g., therapy/treatment options or additional diagnosis. For example, the systems and methods may be configured to determine those biological parameters/analytes that are most uncertain/have the highest degree of uncertainty (e.g., due to lack of data or conflicting data) in relation to the identification/quantification of one or more pathologies. In such cases, specific additional diagnostics may be recommended. Providing the user with increased context for information relating to both pathology as well as basic biology may further assist the user in evaluating/error-checking various clinical conclusions and recommendations derived from the analysis.
A hierarchical analysis framework, as used herein, refers to an analysis framework in which one or more intermediate sets of data points serve as intermediate processing layers or intermediate transformations between initial and final sets of data points. This is similar to the concept of deep learning or hierarchical learning, in which algorithms are used to model higher-level abstractions using multiple processing layers or otherwise utilizing multiple transformations, such as multiple non-linear transformations. In general, the hierarchical analysis framework of the systems and methods of the present disclosure includes data points relating to biological properties/analytes that serve as intermediate processing layers or intermediate transformations between imaging data points and pathology data points, and which, in an example embodiment, may include multiple processing layers or multiple transformations (e.g., embodied by multiple levels of data points) for determining each of the imaging information, the underlying biological information, and the pathology information. Although example hierarchical analysis framework structures (e.g., with specific processing layers, transformations, and data points) are introduced herein, the systems and methods of the present disclosure are not limited to such embodiments. Rather, any number of different types of analysis framework structures may be utilized without departing from the scope and spirit of the present disclosure.
In an example embodiment, the hierarchical analysis framework of the present application may be conceptualized as comprising a logical data layer that is an intermediary between an empirical data layer (comprising imaging data) and a results layer (comprising pathology information). Whereas the empirical data layer represents directly observed data, the logical data layer advantageously adds a degree of logic and reasoning that distills this raw data into a useful set of analytes for the results layer in question. Thus, empirical information from diagnostics, such as raw imaging information, can advantageously be distilled down to logical information relating to a particular set of biological features that is relevant for assessing a selected pathology or group of pathologies (e.g., pathologies involving an imaged region of the patient's body). In this way, the biological features/analytes of the present application may also be considered pathology symptoms/indicators.
The biological features/analytes of the present application may sometimes be referred to herein as biomarkers. Although the term "biological" or the prefix "biological" is used to characterize a biological feature or biomarker, this is only intended to mean that the feature or marker has some degree of correlation with respect to the patient's body. For example, a biological feature may be an anatomical, morphological, compositional, functional, chemical, biochemical, physiological, histological, genetic, or any number of other types of features that relate to a patient's body. Example biological features utilized by particular embodiments of the systems and methods of the present disclosure are disclosed herein (e.g., relating to a particular anatomical region of a patient, such as the vascular system, the respiratory system, an organ such as the lungs, heart, or kidneys, or other anatomical region).
Although example systems and methods of the present disclosure may be adapted to detect, characterize, and treat pathologies/diseases, applications of the systems and methods of the present disclosure are not limited to pathologies/diseases, but may be more generally applicable with respect to any clinically relevant medical condition of a patient, including, for example, syndromes, conditions, wounds, allergies, and the like.
In exemplary embodiments, the systems and methods of the present disclosure relate to computer-assisted phenotyping, for example, by analyzing medical images using knowledge about biology to measure differences between disease types that have been determined by research to indicate a phenotype that in turn predicts outcome. Thus, in some embodiments, characterizing a pathology may comprise determining a phenotype of the pathology, which in turn may determine a predictive outcome.
Referring initially to FIG. 1, a schematic diagram of an exemplary system 100 is depicted. There are three basic functions that may be provided by the system 100, represented as a trainer module 110, an analyzer module 120, and a cohort tool module 130. As depicted, the analyzer module 120 advantageously implements a hierarchical analysis framework that first identifies and quantifies biological properties/analytes 123 using a combination of (i) imaging features 122 from one or more acquired images 121A of the patient 50, and (ii) non-imaging input data 121B of the patient 50, and then identifies and characterizes one or more pathologies 124 (e.g., prognostic phenotypes) based on the quantified biological properties/analytes 123. Advantageously, the analyzer module 120 may operate independently of ground truth or verification references by implementing one or more pre-trained, e.g., machine-learned, algorithms to draw its inferences.
In an example embodiment, the analyzer may contain an algorithm for calculating the imaging characteristics 122 from the acquired image 121A of the patient 50. Advantageously, some of the image features 122 may be computed on a per voxel basis, while other image features may be computed on a region of interest basis. An example non-imaging input 121B that may be used with the acquired image 121A may contain data from a laboratory system, patient reported symptoms, or patient history.
As described above, the image features 122 and non-imaging inputs may be used by the analyzer module 120 to calculate the biological properties/analytes 123. Notably, biological properties/analytes are generally quantitative, objective properties (e.g., objectively verifiable rather than expressed as impressions or appearances) that may indicate the presence and extent of, for example, markers (e.g., chemicals) or other measures such as structure, size, or anatomical characteristics of a region of interest. In an example embodiment, the quantified biological property/analyte 123 may be displayed or derived for direct consumption by a user, e.g., by a clinician, in addition to or independent of further processing by the analyzer module.
In an example embodiment, one or more of the quantified biological properties/analytes 123 may be used as an input for determining a phenotype. Phenotypes are typically defined in a disease-specific manner, are independent of imaging, and are typically extracted from ex vivo pathophysiological samples that are documented with the expected outcome. In an example embodiment, the analyzer module 120 may also provide the predictive results 125 for the determined phenotype.
It should be understood that example embodiments of the analyzer module 120 are further described herein with respect to specific examples following the general description of the system 100. In particular, the specific imaging characteristics, biological properties/analytes and pathology/phenotype are described with respect to a specific medical application, such as with respect to the vascular system or with respect to the respiratory system.
Still referring to Fig. 1, the cohort tool module 130 enables the definition of a cohort of patients for group analysis, e.g., based on a selected set of criteria related to the cohort study in question. An example cohort analysis may be directed to a group of patients enrolled in a clinical trial, e.g., where the patients are further grouped based on one or more arms of the trial, e.g., a treatment arm versus a control arm. Another type of cohort analysis may be directed to a set of subjects for which ground truth or a reference exists, and this type of cohort may be further broken down into a training set or "development" set and a test set or "holdout" set. The development set may be used to support training 112 of the algorithms and models within the analyzer module 120, and the holdout set may be used to support evaluation/validation 113 of the performance of the algorithms or models within the analyzer module 120.
With continued reference to FIG. 1, the trainer module 110 may be used to train 112 the algorithms and models within the analyzer module 120. In particular, the trainer module 110 may rely on ground truth 111 and/or reference annotations 114 in order to derive weights or models, for example, according to established machine learning paradigms or by informing algorithm developers. In an example embodiment, classification and regression models are employed that may be highly adaptive, e.g., capable of revealing complex relationships between predictors and responses. However, their ability to adapt to the underlying structure of the existing data may enable the models to find patterns that are not reproducible in another sample of subjects. Adaptation to irreproducible structure within existing data is commonly referred to as model overfitting. To avoid building an overfit model, a systematic approach can be applied that prevents the model from finding spurious structure and enables the end user to have confidence that the final model will predict new samples with a degree of accuracy similar to that achieved on the data set on which the model was evaluated.
Successive training sets may be used to determine one or more optimal tuning parameters, and test sets may be used to estimate the predictive performance of the algorithm or model. The training set may be used to train each of the classifiers through randomized cross-validation. The data set may be repeatedly split into training and test sets and used to determine classification performance and model parameters. The splitting of the data set into training and test sets may be performed using a stratified approach or a maximum dissimilarity approach. In an example embodiment, a resampling method (e.g., bootstrapping) may be used within the training set in order to obtain confidence intervals for (i) the optimal parameter estimates and (ii) the predictive performance of the models.
FIG. 2 outlines a resampling-based model building methodology 200 that may be utilized by the systems and methods of the present disclosure. First, at step 210, a set of tuning parameters may be defined. Next, at step 220, for each resampling of the data and each tuning parameter set, the model is fitted and the holdout samples are predicted. At step 230, the resampling estimates are combined into a performance profile. Next, at step 240, the final tuning parameters may be determined. Finally, at step 250, the model is re-fitted to the entire training set with the final tuning parameters. After each model has been tuned on the training set, each model may be evaluated for predictive performance on the test set. Test set evaluation occurs once for each model to ensure that the model building process does not overfit the test set. For each model built, the optimal tuning parameter estimates, the resampled training set performance, and the test set performance may be reported. The values of the model parameters over the random splits are then compared to evaluate model stability and robustness to the training data.
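One possible realization of steps 210-250, sketched with scikit-learn, is shown below; the estimator choice, parameter grid, split sizes, and the assumption of a binary endpoint (for the roc_auc score) are illustrative only.

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedShuffleSplit

def tune_and_refit(X_train, y_train):
    # repeated stratified splits stand in for the resampling of steps 210-230
    resampling = StratifiedShuffleSplit(n_splits=25, test_size=0.2, random_state=0)
    search = GridSearchCV(
        estimator=RandomForestClassifier(random_state=0),
        param_grid={"n_estimators": [100, 300], "max_depth": [3, 5, None]},
        cv=resampling,
        scoring="roc_auc",
    )
    # steps 220-250: fit each candidate, combine resample estimates into a
    # performance profile, pick the final tuning parameters, and refit on the
    # entire training set (GridSearchCV refits by default)
    search.fit(X_train, y_train)
    return search.best_params_, search.best_score_, search.best_estimator_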
In accordance with the systems and methods of the present disclosure, multiple models may be tuned for each of the biological properties/analytes (e.g., tissue types) represented in the ground truth maps. The models may include, for example, covariance-based techniques, non-covariance-based techniques, and tree-based models. Depending on their construction, endpoints may have continuous or categorical responses; some of the techniques in the above categories are used for both categorical and continuous responses, while others are specific to either categorical or continuous responses. The optimal tuning parameter estimates, the resampled training set performance, and the test set performance may be reported for each model.
Table 1:
Table 1 above provides an overview of some example functions of the analyzer module 120 of the system 100. That is, the analyzer module 120 may be configured to delineate the field, e.g., to register multiple data streams across the field; to segment organs, vessels, lesions, and other application-specific objects; and/or to reformat/reconfigure the anatomy for a particular analysis. The analyzer module 120 may be further configured to delineate a target, e.g., a lesion, within the delineated field. Delineating the target may include, for example: registering the plurality of data streams at the regional level; performing fine-grained segmentation; measuring the size and/or other characteristics of the relevant anatomical structures; and/or extracting whole-target features (e.g., biological properties/analytes characteristic of the entire target region). In some embodiments, one or more sub-target regions may also be delineated, e.g., a target region may be split into sub-targets (e.g., biological properties/analytes of the sub-target regions) according to a particular application, with sub-target-specific calculations. The analyzer module 120 may also delineate, for example, components or relevant features (e.g., composition) in a particular field, target, or sub-target region. This may include segmenting or re-segmenting the components/features, calculating values for the segmented components/features (e.g., biological properties/analytes characteristic of the components/features), and assigning a probability map to the readings. Pathology may next be determined based on the quantified biological properties/analytes and characterized, for example, by determining the phenotype and/or predicted outcome of the pathology. In some embodiments, the analyzer module 120 may be configured to compare data across multiple time points, e.g., one or more of the biological properties/analytes may relate to time-based quantification. In further embodiments, a wide scan field may be utilized to assess multifocal pathologies, e.g., based on aggregate quantification of biological properties/analytes across multiple targets in the delineated field. Finally, based on the foregoing analyses, the analyzer module 120 may be configured to generate a patient report.
A sample patient report 300 is depicted in fig. 3. As shown, the sample patient report 300 may contain the quantification of biological parameters/analytes, as related to the structure 310 and composition 320, as well as data from non-imaging sources, such as hemodynamics 330. The sample patient report may further comprise visualizations 340, e.g., 2D and/or 3D visualizations of imaging data and combined visualizations of non-imaging data such as hemodynamic data superimposed on the imaging data. Various analyses 350 may be displayed for assessing biological parameters/analytes, including, for example, visualizations of one or more models (e.g., decision tree models) for determining/characterizing pathologies. Patient context and identification information may further be included. Accordingly, the analyzer module 120 of the system 100 may advantageously provide a user, such as a clinician, with integrated feedback for evaluating a patient.
Advantageously, the systems and methods of the present disclosure may be adapted for specific applications. Example vascular and pulmonary applications are described in more detail in the following sections (although it should be understood that the specific applications described illustrate principles that carry over to many other applications). Table 2 provides an overview of vessel- and lung-related applications utilizing a hierarchical analysis framework as described herein.
Table 2:
the following sections provide specific examples of quantitative biological properties/analytes that may be used by the systems and methods of the present disclosure with respect to vascular applications:
The anatomical structure: Vascular structure measurements, particularly measurements that result in the determination of % stenosis, have long been and remain the single most common measurements in patient care. These measurements were initially limited to the inner lumen, rather than to the wall, which involves both the inner and outer surfaces of the vessel wall. However, all of the major non-invasive modalities, unlike X-ray angiography, can resolve the vessel wall and, in doing so, make extended measurements achievable. The categories are extensive and the objects measured have varying sizes, so care should be taken in generalizing. The main consideration is spatial sampling or resolution limitations. However, by utilizing subtle changes in intensity levels due to partial volume effects, the minimum detectable change in wall thickness may be lower than the spatial sampling would suggest. Furthermore, the prescribed resolution generally refers to the grid size and field of view reconstructed after acquisition rather than the actual resolving power of the imaging protocol, which determines the minimum feature size that can be resolved. Likewise, the in-plane and through-plane resolutions may or may not be the same, and the size of a given feature, as well as its scale and shape, will drive measurement accuracy. Last but not least, in some cases classification conclusions are drawn by applying thresholds to the measurements, which can then be interpreted according to signal detection theory with the ability to optimize the trade-off between sensitivity and specificity (terms that do not otherwise refer to measurements in the usual sense).
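As a small illustration of the stenosis measurands discussed here (and of the diameter-versus-area distinction elaborated later in this section), percent stenosis can be computed from either a diameter ratio or an area ratio; the numbers below are hypothetical.

```python
# Sketch: % stenosis computed from diameter versus from cross-sectional area.
import math

reference_diameter_mm = 6.0          # hypothetical normal lumen diameter
minimal_diameter_mm = 3.0            # hypothetical diameter at the lesion
diameter_stenosis = 100 * (1 - minimal_diameter_mm / reference_diameter_mm)

reference_area = math.pi * (reference_diameter_mm / 2) ** 2
minimal_area = 5.5                   # hypothetical measured area (mm^2), non-circular lumen
area_stenosis = 100 * (1 - minimal_area / reference_area)

print(f"diameter stenosis: {diameter_stenosis:.0f}%   area stenosis: {area_stenosis:.0f}%")
```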
Tissue characteristics: Quantitative assessment of the individual component parts of atherosclerotic plaque, including, for example, lipid-rich necrotic core (LRNC), fibrosis, intraplaque hemorrhage (IPH), permeability, and calcification, can provide critical information about the relative structural integrity of the plaque that can assist the physician's decisions during medical or surgical therapy. From an imaging technology perspective, the ability to do so depends less on spatial resolution than on contrast resolution and tissue discrimination, which are made possible by tissues responding differently to the incident energy and thereby producing different received signals. Each imaging modality does so to some extent: for example, "acoustic transparency" in ultrasound, CT number in Hounsfield units, and differential MR enhancement that varies with sequence, such as (but not limited to) T1 and T2.
Dynamic tissue behavior (e.g., permeability): In addition to morphological features of the vessel wall/plaque, it is increasingly recognized that dynamic features are valuable quantitative indicators of vessel pathology. Dynamic sequences, in which acquisition is performed at multiple closely spaced times (referred to as phases), expand the data set beyond spatially resolved values to include time-resolved values that can be used in compartment modeling or other techniques to determine the dynamic response of tissue to a stimulus, such as, but not limited to, wash-in and wash-out of contrast agent. By using dynamic contrast-enhanced imaging with ultrasound or MR in the carotid artery, or delayed contrast enhancement in the coronary artery, an assessment of the relative permeability (e.g., the Ktrans and Vp parameters from kinetic analysis) of the neovascularized microvascular network within the plaque of interest can be determined. In addition, these dynamic series may also assist in differentiating between increased vascular permeability and intraplaque hemorrhage.
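One standard way to obtain Ktrans and Vp from such a dynamic series is the Patlak linearization; it is shown here only as an illustrative kinetic-analysis option, not necessarily the method of the present disclosure, and all curves and parameter values are simulated placeholders.

```python
# Sketch: estimate Ktrans and vp from a dynamic contrast-enhanced series using
# the (extended) Patlak linearization, one standard kinetic-analysis option.
import numpy as np

t = np.linspace(0, 300, 60)                       # s, hypothetical acquisition times
Cp = 5.0 * (t / 60.0) * np.exp(-t / 80.0)         # hypothetical arterial input function
true_ktrans, true_vp = 0.12 / 60.0, 0.03          # per-second Ktrans, plasma fraction
integral_Cp = np.concatenate([[0.0], np.cumsum(0.5 * (Cp[1:] + Cp[:-1]) * np.diff(t))])
Ct = true_ktrans * integral_Cp + true_vp * Cp     # simulated tissue curve
Ct += np.random.default_rng(4).normal(0, 0.01, Ct.shape)

# Patlak: Ct = Ktrans * integral(Cp) + vp * Cp  ->  linear least squares in two unknowns
A = np.column_stack([integral_Cp, Cp])
ktrans_fit, vp_fit = np.linalg.lstsq(A, Ct, rcond=None)[0]
print(f"Ktrans ~ {ktrans_fit * 60:.3f} /min, vp ~ {vp_fit:.3f}")
```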
Hemodynamics: the basic hemodynamic parameters of circulation have a direct impact on vascular lesions. Blood pressure, blood flow velocity and vessel wall shear stress can be measured by techniques ranging from very simple oscillatory methods to complex imaging analysis. Using the general principles of fluid dynamics, calculations of vessel wall shear stress for different regions of the wall can be determined. In a similar manner, MRI, with or without US, has been used to calculate Wall Shear Stress (WSS) and correlate the results with structural changes in the vessel of interest. In addition, short and long term studies have been conducted on the effects of antihypertensive drugs on hemodynamics.
Thus, in an example embodiment, applying key aspects of the systems and methods of the present disclosure in vascular settings may involve evaluating plaque structure and plaque composition. Evaluating plaque structures may advantageously include, for example, lumen measurements (which improve stenosis measurements by providing area rather than diameter-only measurements) and wall measurements (e.g., wall thickness and vascular remodeling). Evaluating plaque composition may advantageously involve quantification of tissue properties (e.g., lipid core, fibrosis, calcification, permeability, etc.) rather than just "soft" or "hard" designations as commonly found in the art. Tables 3 and 4 below describe example structural and tissue property calculations, respectively, that may be utilized by the vascular applications of the systems and methods of the present disclosure.
Table 3: structural calculation of vascular anatomy supported by vascular applications of the systems and methods disclosed herein
An example system for evaluating the vascular system may advantageously comprise/employ algorithms for evaluating vascular structure. Thus, the system may employ, for example, a target/vessel segmentation/cross-section model for segmenting the underlying structure of the imaged vessel. Advantageously, a fast-marching competition filter may be applied to individual vessel segments. The system may be further configured to handle vessel bifurcations. Image registration may be applied using a Mattes mutual information metric (MR) or a mean squared error metric (CT), a rigid versor transform, an LBFGSB optimizer, and the like. As described herein, vessel segmentation may advantageously comprise lumen segmentation. The initial lumen segmentation may utilize a confidence-connected filter (e.g., for the carotid, vertebral, femoral arteries, etc.) to distinguish the lumen. Lumen segmentation may utilize MR imaging (e.g., a combination of normalized, e.g., inverted, dark contrast images) or CT imaging (e.g., using registered pre-contrast and post-contrast CT and a 2D Gaussian distribution) to define a vesselness function. The resulting connected components may be analyzed and thresholding may be applied. Vessel segmentation may further require outer wall segmentation (e.g., using minimum curvature (k2) flow to account for lumen irregularities). In some embodiments, the edge potential map is calculated as an outward-downward gradient in both contrast and non-contrast images. In an example embodiment, the outer wall segmentation may utilize a cumulative distribution function in the speed function (e.g., merging prior distributions of wall thickness, e.g., from 1-2 adjacent levels) to allow a median thickness to be used in the absence of any other edge information. In an example embodiment, the Feret diameter may be employed for vessel characterization. In further embodiments, the wall thickness may be calculated as the sum of the distance to the lumen plus the distance to the outer wall. In further embodiments, the lumen and/or wall segmentation may be performed using semantic segmentation, for example using a CNN.
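A minimal sketch of components of this kind of pipeline (a confidence-connected lumen seed grower plus rigid registration with a Mattes mutual information metric, a versor rigid transform, and an LBFGSB optimizer) is shown below using SimpleITK; the file names, seed location, and parameter values are hypothetical placeholders and the sketch is not the disclosed implementation.

```python
# Sketch only: confidence-connected lumen growing plus rigid registration with
# Mattes mutual information, a versor rigid transform, and the LBFGSB optimizer.
# For same-modality CT/CT registration, a mean-squares metric could be substituted.
import SimpleITK as sitk

fixed = sitk.ReadImage("fixed_series.nii.gz", sitk.sitkFloat32)    # hypothetical path
moving = sitk.ReadImage("moving_series.nii.gz", sitk.sitkFloat32)  # hypothetical path

# Initial lumen segmentation via a confidence-connected filter from a seed in the lumen.
seed = [(128, 140, 60)]                      # hypothetical voxel index inside the lumen
lumen = sitk.ConfidenceConnected(fixed, seedList=seed,
                                 numberOfIterations=3, multiplier=2.0,
                                 initialNeighborhoodRadius=2, replaceValue=1)

# Rigid registration of the moving series onto the fixed series.
reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsLBFGSB()
reg.SetInterpolator(sitk.sitkLinear)
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.VersorRigid3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)
reg.SetInitialTransform(initial, inPlace=False)
transform = reg.Execute(fixed, moving)

moving_resampled = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
sitk.WriteImage(lumen, "lumen_mask.nii.gz")
```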
The example system for evaluating the vascular system may further advantageously analyze vessel composition. For example, in some embodiments, composition may be determined based on image intensity and other image characteristics. In some embodiments, lumen shape may be utilized, for example, in connection with determining thrombosis. Advantageously, an analyte blob model may be employed to better analyze the composition of particular sub-regions of a vessel. An analyte blob is defined as a spatially contiguous region, in a 2D, 3D, or 4D image, of a single class of biological analyte. The blob model may utilize an anatomically aligned coordinate system, for example using contour lines at a normalized radial distance from the luminal surface to the adventitial surface of the vessel wall. The model may advantageously identify one or more blobs and analyze each blob's location, for example, relative to the overall vascular structure as well as relative to other blobs. In an example embodiment, the relative positions of the blobs may be modeled using a hybrid Bayesian/Markovian network. The model may advantageously treat the image intensity observed at a pixel or voxel as being affected by a local neighborhood of hidden analyte class nodes, thus accounting for partial volume effects and the scanner point spread function (PSF). The model may further allow the analyte blob boundaries to be described dynamically from the analyte probability map during inference by the analyzer module. This is a major difference from typical machine vision methods, such as superpixel methods, which pre-compute small regions to be analyzed but cannot dynamically adjust these regions. An iterative inference procedure can be applied that utilizes current estimates of both the analyte probabilities and the blob boundaries. In some embodiments, probability density estimation between sparse data points used to train the model may be achieved using parametric modeling assumptions or kernel density estimation methods.
Described herein are novel models for classifying the composition of vascular plaque components that eliminate the need for histology-to-radiology registration. The model still utilizes expert-annotated histology as a reference standard, but training of the model does not require registration with the radiological imaging. The multiscale model computes statistics for each contiguous region of a given analyte type, and such a region may be referred to as a "blob". In a cross-section through the vessel, the wall is defined by two boundaries, an inner boundary at the lumen and an outer boundary at the outer surface of the vessel wall, creating a doughnut-like shape in cross-section. Within the doughnut-shaped wall region, there is a discrete number of blobs (distinct from the default background class of normal wall tissue, which is not considered a blob). The number of blobs is modeled as a discrete random variable. A label for the analyte type is then assigned to each blob, and various shape descriptors are computed. In addition, the blobs are considered in pairs. Finally, within each blob, each pixel produces a radiological imaging intensity value that is modeled as an independent and identically distributed (i.i.d.) sample from a continuously estimated distribution specific to each analyte type. Note that in this last step, the parameters of the imaging intensity distribution are not part of the training process.
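To make the notion of per-region statistics concrete, the following sketch extracts contiguous same-label regions ("blobs") from a hypothetical labeled cross-section and computes simple per-blob measures; it is an illustration of the bookkeeping only, not the multiscale statistical model itself.

```python
# Sketch: extract contiguous same-label regions ("blobs") from a labeled 2D
# vessel-wall cross-section and compute simple per-blob statistics.
import numpy as np
from scipy import ndimage

CLASSES = {1: "CALC", 2: "LRNC", 3: "FIBR", 4: "IPH"}   # 0 = background wall tissue

def blob_statistics(label_map):
    """label_map: 2D int array of per-pixel analyte class labels."""
    blobs = []
    for value, name in CLASSES.items():
        mask = (label_map == value)
        components, n = ndimage.label(mask)          # contiguous regions of one class
        for idx in range(1, n + 1):
            region = (components == idx)
            area = int(region.sum())
            cy, cx = ndimage.center_of_mass(region)
            blobs.append({"class": name, "area_px": area, "centroid": (cy, cx)})
    return blobs

cross_section = np.zeros((64, 64), dtype=int)        # hypothetical slice
cross_section[20:25, 30:40] = 2                      # an LRNC-like blob
cross_section[40:43, 10:14] = 1                      # a CALC-like blob
for b in blob_statistics(cross_section):
    print(b)
print("number of blobs:", len(blob_statistics(cross_section)))
```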
One key feature of this model is that it takes into account the spatial relationships of the analyte blobs within the vessel and with respect to each other, recognizing that point-by-point image features (whether based on histology and/or radiology) are not the only source of information that experts use to determine plaque composition. Although the model allows training without explicit histology-to-radiology registration, it can also be applied to situations where such registration is known. It is believed that statistical modeling of the spatial layout of the constituent parts of atherosclerotic plaque in order to classify unseen plaques is a novel concept.
Example techniques for estimating vessel wall composition from CT or MR images are further set forth in the following sections. In particular, the method may employ a multi-scale bayesian analysis model. The basic bayesian formula is as follows:
P(A | I) = P(I | A) · P(A) / P(I)
In the context of the present disclosure, the prior is based on a multi-scale vessel wall analyte map A, and the observations are the CT or MR image intensity information I.
As depicted in FIG. 4, the multi-scale vessel wall analyte map may advantageously include a wall-level segmentation 410 (e.g., a cross-sectional slice of a vessel), a blob-level segmentation, and a pixel-level segmentation 430 (e.g., based on individual image pixels). For example, A = (B, C) can be defined as the map of vessel wall class labels (similar to a graph with vertices B and edges C), where B is the set of blobs (contiguous cross-sectional regions of non-background wall that share a label) and C is the set of blob pairs or pairings. B_b can be defined as a generic single blob, where b ∈ [1..n_B] is an index over all blobs in A, and B_b^a is a blob with label a. For statistical purposes, the individual blob descriptor operator D_B{ } maps a blob into a low-dimensional space. C_c can be defined as a blob pair, where c ∈ [1..n_B(n_B−1)/2] is an index over all blob pairs in A, and C_c^{f,g} is a blob pair with labels f and g. For statistical purposes, the blob-pair descriptor operator D_C{ } likewise maps into a low-dimensional space. A(x) = a can be defined as the class label of pixel x, where a ∈ {'CALC', 'LRNC', 'FIBR', 'IPH', 'background'} (combination classes may also be defined). In an example embodiment, I(x) is the continuously valued pixel intensity at pixel x. Within each blob, I(x) is modeled independently. Note that, because the model is used to classify wall composition in 3D radiological images, the word "pixel" is used generically to represent both 2D pixels and 3D voxels.
The characteristics of blob regions of similar composition/structure may advantageously provide insight into the course of the disease. Each slice (e.g., cross-sectional slice) of the vessel may advantageously contain a plurality of blobs. The relationships between the blobs can be evaluated in a pairwise fashion. The number of blobs within a cross-section is modeled as a discrete random variable and may also have quantifiable significance. At the slice level of the segmentation, the relevant properties (e.g., biological properties/analytes) may comprise the total number of blobs and/or the number of blobs of a particular structural/compositional classification, as well as relationships between the blobs, e.g., spatial relationships such as one blob lying closer to the interior. At the blob level of the segmentation, characteristics of each blob, such as structural characteristics (e.g., size and shape) and compositional characteristics, may be evaluated as biological properties/analytes. Finally, at the pixel level of the segmentation, individual pixel-level analysis may be performed, for example, based on the image intensity distribution.
Probability mapping of the characteristics may be applied with respect to the multi-scale vessel wall analyte map depicted in FIG. 4. The probability map may advantageously establish a probability vector for each pixel, with one component of the vector for the probability of each class of analyte and one component for the probability of background tissue. In an example embodiment, a set of probability vector components may represent mutually exclusive characteristics. Thus, each set of probability vector components representing mutually exclusive characteristics will sum to 1. For example, in some embodiments, it may be known that a pixel should fall into one and only one component category (e.g., a single coordinate of a vessel cannot be both fibrous and calcified). It should be particularly noted that the probability mapping does not assume independence of analyte classes between pixels. This is because neighboring pixels, or pixels within the same blob, may generally have the same or similar characteristics. Thus, as described in more detail herein, the probability mapping advantageously accounts for dependencies between pixels.
f(A = α) can be defined as the probability density of the map A; f(A) is the probability distribution function over all vessel walls. f(D_B{B^a} = β) is the probability density of the descriptor vector β for a blob with label a, and f(D_B{B^a}) is the probability density function (pdf) of the blob descriptor with label a; for each value of a there is one such pdf. f(B) = Π f(D_B{B^a}). f(D_C{C^{f,g}} = γ) is the probability density of the pairwise descriptor vector γ with labels f and g, and f(D_C{C^{f,g}}) is the pdf of the paired blob descriptors; for each ordered pair (f, g) there is one such pdf. Thus:
f(C) = Π f(D_C{C_c})
f(A) = f(B) · f(C) = Π f(D_B{B_b}) · Π f(D_C{C_c})
P(A(x) = a) is the probability of pixel x having label a. P(A(x)) is the probability mass function (pmf) of the analytes (prevalence). It can be viewed as a vector of probabilities at a particular pixel x, or as a probability map for a particular class label value.
Note that: f(A) = P(N) · f(C) · f(B) = P(N) · Π f(C_c) · Π f(B_b)
f(C_c = γ) is the probability density of the pairwise descriptor vector γ, and f(C_c) is the pdf of the blob-pair descriptors. f(B_b = β) is the probability density of the descriptor vector β, and f(B_b) is the pdf of the blob descriptors. P(A(x) = a) is the probability of pixel x having label a. P(A(x)) is the pmf of the analytes (prevalence in the given map); it can be viewed as a vector of probabilities at a particular pixel x, or as a spatial probability map of a particular analyte type. P(A(x) = a | I(x) = i), the probability of an analyte given the image intensity, is the primary target of the calculation. P(I(x) = i | A(x) = a) is the distribution of image intensity for a given analyte.
FIG. 5 depicts an exemplary pixel-level probability mass function as a set of analyte probability vectors. As mentioned above, the following assumptions can inform the probability mass function. Completeness: in an example embodiment, it may be assumed that a sufficiently small pixel must fall into at least one of the analyte classes (including the general "background" class), and thus the probabilities sum to 1. Mutual exclusivity: it can be assumed that a sufficiently small pixel belongs to only one class of analyte; if combinations occur (e.g., acicular calcification within LRNC), new combination classes can be created to maintain mutual exclusivity. Non-independence: it can be assumed that each pixel is highly dependent on its neighbors and on the overall structure of A.
An alternative view of the analyte map is a spatial map of the probability of a given analyte. At any given point during the inference, the analyte blobs can be defined using the full-width-at-half-maximum rule. Using this rule, for each local maximum of an analyte's probability, the region is grown outward to a lower threshold of half the local maximum value. Note that this 50% value is an adjustable parameter. Spatial regularization of the blobs can be performed by applying some curvature evolution to the probability map to keep the boundaries realistic (smooth, with few topological holes). Note that different putative blobs for different analyte classes may in general overlap spatially, since they all represent alternative hypotheses for the same pixels until the probabilities are collapsed; hence the modifier "putative".
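The full-width-at-half-maximum rule can be sketched directly: find the local maxima of one analyte's probability map and grow each putative blob out to the connected region above half of its local maximum. The probability map and the 50% threshold below are illustrative.

```python
# Sketch of the full-width-at-half-maximum rule for defining putative blobs
# from a single analyte's spatial probability map.
import numpy as np
from scipy import ndimage

def putative_blobs(prob_map, half=0.5):
    """Return a list of boolean masks, one per putative blob."""
    # Local maxima: pixels equal to the max of their 3x3 neighborhood.
    neighborhood_max = ndimage.maximum_filter(prob_map, size=3)
    maxima = (prob_map == neighborhood_max) & (prob_map > 0)
    blobs = []
    for y, x in zip(*np.nonzero(maxima)):
        threshold = half * prob_map[y, x]            # tunable 50% rule
        above, _ = ndimage.label(prob_map >= threshold)
        blobs.append(above == above[y, x])           # region containing this maximum
    return blobs

yy, xx = np.mgrid[0:32, 0:32]
prob = 0.9 * np.exp(-((yy - 12) ** 2 + (xx - 12) ** 2) / 20.0)  # hypothetical bump
for mask in putative_blobs(prob):
    print("putative blob size (pixels):", int(mask.sum()))
```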
When the iterative inference terminates, there are several options for representing the result. First, the continuously estimated probability maps can be presented directly to the user in one of several forms, including, but not limited to, surface plots, contour plots, or image fusion similar to visualizing PET values as changes in hue and saturation on top of the CT. A second alternative is to collapse the probability map at each pixel by selecting a single analyte label for each pixel. This can be done most directly by selecting the maximum a posteriori value independently at each pixel, thereby creating a visualized classification map by assigning a different color to each analyte label and overlaying it with full or partial opacity on top of the radiological image. Under this second alternative, rather than assigning label values independently, overlapping putative blobs may also be distinguished based on the prior probability of each blob; thus, at a given pixel belonging to a higher-probability blob, an analyte with lower pointwise probability may nonetheless be used for the label.
FIG. 6 illustrates a technique for calculating putative analyte blobs. In an example embodiment, the putative blobs may have overlapping regions. It may therefore be advantageous to apply an analysis technique for segmenting pixels into putative blobs. The local maxima of the probability of a given analyte are determined, and the full-width-at-half-maximum rule can then be applied to determine the discrete blobs. In any given iteration of the inference, the full-width-half-maximum rule can be used to define the analyte blobs: a local maximum is found and the region is then grown with a lower threshold of 0.5 times the maximum. (The 50% value may be a tunable parameter.) In some embodiments, spatial regularization of the blobs may also be applied, for example, by applying some curvature evolution to the probability map in order to keep the boundaries smooth and avoid holes. Note that, at this stage, different putative blobs of different analyte classes may in general overlap spatially, since these do not represent alternative hypotheses until the probabilities collapse. An image-level analyte map is then computed, e.g., by collapsing the probability map function. Note that this collapse can be determined based on the pixel-level analyte probability map, the putative blobs, or a combination of both. With respect to the pixel-level analyte probability map, the collapse may be determined by selecting, for each pixel, the label with the largest probability: arg max_a P(A(x) = a). This is similar to the Viterbi algorithm. Basically, the highest probability of each set of mutually exclusive probabilities is locked in (e.g., with analyte priority breaking possible ties), and all other probabilities in the set may then be set to zero. In some embodiments, the probabilities of neighboring pixels/regions may be considered when collapsing data at the pixel level. With respect to putative blob-level collapse, overlapping putative blobs may be resolved. In some embodiments, prioritization may be based on the blob probability density f(D_B{B_b^a} = d). This may affect the analysis of the blob-level characteristics, since higher-probability blobs may change the shape of overlapping lower-probability blobs. In an example embodiment, rather than collapsing the data, the entire range of probabilities may be maintained.
To model the relative spatial positioning of the blobs within the vessel wall, an appropriate coordinate system may be selected to provide rotational, translational, and scale invariance between different images. These invariances are important for the model because, under the assumption that the atherosclerotic process is similar across different vascular beds, they allow training on one type of vessel (e.g., the carotid arteries, where endarterectomy samples are readily available) and application of the model to other vascular beds (e.g., the coronary arteries, where plaque samples are not typically readily available). For tubular objects, the natural coordinate system follows the vessel centerline, with the distance along the centerline providing the longitudinal coordinate and each plane perpendicular to the centerline having polar coordinates of radial distance and angle. However, due to the variability of vessel wall geometry, especially in the diseased patients who may be targeted for analysis, an improved coordinate system may be utilized. The longitudinal distance is calculated along the centerline, or along interpolated perpendicular planes, in such a way that each 3D radiological image pixel is given a value. For a given blob, the proximal and distal planes perpendicular to the centerline are each used to create an unsigned distance map on the original image grid, denoted P(x) and D(x) respectively, where x represents the 3D coordinate. The distance map l(x) = P(x)/(P(x) + D(x)) represents the relative distance along the plaque, with a value of 0 at the proximal plane and a value of 1 at the distal plane. The direction of the l axis is determined by ∇l(x).
Because the geometry of the wall may be significantly non-circular, the radial distance may be defined based on the shortest distance to the inner luminal surface and the shortest distance to the adventitial surface. Expert annotation of the histological images includes regions defining the lumen and the vessel (defined as the union of the lumen and the vessel wall). A signed distance function may be created for each of these, L(x) and V(x) respectively. The convention is that the interior of these regions is negative, so that within the wall L is positive and V is negative. The relative radial distance is calculated as r(x) = L(x)/(L(x) − V(x)). It has a value of 0 at the luminal surface and a value of 1 at the adventitial surface. The direction of the r axis is determined by ∇r(x).
Due to the non-circular wall geometry, the normalized tangential distance t can be defined as lying along the contours of r (and of l, if processed in 3D). The direction of the t axis is determined by ∇r × ∇l. The convention is that histological sections are assumed to be viewed in the proximal-to-distal direction, so that positive l points into the image. Note that t, unlike the other coordinates, does not have a natural origin because it wraps around the vessel onto itself. Thus, the origin of this coordinate may be defined separately for each blob, relative to the centroid of that blob.
Another wall coordinate used is the normalized wall thickness. In a sense, this is a surrogate indicator of disease progression: a thicker wall is assumed to be due to more severe disease, and the statistical relationships of the analytes are assumed to change with more severe disease. The absolute wall thickness is easily calculated as w_abs(x) = L(x) − V(x). In order to normalize it to [0, 1], the maximum possible wall thickness can be determined as the case in which the lumen approaches zero size and is completely eccentric, approaching the outer surface. In this case, the maximum thickness equals the maximum Feret diameter of the vessel, D_max. Thus, the relative wall thickness is calculated as w(x) = w_abs(x)/D_max.
The degree to which the aforementioned coordinates may or may not be used in the model depends in part on the amount of training data available. When the training data is limited, several options are available. Different sections through each plaque may be treated as if they came from the same statistical distribution, with the relative longitudinal distance neglected. Plaque composition has been observed to vary along the longitudinal axis, with a more severe plaque appearance in the middle; however, this dimension can be collapsed as opposed to parameterizing the distributions by l(x). Similarly, the relative wall thickness may also be collapsed. It has been observed that certain analytes appear in the "shoulder" regions of the plaque, where w(x) has intermediate values; however, this dimension may also be collapsed until sufficient training data is available.
As described above, the vessel wall composition model may be utilized as an initial assumption (e.g., as the prior P(A)). FIG. 7 depicts the normalized vessel wall coordinates of an exemplary vessel wall composition model. In the depicted model, l is the relative longitudinal distance from proximal to distal along the vessel target, which may be calculated, for example, on the normalized interval [0, 1]. The longitudinal distance can be computed using two fast-marching propagations starting at the proximal and distal planes to compute the unsigned distance fields P and D, where l = P/(P + D). The l axis direction is ∇l. As depicted, r is the normalized radial distance, which may also be calculated on the normalized interval [0, 1] from the luminal surface to the adventitial surface. Thus, r = L/(L + (−V)), where L is the lumen signed distance field (SDF) and V is the vessel SDF. The r axis direction is ∇r. Finally, t is the normalized tangential distance, which can be calculated, for example, on the normalized interval [−0.5, 0.5]. Notably, in an example embodiment, there may not be a meaningful origin for the entire wall, only for individual analyte blobs (thus, the t origin may be at the blob centroid). The tangential distance is calculated along the contour curves of l and r. The t axis direction is ∇r × ∇l.
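A minimal sketch of the normalized radial coordinate r = L/(L + (−V)) on a single 2D cross-section follows, using Euclidean distance transforms to build the signed distance fields; the circular lumen and vessel masks are hypothetical stand-ins for real segmentations.

```python
# Sketch: normalized radial coordinate r in [0, 1] from the lumen surface (r=0)
# to the adventitial surface (r=1), built from signed distance fields.
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Negative inside the mask, positive outside (convention used in the text)."""
    return distance_transform_edt(~mask) - distance_transform_edt(mask)

yy, xx = np.mgrid[0:128, 0:128]
radius = np.hypot(yy - 64, xx - 64)
lumen_mask = radius < 15                       # hypothetical lumen
vessel_mask = radius < 30                      # hypothetical lumen + wall

L = signed_distance(lumen_mask)                # L > 0 within the wall
V = signed_distance(vessel_mask)               # V < 0 within the wall
wall = vessel_mask & ~lumen_mask

r = np.zeros_like(L)
r[wall] = L[wall] / (L[wall] - V[wall])        # r = L / (L + (-V))
print("r range inside the wall:", r[wall].min(), r[wall].max())
```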
FIG. 9 illustrates some complex vessel topologies that can be addressed using the techniques described herein. In particular, when processing CT or MR in 3D, different branches may advantageously be analyzed separately, such that relationships between analyte blobs in different branches are suitably ignored. Thus, if a segmentation view (cross-sectional slice) contains more than one lumen, this can be handled by performing a watershed transform on r to split the wall into domains belonging to each lumen, which can then be considered/analyzed separately.
As described above, many of the coordinate and probability measures described herein can be represented using normalized dimensions, thereby preserving dimensional invariance, e.g., between vessels of different sizes. Thus, under the assumption that the disease process is similar and proportional to different caliber vessels, the proposed model can advantageously be independent of absolute vessel size.
In some embodiments, the model may be configured to characterize concentric versus eccentric blobs. It is worth noting that a normalized wall thickness close to 1 can indicate where the lumen is highly eccentric. In further embodiments, inward plaque characterization may be performed versus outward plaque characterization. Notably, histological information about this property is hampered by distortion. Thus, in some embodiments, CT and training data may be utilized to establish an algorithm for determining inward versus outward plaque characterization.
As described above, in an example embodiment, non-imaging data, such as histological data, may be utilized as a training set for establishing the algorithms linking image features to biological properties/analytes. However, there are some differences between the data types that need to be resolved to ensure proper correlation. For example, the following difference between histology and imaging may affect the correlation: carotid endarterectomy (CEA) leaves the adventitia and some media behind in the patient, whereas CT or MR image analysis presumes to find the outer adventitial surface. (See, e.g., FIG. 8, which depicts the edge of the removed plaque of the histological sample relative to the outer vessel wall.) Notably, the scientific literature shows uncertainty as to whether calcification occurs in the adventitia. The following techniques may be employed to account for this difference. The histology can be dilated outward, for example, based on the assumption that little analyte is left in the remaining wall. Alternatively, the image segmentation may be eroded inward, e.g., based on knowledge of the typical or specific margin that remains. For example, an average margin may be utilized. In some embodiments, the average margin may be normalized to a percentage of the total diameter of the vessel. In further embodiments, histology may be used to mask the imaging (e.g., overlaid based on alignment criteria). In such embodiments, one or more transformations may need to be applied to the histological data to achieve proper alignment. Finally, in some embodiments, the difference may be neglected (which equates to uniformly scaling the removed plaque to the entire wall). Although this may induce some small error, it is expected that the remaining wall is thin compared to the plaque of a CEA patient.
There may also be longitudinal differences between the histological data (e.g., the training set) and the imaging data as represented by the vessel wall composition model. In an example embodiment, the longitudinal distance may be explicitly modeled/correlated. Thus, for example, the histological slice numbers (e.g., A-G) can be used to approximately determine the location within the excised portion of the plaque. However, this approach limits the analysis for other sections that do not have corresponding histological data. Alternatively, in some embodiments, all histological sections may be considered to be drawn from the same distribution. In an example embodiment, some limited regularization may still be employed along the longitudinal direction.
As mentioned above, the normalized wall thickness is, in a sense, an imperfect surrogate indicator of disease progression. In particular, a thicker wall is assumed to be due to more severe disease, e.g., based on the assumption that the statistical relationships of the analytes change with more severe disease. The normalized wall thickness may be calculated as follows. The absolute wall thickness T_abs (in mm) can be determined as T_abs = L + (−V), where L is the lumen SDF, V is the vessel SDF, and D_max is the maximum Feret diameter (in mm) of the vessel. The relative wall thickness T may then be calculated as T = T_abs/D_max, e.g., on the interval [0, 1], where 1 indicates the thickest possible wall for a vanishingly small, completely eccentric lumen. In an example embodiment, the probabilities may be adjusted based on wall thickness, for example, such that the distribution of analyte blobs depends on the wall thickness. This can advantageously model differences in analyte composition during disease progression.
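Similarly, the relative wall thickness T = T_abs/D_max can be sketched as below; the maximum Feret diameter is computed here by brute force over the vessel mask, and the masks are again hypothetical.

```python
# Sketch: relative wall thickness T = (L + (-V)) / D_max, where D_max is the
# maximum Feret (caliper) diameter of the vessel cross-section.
import numpy as np
from scipy.ndimage import distance_transform_edt
from scipy.spatial.distance import pdist

def signed_distance(mask):
    return distance_transform_edt(~mask) - distance_transform_edt(mask)

def max_feret_diameter(mask, spacing=1.0):
    pts = np.argwhere(mask) * spacing
    return pdist(pts).max()          # brute force; a convex hull would be faster

yy, xx = np.mgrid[0:128, 0:128]
lumen_mask = np.hypot(yy - 70, xx - 64) < 12      # eccentric hypothetical lumen
vessel_mask = np.hypot(yy - 64, xx - 64) < 30

L, V = signed_distance(lumen_mask), signed_distance(vessel_mask)
wall = vessel_mask & ~lumen_mask
t_abs = L[wall] + (-V[wall])                      # absolute thickness per wall pixel
T = t_abs / max_feret_diameter(vessel_mask)       # normalized toward [0, 1]
print("relative wall thickness range:", T.min(), T.max())
```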
FIG. 10 depicts exemplary analyte blobs represented with distributions over the normalized vessel wall coordinates. Specifically, the origin of t is located at the centroid of the blob. The (r, t) coordinates are random vectors, and the position/shape is completely represented by the joint distribution of the points inside the blob. This can be simplified by considering the marginal distributions (since the radial and tangential shape characteristics appear to be relatively independent). The marginal distributions can be calculated as projections along r and t (note that the l and t coordinates can also be considered). Notably, the marginal distribution in the radial direction may advantageously represent/characterize plaque growth in concentric layers (e.g., the medial, adventitial, and intimal layers). Similarly, the marginal distribution in the tangential direction may advantageously represent growth factors that may indicate the stage of disease. In an example embodiment, the analyte blob descriptors may be calculated based on the marginal distributions. For example, one may compute low-order statistics of the marginal distributions (either using a histogram or by fitting a parametric probability distribution function).
In an example embodiment, the following analyte spot descriptors may be used, for example, to capture the position, shape, or other structural characteristics of individual spots:
-position in normalized vessel coordinates
Mainly with respect to r
For example, to facilitate differentiation between shallow/deep calcifications
Neglected in the t direction; [optionally modeled in the l direction]
Degree on normalized vascular coordinates
The word "size" is intentionally avoided since it implies an absolute measurement, whereas the degree is a normalized value
-asymmetry, which is used to denote the degree of asymmetry in the distribution
The clinical significance is not clear, but it can help regularize shapes against incredible asymmetric shapes
-alignment, representing the degree of conformance to parallel tissue layers
The analyte spots appear to remain well within the radial layer (contours of r), so this will help to select similar image-processed shapes
Wall thickness at which the spot is located
Thick (e.g., heavy) plaque is assumed to have different statistics than thin plaque
In some embodiments, paired blob descriptors may also be utilized. For example:
-relative position
For example, if fibrosis is on the luminal side of the LRNC
Relative degree of
How thick/wide the fibrosis is relative to the LRNC, for example
-surround
How close one edge projection is to the middle of the other edge
For example, napkin ring signs or fibrosis around LRNC
Relative wall thickness
Degree to represent "shoulder" (shoulder will be relatively less thick compared to central plaque body)
Note that higher-order interactions may also be implemented (e.g., between three blobs, or between two blobs and another feature). However, diminishing returns and training-data limits should be considered.
Example quantifications may be defined for both the individual blob descriptors and the paired blob descriptors listed above.
Notably, a set of descriptors (e.g., 8-12 descriptors) forms a finite shape space in which a blob lies. The distribution of a group of blobs can then be considered as a distribution in this finite space. FIG. 11 depicts an exemplary distribution of blob descriptors. In an example embodiment, the distribution of blob descriptors may be computed over the entire training set. In some embodiments, low-order statistics (assuming independence) may be utilized on the individual blob descriptors, e.g., for location: E[α_r], Var[α_r]. In other embodiments, a multidimensional Gaussian (mean vector + covariance matrix) may be used to model the descriptors (e.g., where independence is not assumed). In further embodiments, if the distribution is non-normal, it may be modeled with a density estimation technique.
As described above, the number of blobs per cross-section (or the number of each class) may also be modeled, e.g., η counts blobs without regard to analyte class, and η_i counts the number of blobs of each analyte class. FIG. 14 depicts the frequency distribution of the total number of blobs per histological slide, with a Poisson fit applied as an overlay. Note that the analysis of FIG. 14 considers the number of blobs per cross-section N without regard to analyte class (the number of blobs per analyte type is denoted by B).
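Estimating these priors from a training set can be sketched as fitting a multivariate Gaussian to the stacked descriptor vectors and a Poisson rate to the per-slice blob counts; the arrays below are synthetic placeholders for real training data.

```python
# Sketch: estimate the prior distributions used by the model -- a multivariate
# Gaussian over blob descriptor vectors and a Poisson rate for blobs per slice.
import numpy as np
from math import lgamma, log, exp

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(120, 8))      # hypothetical: 120 training blobs, 8 descriptors
blob_counts = rng.poisson(2.0, size=300)     # hypothetical: blobs per training cross-section

mu = descriptors.mean(axis=0)                # mean vector of B ~ N(mu, Sigma)
sigma = np.cov(descriptors, rowvar=False)    # covariance matrix
lam = blob_counts.mean()                     # MLE of the Poisson rate, eta ~ Poisson(lam)

def poisson_pmf(n, lam):
    return exp(n * log(lam) - lam - lgamma(n + 1))

print("mu:", np.round(mu, 2))
print("lambda (blobs per slice):", round(lam, 2))
print("P(eta = 3):", round(poisson_pmf(3, lam), 3))
```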
Summarizing the above sections, in an example embodiment, the entire vessel wall composition model may contain the following:
Per-pixel analyte prior pmf
– P(A(x) = a_i) = ρ_i
Individual blob descriptors
– B_1 = (α_r, α_t, β_r, β_t, ε_r, ε_t, τ_t)
– B_1 ~ N(μ_1, Σ_1)
Paired blob descriptors
– C_2 = (α_rr, α_tt, β_rr, β_tt, ε_rr, ε_tt, τ_TT)
– C_2 ~ N(μ_2, Σ_2)
Number of blobs
– η ~ Poisson(λ_η)
wherein:
P(A(x) = a_i) = ρ_i
f(A_b) = f(B_b)
As described above, the imaging model may be used as the likelihood of the Bayesian analysis model (e.g., P(I | A)). A maximum likelihood estimate may then be determined. In an example embodiment, this may be done considering each individual pixel (e.g., without considering the prior probability of structure in the model). The estimated analyte map is then generally smooth only because the image is smooth (which is why pre-smoothing is typically not performed). An independent pixel-by-pixel analysis may be performed, for example, at least up to the point at which the scanner PSF is considered. The imaging model is used to account for imperfect imaging data. For example, imaging the small constituent parts of the plaque adds independent noise on top of the pixel values. In addition, partial volume effects and the scanner PSF are known to apply to small objects. Thus, given a model (e.g., a level set representation of an analyte region), it is simple and fast to simulate CT by applying a Gaussian blur for the PSF. The imaging model described herein may also be applied to determine (or estimate) the distribution of the true (un-blurred) densities of the different analytes. Notably, this cannot be obtained from typical imaging studies, since these have blurred image intensities. In some embodiments, a wide variance may be used to represent the uncertainty. Alternatively, the distribution parameters may be optimized from a training set, but the objective function would then have to be based on downstream readings (of the analyte regions), e.g., unless aligned histological data is available. FIG. 12 depicts an exemplary model for the imaging data (e.g., relating the hidden (categorical) state A(x) and the observed (continuous) state I(x)), thereby taking into account both random factors (e.g., the analyte density distribution H(A(x))) and deterministic factors (e.g., the scanner blur G(x)). The parameter set θ comprises the proportion and HU mean/variance of each analyte, (τ_1, μ_1, σ_1, ..., τ_N, μ_N, σ_N). Note that θ is patient specific and may be estimated in an expectation maximization (EM) manner, for example, with the analyte labels as latent variables and the image as the observed data.
E-step: determine the membership probabilities given the current parameters.
M-step: maximize the likelihood of the parameters given the membership probabilities.
FIG. 13 depicts a diagram of an example Markov model/Viterbi algorithm for relating the observed states to the hidden states in the imaging model. In particular, the diagram depicts the observed states (gray) (observed image intensity, I(x)) and the hidden states (white) (pure analyte intensity, H(A(x))), which can be modeled with an empirical histogram or with a Gaussian or boxcar probability distribution function. The PSF of the imaging system is modeled as a Gaussian G(x). Therefore,
I(x)=G(x)*H(A(x))
It should be noted that a Viterbi-like algorithm could be applied here, but with the convolution instead modeled as a Gaussian or uniform emission probability H.
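The forward model I(x) = G(x) * H(A(x)) can be sketched by sampling per-pixel densities from each analyte's intensity distribution H and blurring with a Gaussian approximation of the scanner PSF; the density parameters and PSF width below are illustrative assumptions, not calibrated values.

```python
# Sketch of the imaging model I(x) = G(x) * H(A(x)): draw "true" densities per
# analyte class and blur with a Gaussian point spread function, plus noise.
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical mean/std of un-blurred density (e.g., HU) per analyte class.
H = {0: (30, 5), 1: (400, 60), 2: (0, 15), 3: (60, 10)}   # background, CALC, LRNC, FIBR

rng = np.random.default_rng(1)
analyte_map = np.zeros((64, 64), dtype=int)
analyte_map[20:30, 20:40] = 2                              # hypothetical LRNC region
analyte_map[25:28, 45:50] = 1                              # hypothetical calcification

true_density = np.zeros(analyte_map.shape)
for label, (mean, std) in H.items():
    mask = analyte_map == label
    true_density[mask] = rng.normal(mean, std, size=mask.sum())

psf_sigma_px = 1.2                                         # assumed scanner blur
simulated_ct = gaussian_filter(true_density, psf_sigma_px) # I = G * H(A)
simulated_ct += rng.normal(0, 3, size=simulated_ct.shape)  # additive acquisition noise
print("simulated intensity range:", simulated_ct.min(), simulated_ct.max())
```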
As described above, one part of the inference procedure is based on Expectation Maximization (EM). In a typical application of EM, data points are modeled as belonging to one of several classes that are unknown. Each data point has a feature vector and for each class, this feature vector can be modeled with a parameter distribution represented by a mean vector and a covariance matrix, such as a multidimensional gaussian. In the context of the models presented herein, a straightforward EM implementation would work as follows:
f(I(x_i) | A(x_i) = a_k, θ) = G(I(x_i); μ_k, σ_k)
wherein G is a Gaussian function
f(I(x_i), A(x_i) | θ) = Π_k [τ_k · G(I(x_i); μ_k, σ_k)]^δ(A(x_i), a_k)
wherein δ is the Kronecker delta
E-step:
p_ik = τ_k · G(I(x_i); μ_k, σ_k) / Σ_j τ_j · G(I(x_i); μ_j, σ_j)
(membership probability)
M-step:
τ_k = (1/n) · Σ_i p_ik
μ_k = Σ_i p_ik · I(x_i) / Σ_i p_ik
σ_k² = Σ_i p_ik · (I(x_i) − μ_k)² / Σ_i p_ik
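A compact sketch of this basic EM loop, in the standard Gaussian-mixture form implied by the equations above, is shown below for a 1D vector of wall-pixel intensities; the simulated intensity clusters are hypothetical.

```python
# Sketch: standard EM for a Gaussian mixture over wall-pixel intensities,
# estimating theta = (tau_k, mu_k, sigma_k) with membership probabilities p_ik.
import numpy as np

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def em_gmm(intensities, n_classes=3, n_iter=50):
    x = np.asarray(intensities, dtype=float)
    tau = np.full(n_classes, 1.0 / n_classes)
    mu = np.quantile(x, np.linspace(0.1, 0.9, n_classes))   # spread initial means
    sigma = np.full(n_classes, x.std() / n_classes + 1e-6)
    for _ in range(n_iter):
        # E-step: membership probabilities p_ik
        p = tau * gaussian(x[:, None], mu, sigma)
        p /= p.sum(axis=1, keepdims=True)
        # M-step: update proportions, means, variances
        nk = p.sum(axis=0)
        tau = nk / len(x)
        mu = (p * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((p * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return tau, mu, sigma, p

rng = np.random.default_rng(2)
sample = np.concatenate([rng.normal(0, 15, 300),     # hypothetical LRNC-like HU
                         rng.normal(60, 10, 500),    # fibrous-like HU
                         rng.normal(400, 60, 80)])   # calcification-like HU
tau, mu, sigma, _ = em_gmm(sample)
print("tau:", np.round(tau, 2), "mu:", np.round(mu, 1), "sigma:", np.round(sigma, 1))
```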
The main problem with this simple model is that it does not encode any higher-order structure for the pixels. There is no prior probability associated with more realistic pixel arrangements; only τ determines the proportions of the analyte classes. Thus, the τ variable is the natural place to insert the blob prior probability model, especially in the step of updating the membership probabilities.
Thus, a modified Bayesian inference procedure can be applied with a much more complex Bayesian prior. In the basic EM embodiment, there is no true prior distribution: the variable τ represents the prior relative proportion of each class, but even so, this variable is unspecified and is estimated within the inference procedure. Therefore, there is no prior belief about the class distribution in the basic EM model. In the modified model, the prior is represented by the multi-scale analyte model, and τ becomes a function of position (and other variables) rather than just a global proportion.
The membership probability function is defined as follows:
Figure BDA0002916856570000463
Figure BDA0002916856570000464
Figure BDA0002916856570000465
Figure BDA0002916856570000466
Figure BDA0002916856570000467
Figure BDA0002916856570000468
The inference algorithm is as follows. At each step of the iteration, the membership probability map is initialized to zero, such that the probabilities for all classes are zero. Then, for each possible model configuration, that configuration's contribution, weighted by its prior probability, is added to the membership probability map.
Finally, the probability vectors may be normalized at each pixel in the membership probability map to restore the completeness assumption. Advantageously, all model configurations can be iterated over. This is done by considering values of N in turn, from 0 up to a relatively low value, e.g., 9, beyond which cross-sections with so many blobs are only very rarely observed. For each value of N, different putative blob configurations may be examined. The putative blobs may be thresholded to a small number (N) based on the individual blob probabilities. Then, all permutations of the N blobs are considered. Thus, all of the most likely blob configurations can be considered simultaneously, with each model weighted by its prior probability. This procedure is clearly an approximate inference scheme, as the entire space of multi-scale model configurations may not be considered. However, it can be assumed that by considering the most likely configurations (in terms of both N and the blobs) a good approximation is achieved. This procedure also assumes that a weighted average of the most likely configurations provides a good estimate at each individual pixel. Another alternative is to perform a constrained search of the model configurations and select the highest-likelihood model as the MAP (maximum a posteriori) estimate.
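The accumulation step can be sketched generically as follows: each candidate blob configuration contributes its pixelwise class indicators weighted by a prior probability, and the accumulated membership map is then renormalized. The configurations and prior weights below are hypothetical placeholders for what the multi-scale model would supply.

```python
# Sketch: accumulate a membership probability map over candidate blob
# configurations, each weighted by its (hypothetical) prior probability,
# then renormalize per pixel to restore completeness.
import numpy as np

H, W, N_CLASSES = 32, 32, 4                    # class 0 = background
membership = np.zeros((H, W, N_CLASSES))

def config_label_map(blobs):
    """Render one candidate configuration (list of (mask, class)) to a label map."""
    label_map = np.zeros((H, W), dtype=int)
    for mask, cls in blobs:
        label_map[mask] = cls
    return label_map

# Hypothetical candidate configurations with prior weights (these would come
# from the multi-scale model: blob count, blob descriptors, pairwise descriptors).
disk = np.hypot(*(np.mgrid[0:H, 0:W] - np.array([[[14]], [[18]]]))) < 5
configurations = [([], 0.2),                    # no blobs
                  ([(disk, 2)], 0.5),           # one LRNC-like blob
                  ([(disk, 1)], 0.3)]           # same region as calcification

for blobs, prior_weight in configurations:
    labels = config_label_map(blobs)
    one_hot = np.eye(N_CLASSES)[labels]         # (H, W, N_CLASSES) indicator
    membership += prior_weight * one_hot

membership /= membership.sum(axis=-1, keepdims=True)
print("class probabilities at a pixel inside the region:", membership[14, 18])
```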
Additional exemplary statistical models (e.g., for the posterior P(A | I)) are also described herein. In CT angiography, the following information is available:
strength
CT Hounsfield Unit or MR intensity
Possible other imaging features
Position relative to anatomy
-where within the plaque the pixel lies
Neighboring pixels
For example for smoothing contours by level sets
The posterior probability can be calculated as:
P(A|I)∝P(I|A)·P(A)
Thus, the following image information may influence the analyte probability at pixel x:
I (x) is the observed image intensity (possibly vector)
T (x) is the observed relative wall thickness according to image segmentation
F (x) is a CT image feature
S (x) is a feature of the vessel wall shape (e.g., luminal bulge)
In some embodiments, a method like Metropolis-Hastings may be utilized. In other embodiments, a maximum a posteriori approach may be applied.
The following are example algorithmic possibilities for the statistical analysis model. In some embodiments, the model may utilize belief propagation (also known as max-sum, max-product, or sum-product message passing). Thus, for example, a Viterbi (HMM) type of method may be utilized, e.g., where the hidden state is the analyte assignment A and the observed state is the image intensity I. This approach may advantageously find the MAP estimate, which may serve as the selection mechanism for P(A | I). In some embodiments, a soft-output Viterbi algorithm (SOVA) may be utilized. Note that the reliability of each decision may be indicated by the difference between the selected (survivor) path and the discarded path. This may therefore indicate the reliability of the analyte classification for each pixel. In further example embodiments, a forward/backward Baum-Welch (HMM) approach may be utilized. For example, the most likely state can be computed at any point in time, even though the most likely sequence is not computed (compare Viterbi).
Another possible technique is a Markov chain Monte Carlo (MCMC) method such as Metropolis-Hastings, for example, where one approach is to resample A, weighting by the likelihood and the prior. In some embodiments, a simple MRF version may be utilized for sampling. Note that it may be particularly advantageous to sample the posterior directly. In an example embodiment, a per-pixel histogram of the analyte classes may be established.
Other algorithmic possibilities include the application of Gibbs samplers, variational Bayes (similar to EM), mean-field approximation, Kalman filters, or other techniques.
As described above, in some embodiments, an expectation-maximization (EM) posterior approach may be utilized. Under this approach, the observed data X are the imaging values, the unknown parameters θ are those of the analyte map (but not including the analyte probabilities), and the latent variables Z are the analyte probability vectors. A key feature of this method is that it iterates between estimating the class memberships (Z) and the model parameters (θ), owing to their respective dependencies on each other. However, since the analyte map separates out the analyte probabilities, the method can be modified so that the current class memberships do not have to affect the model parameters (as these are learned in the training step). Thus, the EM essentially learns the model parameters as it iterates over the current data. Advantageously, an exemplary embodiment of the EM method iteratively calculates the maximum likelihood, but assuming a flat prior.
Techniques for representing longitudinal covariance are also provided herein. Due to the wide spacing of the histological sections (e.g., 4mm), sampling may not faithfully capture longitudinal variations in the analyte. However, 3D image analysis is typically performed and, presumably, there is some true longitudinal covariance. The problem is that the histological information is not usually provided for the longitudinal covariance. Nonetheless, the example statistical models disclosed herein may reflect slow changes in the longitudinal direction.
In some embodiments, a Markov model/chain may be applied. FIG. 15 depicts an exemplary implementation of 1D Markov chains for text/DNA. Conventionally, when applied to images as MRFs, Markov chains are usually of the lowest possible order. However, higher-order chains may be advantageous because of the conditional independence (Markov property); otherwise the data may be too disordered to be of value. This is demonstrated by 1D sampling as applied to an exemplary Markov chain over text:
Uniform random sample output:
–earryjnv anr jakroyvnbqkrxtgashqtzifzstqaqwgktlfgidmxxaxmmhzmgbya mjgxnlyattvc rwpsszwfhimovkvgknlgddou nmytnxpvdescbg k syfdhwqdrj jmcovoyodzkcofmlycehpcqpuflje xkcykcwbdaifculiluyqerxfwlmpvtlyqkv
0th-order Markov chain output:
–ooyusdii eltgotoroo tih ohnnattti gyagditghreay nm roefnnasos r naa euuecocrrfca ayas el s yba anoropnn laeo piileo hssiod idlif beeghec ebnnioouhuehinely neiis cnitcwasohs ooglpyocp h trog 1
1st-order Markov chain output:
–icke inginatenc blof ade and jalorghe y at helmin by hem owery fa st sin r d n cke s t w anks hinioro e orin en s ar whes ore jot j whede chrve blan ted sesourethegebe inaberens s ichath fle watt o
2nd-order Markov chain output:
–he ton th a s my caroodif flows an the er ity thayertione wil ha m othenre re creara quichow mushing whe so mosing bloack abeenem used she sighembs inglis day p wer wharon the graiddid wor thad k
3rd-order Markov chain output:
–es in angull o shoppinjust stees ther a kercourats allech is hote temal liked be weavy because in coy mrs hand room him rolio und ceran in that he mound a dishine when what to bitcho way forgot p
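Demonstrations of this kind are easy to reproduce: count order-k transitions from a corpus and sample from the resulting table. The toy corpus below is a hypothetical placeholder.

```python
# Sketch: build an order-k Markov chain over characters and sample from it,
# reproducing the kind of low-order demonstrations shown above.
import random
from collections import defaultdict

def train_chain(text, order):
    table = defaultdict(list)
    for i in range(len(text) - order):
        table[text[i:i + order]].append(text[i + order])
    return table

def sample_chain(table, order, length=80, seed=0):
    random.seed(seed)
    state = random.choice(list(table.keys()))
    out = state
    for _ in range(length):
        nxt = random.choice(table[state]) if state in table else " "
        out += nxt
        state = (state + nxt)[-order:]
    return out

corpus = ("the quick brown fox jumps over the lazy dog and the slow red fox "
          "sleeps while the dog watches the road ") * 20   # hypothetical corpus
for k in (1, 2, 3):
    print(k, "order:", sample_chain(train_chain(corpus, k), k)[:70])
```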
FIG. 16 depicts an example first-order Markov chain as a text probability table. Note that the size of such a table grows exponentially with the order: for a Markov chain of order D over an alphabet of N letters, the table size is N^D.
Thus, higher orders lead to dimensionality problems. Advantageously, the histological samples have very high resolution. However, since the histological samples are not statistically independent, this may lead to overfitting, as described in more detail below. In general, the higher the order of the conditional dependence modeled, the more predictive the model is.
In an example embodiment, a 2D Markov random field (MRF) may be used for the pixel values, rather than a 1D sequence as for letters. FIG. 17 depicts the conditional dependence of a first pixel (black) on its neighboring pixels (gray). In an example embodiment, the cliques may utilize symmetry to reduce the number of dependencies by half. In some embodiments, the value of a pixel may be a simple image intensity or may be a probability value for a classification problem. The use of typical MRFs is problematic, however. Conventional MRFs are almost always limited to nearest-neighbor conditional dependencies, which greatly reduces the specificity of the represented probability space; they are generally used only for very generic tasks, such as black/white blob segmentation/filtering, with very short-range dependence. Moreover, despite the high degree of pixel discretization, a blob that merely misses one pixel and falls on the next may completely change the probability distribution. Thus, true image structure is much more continuous than what is typically addressed using MRFs.
For this reason, the systems and methods of the present disclosure may advantageously utilize an inference procedure, e.g., a Bayesian-type rule of posterior ∝ likelihood × prior, i.e., P(A|I) ∝ P(I|A) × P(A). By way of a crossword analogy, the inference procedure implemented by the systems and methods of the present application is somewhat like attempting to OCR a crossword puzzle from a noisy scan: knowledge of a few squares (even imperfect knowledge) can help inform an unknown square, and considering the vertical and horizontal directions simultaneously improves this further. In an example embodiment, the inference procedure may be heuristic. For example, initialization may be done with an uninformative prior, and the simpler problems solved first, providing clues about the harder problems to be solved later. Thus, the relative ease of detecting a biological property such as calcification can inform the detection of other, more difficult analytes such as lipids. Each step of the inference procedure may narrow the probability distribution of the unresolved pixels.
As described above, a high-order Markov chain is preferred in order to obtain usable data. A disadvantage of higher-order Markov methods is that there may not be enough data to inform the inference procedure. In example embodiments, this problem may be addressed by using a density estimation method such as a Parzen window, or by using a kriging technique, as sketched below.
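By way of illustration, a Parzen-window (kernel) density estimate over sparse samples could be computed as in the following Python sketch; the sample data and bandwidth rule are placeholder assumptions.

import numpy as np
from scipy.stats import gaussian_kde

samples = np.random.default_rng(0).normal(loc=60.0, scale=20.0, size=200)   # placeholder values
kde = gaussian_kde(samples, bw_method="scott")     # Gaussian Parzen window
density_at_100 = kde(np.array([100.0]))            # estimated density at a query value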
To form the inference procedure, initialization can be done with an unconditional prior probability for each analyte, and then the strongest evidence is used first to begin narrowing the probabilities. For example, in some embodiments, an uncertainty width may be associated with each analyte probability estimate. In other embodiments, proximity to 1/N may represent such uncertainty.
It should be noted that the term "Markov" is used loosely herein, since the proposed Markov implementation is not memoryless but rather explicitly attempts to model long-range (spatial) dependencies.
Because CT resolution is low compared to histology and plaque anatomy, it may be preferable in some embodiments to utilize a continuous-space (or continuous-time) Markov model rather than a discrete one. This works well with the level set representation of the probability maps, since level sets naturally support sub-pixel interpolation. Discrete analyte states would make the model a discrete-space model; however, if the model represents continuous probabilities rather than analyte presence/absence, it becomes a continuous-space model.
Turning to lung-based applications, Table 5 below depicts exemplary biological properties/analytes that may be utilized with respect to a hierarchical analysis framework for such applications.
Table 5: biologically objective measurands supported by lung-based applications
In particular, the system may be configured to detect lung lesions. Thus, an exemplary system may be configured for whole-lung segmentation. In some embodiments, this may involve using minimal curvature evolution to address the problem of lesions adjacent to the pleura. In some embodiments, the system may perform pulmonary composition analysis (blood vessels, fissures, bronchi, lesions, etc.). Advantageously, a Hessian filter may be utilized to facilitate the pulmonary composition analysis. In some embodiments, the pulmonary composition analysis may further comprise assessing pleural involvement, for example, depending on fissure geometry. In further embodiments, attachment to anatomical structures is also contemplated. In addition to pulmonary composition analysis, separate analyses of the ground-glass and solid states can be applied. This may include determining geometric features such as volume, diameter, and sphericity; image features such as density and mass; and fractal analysis.
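By way of illustration only, a Hessian-based tubularity filter of the kind that could support such a pulmonary composition analysis is sketched below using scikit-image's Frangi filter; the input array, sigma range and threshold are placeholder assumptions rather than values from this disclosure.

import numpy as np
from skimage.filters import frangi

ct_slice = np.random.rand(256, 256)            # stand-in for a normalized lung CT slice
# Multi-scale Hessian (Frangi) vesselness highlights bright tubular structures such as vessels/bronchi.
vesselness = frangi(ct_slice, sigmas=range(1, 6), black_ridges=False)
vessel_mask = vesselness > 0.05                # illustrative threshold for a vessel/bronchus mask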
Fractal analysis can be used to infer growth patterns across scales. To perform fractal analysis on a very small region of interest, the method adaptively modifies the support of the convolution kernel to confine it to the region of interest (i.e., the pulmonary nodule). Intersecting vessels/bronchi and non-diseased features can be masked out for the purposes of fractal analysis. This is done by applying an IIR Gaussian filter over the masked local neighborhood and normalizing by the IIR-blurred binary mask. In some embodiments, the fractal analysis may further include determining porosity (based on the variance of local means). This may be applied to lung lesions or to sub-parts of lesions. In an example embodiment, an IIR Gaussian filter or a circular neighborhood may be applied. In some embodiments, the variance may be calculated using IIR filtering. The average of local variance (AVL) may also be calculated, for example, as applied to lung lesions. Likewise, the variance of the local variance can be calculated. A minimal sketch of such masked local statistics follows.
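The following Python sketch illustrates the masked, blur-normalized local statistics described above. scipy's FIR Gaussian filter stands in for the IIR Gaussian mentioned in the text, and the image, mask and sigma are placeholders; it is a sketch of the idea, not the disclosed implementation.

import numpy as np
from scipy.ndimage import gaussian_filter

def masked_local_stats(image, mask, sigma=2.0):
    # Blur the masked image and normalize by the blurred binary mask so that
    # masked-out structures (e.g., intersecting vessels/bronchi) do not contribute.
    m = mask.astype(float)
    blurred_mask = gaussian_filter(m, sigma) + 1e-8
    local_mean = gaussian_filter(image * m, sigma) / blurred_mask
    local_sq = gaussian_filter((image ** 2) * m, sigma) / blurred_mask
    local_var = np.clip(local_sq - local_mean ** 2, 0.0, None)
    porosity = local_mean[mask].var()          # variance of the local means
    avl = local_var[mask].mean()               # average of local variance (AVL)
    var_of_var = local_var[mask].var()         # variance of the local variance
    return local_mean, local_var, porosity, avl, var_of_var

image = np.random.rand(128, 128)               # stand-in lesion image
mask = np.zeros((128, 128), dtype=bool)        # stand-in lesion mask
mask[32:96, 32:96] = True
local_mean, local_var, porosity, avl, var_of_var = masked_local_stats(image, mask)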
In an example embodiment, both lesion structure and composition may be calculated. Advantageously, calculating lesion structure may utilize the full volume of the thin sections, thereby improving the calculation of size change. Measurements such as sub-solid and ground-glass opacity (GGO) volumes may also be determined as part of assessing lesion structure. Turning to lesion composition, tissue characteristics such as consolidation, trauma, proximity and perfusion may be calculated, thereby reducing the false-positive rate relative to conventional analysis, for example.
Referring now to fig. 18, a further exemplary hierarchical analysis framework 1800 of the system of the present disclosure is depicted. FIG. 18 may be understood as a detailed illustration of FIG. 1, set forth in greater detail with respect to an exemplary intermediate processing layer of the hierarchical inference system. Advantageously, the hierarchical inference still flows from the imaging data 1810 through the underlying biological information 1820 to clinical disease 1830. Notably, however, the framework 1800 contains multiple levels of data points for processing the imaging data to determine the biological properties/analytes. At a pre-processing stage 1812, physical parameters, registration transformations, and region segmentations may be determined. This preprocessed imaging information can then be utilized to extract the next level of data points, imaging features 1814, such as intensity features, shape, texture, temporal characteristics, and the like. The extracted image features may then be utilized at stage 1816 to fit one or more biological models to the imaged anatomy. Example models may include Bayesian/Markov network lesion substructure models, fractal growth models, or other models as described herein. The biological models may advantageously serve as a bridge for correlating the imaging features with the underlying biological properties/analytes at stage 1822. Exemplary biological properties/analytes include anatomical structure, tissue composition, biological function, gene expression correlates, and the like. Finally, at stage 1832, the biological properties/analytes may be utilized to determine clinical findings related to the pathology, including, for example, disease subtype, prognosis, decision support, and the like.
Figure 19 is an example application with the goal of phenotyping to guide vascular therapy, using the Stary plaque typing system adopted by the AHA as a basis, with the type determined in vivo shown as a color overlay. The left figure shows an example of labeling according to the likely dynamic behavior of plaque lesions based on their physical properties, and the right figure shows an example of using the classification results to guide patient treatment. In this example, Stary types ['I', 'II', 'III', 'IV', 'V', 'VI', 'VII', 'VIII'] are mapped to a class_map of [subclinical, unstable, stable]. The method is not tied to Stary; for example, the Virmani system ["calcified nodule", "CTO", "FA", "FCP", "healed plaque rupture", "PIT", "IPH", "rupture", "TCFA", "ULC"] has been used with a class_map of [stable, unstable], and other typing systems can achieve similarly high performance. In example embodiments, the systems and methods of the present disclosure may incorporate disparate typing systems, may change the class map, or may make other variations. For FFR phenotypes, an equivalence to normal or abnormal and/or a continuous quantity may be used to facilitate comparison with, for example, physical FFR measurement.
FIG. 20 is an example for a different disease, in this case lung cancer. In this example, the subtype of the mass is determined in order to guide the patient toward the treatment most likely to be beneficial, based on the apparent phenotype.
A CNN is expected to perform better than classification on a vector of readings, because the CNN contains filters that extract spatial context not included in (only) analyte area measurements. Although the amount of training data is reduced, it may nevertheless be practical to use a CNN because:
1) There are relatively few classes, corresponding to significantly different treatment alternatives (rather than the full granularity possible in research assays requiring ex vivo tissue), e.g., three phenotypes for a classification problem or three risk levels for an outcome prediction/risk stratification problem; the problem is therefore generally easier.
2) The analyte regions are processed, for example, by level sets or other classes of algorithms into pseudo-colored regions, so that a substantial portion of the image interpretation is already performed by generating a simplified but considerably enriched data set that is segmented and presented to the classifier. The measurable pipeline stages reduce the dimensionality of the data (reducing the complexity of the problem the CNN must solve), while also providing verifiable intermediate values that can increase confidence in the overall pipeline.
3) Reformatting the data using a normalized coordinate system removes noise variations due to variables that do not materially affect the classification (e.g., vessel size in the example of plaque phenotyping).
To examine this idea, a pipeline was built consisting of three stages:
1) semantic segmentation, to identify which regions of the imaged tissue fall into certain classes;
2) spatial unfolding, to transform the vein/artery cross-section into a rectangle; and
3) a trained CNN, used to read the annotated rectangle and identify which class it belongs to (stable or unstable).
Without loss of generality, the example systems and methods described herein may apply spatial unfolding (e.g., training and testing CNNs both with the unfolded data sets and without them, i.e., with the donut-shaped data sets). It was observed that unfolding can improve validation accuracy.
Semantic segmentation and spatial expansion:
First, the image volume is preprocessed. This may include target initialization, normalization, and other preprocessing such as deblurring or restoration, to form a region of interest containing the physiological target to be phenotyped. The region is a volume formed by cross-sections through the volume. The body part is determined automatically or is provided explicitly by the user. A target that is inherently a tubular body part is accompanied by a centerline. The centerline may have branches, when present; the branches may be marked automatically or by the user. Generalizing the centerline concept can represent anatomy that is not tubular but that benefits from some structural directionality, such as a tumor region. In any case, the centroid of each cross-section in the volume is determined. For a tubular structure, the centroid would be the center of the channel, e.g., the lumen of a blood vessel. For a lesion, the centroid would be the center of mass of the tumor.
FIG. 21 represents an exemplary image preprocessing step; in this case, deblurring or restoration uses a patient-specific point-spread determination algorithm to mitigate artifacts or image limitations, caused by the image formation process, that may degrade the ability to determine the characteristics that predict phenotype. The figure shows a portion of the analysis applied to the radiological analysis of plaque from CT. Shown here is a deblurred or restored image resulting from iteratively fitting a physical model of the scanner point spread function with regularizing assumptions about the true latent densities of different regions of the image. This figure is included to demonstrate that various image processing operations can be performed to assist the quantitative steps; it in no way indicates that this method is essential to the particular invention of this disclosure, but rather serves as an example of steps that can be taken to improve overall performance.
The (optionally deblurred or restored) image is represented in a Cartesian data set in which x represents the distance from the centroid, y represents the rotation θ, and z represents the cross-section. Each branch or region forms one such Cartesian set. When multiple sets are used, a "null" value is used for overlap regions; that is, each physical voxel is represented only once across the entire set, so that the sets fit together geometrically. Each data set is paired with an additional data set having sub-regions labeled by objectively verifiable tissue composition (see, e.g., fig. 36). Example labels for vascular tissue may be lumen, calcification, LRNC, etc. Example labels for lesions may be necrosis, neovascularization, and the like. These labels can be objectively verified, for example, by histology (see, e.g., fig. 37). The paired data sets are used as input to the training step for building the convolutional neural network. Two levels of analysis are supported: one at the level of individual cross-sections, optionally with the output varying continuously across adjacent cross-sections, and a second at the volume level (where the individual cross-sections may be considered still frames and traversal of the vessel tree may be considered analogous to a movie). A sketch of the cross-section resampling is given below.
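A rough Python sketch of such a resampling for a single cross-section follows; the output size, interpolation order and the use of scipy are illustrative assumptions, and the radial axis could further be normalized between the lumen and the outer wall as described later.

import numpy as np
from scipy.ndimage import map_coordinates

def unroll_cross_section(section, centroid, n_theta=400, n_radius=200, max_radius=None):
    # Resample so that columns index the rotation angle about the centroid and
    # rows index the distance from the centroid ("donut" to rectangle).
    cy, cx = centroid
    if max_radius is None:
        max_radius = min(section.shape) / 2.0
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    radii = np.linspace(0.0, max_radius, n_radius)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    rows = cy + rr * np.sin(tt)
    cols = cx + rr * np.cos(tt)
    return map_coordinates(section, [rows, cols], order=1, mode="nearest")

section = np.random.rand(280, 280)                                    # stand-in annotated cross-section
rectangle = unroll_cross_section(section, centroid=(140.0, 140.0))    # shape (200, 400)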
Exemplary CNN design:
AlexNet is a CNN that competed in the ImageNet Large Scale Visual Recognition Challenge in 2012, achieving a top-5 error of 15.3%. AlexNet was designed by Alex Krizhevsky in collaboration with Geoffrey Hinton and Ilya Sutskever, then at the University of Toronto. Here, an AlexNet-type network is trained and used to classify a set of independent images (not used in the training and validation steps during network training). For the unfolded data, a 400 × 200 pixel input AlexNet-type network was used, and for the donut-shaped data a 280 × 280 pixel input AlexNet-type network (approximately the same resolution but a different aspect ratio). All of the convolution filter values were initialized with weights extracted from AlexNet trained on the ImageNet dataset. Although ImageNet is a natural-image dataset, this was used only as an efficient method of weight initialization; once training begins, all weights are adjusted to better fit the new task. Most of the training program was taken directly from the open-source AlexNet implementation, but some adjustments were required. Specifically, for both the AlexNet donut-shaped and AlexNet unfolded networks, the base learning rate was reduced to 0.001 (solver.prototxt) and the batch size was reduced to 32 (train_val.prototxt). All models were trained to 10,000 iterations and compared against snapshots taken after training to only 2,000 iterations. Although a more intensive study of overfitting could be performed, it was generally found that both training and validation errors decreased between 2k and 10k iterations. A rough fine-tuning sketch in a modern framework is given below.
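The original implementation was based on Caffe (the prototxt files cited above); the following is a rough, non-authoritative PyTorch equivalent, shown only to make the quoted settings concrete (ImageNet-initialized AlexNet, two output classes, base learning rate 0.001, batch size 32). The framework choice and training-loop details are assumptions, not the disclosed implementation.

import torch
import torch.nn as nn
import torchvision

# ImageNet-pretrained AlexNet used purely for weight initialization.
model = torchvision.models.alexnet(weights=torchvision.models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 2)        # two classes: stable vs. unstable

optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)   # base lr 0.001
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    # images: a mini-batch of 32 tensors shaped (3, H, W); labels: class indices.
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()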
Alternative feature-extraction networks (backbones) may include:
·ResNet-https://arxiv.org/abs/1512.03385
·GoogLeNet-https://www.cs.unc.edu/~wliu/papers/GoogLeNet.pdf
·ResNext-https://arxiv.org/abs/1611.05431
·ShuffleNet V2-https://arxiv.org/abs/1807.11164
·MobileNet V2-https://arxiv.org/abs/1801.04381
Run-time optimizations, such as exploiting frame-to-frame redundancy between cross-sections (sometimes referred to as "temporal" redundancy, here a form of redundancy across adjacent cross-sections), can be utilized to save computation (e.g., https://arxiv.org/abs/1803.06312). Many optimizations to training or inference may be implemented.
In an exemplary test embodiment, AlexNet was trained to classify independent image sets into two categories of clinical significance, e.g., "unstable" versus "stable" plaque, based on histological ground-truth plaque types V and VI for the former and plaque types VII and VIII for the latter, per the de facto industry-standard plaque classification nomenclature accepted by the American Heart Association (AHA), and also based on the related but distinct classification system of Virmani.
Without loss of generality, in the illustrated example both overall accuracy and the confusion matrix are used to evaluate performance. This formalism is based on computing four quantities in a binary classification system: true positives, true negatives, false positives and false negatives. In example embodiments, other outcome variables may be used; for example, sensitivity and specificity may be utilized as outcome variables, or the F1 score (the harmonic mean of precision and sensitivity). Alternatively, the AUC characteristic may be calculated for a binary classifier. Furthermore, the classifier need not be binary; for example, in some embodiments the classifier may sort among more than two possible states. An illustrative metrics computation is sketched below.
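Purely for illustration, the binary performance measures mentioned above could be computed as in the following Python sketch (scikit-learn, placeholder labels and scores; not the evaluation code of this disclosure).

import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score, f1_score, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])          # 1 = unstable, 0 = stable (placeholders)
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3])
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = accuracy_score(y_true, y_pred)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
f1 = f1_score(y_true, y_pred)                        # harmonic mean of precision and sensitivity
auc = roc_auc_score(y_true, y_score)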
Data set augmentation:
Data annotated by physicians is expensive, and it is therefore desirable to artificially augment the medical data sets (e.g., for training and/or validation). Two different augmentation techniques were used in the example embodiments described herein. The donut images were randomly flipped horizontally and rotated to random angles between 0 and 360 degrees. The resulting rotated donut was then cropped to the extent of the donut and padded with black pixels to give the image a square aspect ratio. The result was then scaled to 280 × 280 and saved as PNG.
The unfolded data set was augmented by random horizontal flipping and then "rolling" by a random number of pixels in the range of 0 to the width of the image. The result was then scaled to 400 × 200 and saved as PNG.
Both data sets were augmented by a factor of 15, meaning that the total number of images after augmentation was 15 times the original number. Class normalization was performed, meaning that the final data set had approximately the same number of images belonging to each class. This is important because the original number of images in each class may differ, which would bias the classifier toward the classes with more images in the training set. A sketch of these two augmentations follows.
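The following is a minimal Python sketch of the two augmentation schemes just described, using Pillow and numpy; the resampling choices and fill values are illustrative assumptions.

import random
import numpy as np
from PIL import Image, ImageOps

def augment_donut(img):
    # Random horizontal flip, random 0-360 degree rotation, crop to the donut
    # extent, pad with black to a square aspect ratio, then scale to 280 x 280.
    if random.random() < 0.5:
        img = ImageOps.mirror(img)
    img = img.rotate(random.uniform(0.0, 360.0), expand=True, fillcolor=(0, 0, 0))
    img = img.crop(img.getbbox())
    side = max(img.size)
    square = Image.new("RGB", (side, side), (0, 0, 0))
    square.paste(img, ((side - img.width) // 2, (side - img.height) // 2))
    return square.resize((280, 280))

def augment_unfolded(img):
    # Random horizontal flip, then a random circular "roll" along the width,
    # then scale to 400 x 200.
    if random.random() < 0.5:
        img = ImageOps.mirror(img)
    arr = np.roll(np.asarray(img), random.randrange(img.width), axis=1)
    return Image.fromarray(arr).resize((400, 200))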
Any number of tissue types may be used by each radiologist performing the annotation without loss of generality.
Fig. 22 illustrates an exemplary application demonstrating aspects of the present invention, in this case classification for atherosclerotic plaque phenotype (e.g., using specific subject data). Different colors indicate different tissue analyte types, with additional normal wall shown in dark gray. The figure shows the ground-truth annotation of tissue characteristics indicating the plaque phenotype and how it exists in the spatial context of a cross-section taken orthogonal to the axis of the vessel. It also demonstrates a coordinate system developed to provide a common basis for analysis of a large number of histological cross-sections. Grid lines are added to show the coordinate system (tangential vs. radial distance) and are superimposed on top of the color-coded pathologist annotation. An important aspect is that this kind of data set can be used effectively for deep learning methods, since it uses relatively simple pseudo-color images instead of higher-resolution full images to simplify the information, but without losing spatial context, e.g., with formal representations of patterns such as the "napkin-ring sign", near-luminal calcium, and thin (or thick) caps (the spacing between the LRNC and the lumen).
Fig. 23 illustrates the tangential and radial direction variables using unit phasors, with the phasor angle internally represented and encoded here in grey scale, illustrating the use of normalized axes for tubular structures associated with blood vessels and other pathophysiology associated with such structures (e.g., the gastrointestinal tract). Note that the vertical bands from black to white are purely arbitrary boundaries resulting from the grey-scale encoding, and that the normalized radial distance has a value of 0 at the lumen boundary and 1 at the outer boundary.
Fig. 24 illustrates an exemplary overlay of annotations produced by a radiological analysis application based on CTA (unfilled color contours) on annotations produced by a pathologist based on histology (solid color regions). An example aspect of the systems and methods presented herein is that the contour form of in vivo non-invasive imaging can be used with a classification scheme to determine phenotype non-invasively, where the classifier is trained on known ground truth. In particular, filling in the contours shown unfilled in this figure (left unfilled here so as not to obscure the relationship with the ex vivo annotation of this particular section, which is provided to show correspondence) creates the input data for the classifier.
Fig. 25 illustrates an additional step of data enrichment; in particular, a normalized coordinate system is used to avoid irrelevant variation associated with wall thickness and radial presentation. Specifically, the "donut shape" is "unrolled" while retaining the pathologist annotations. The left panel shows the pathologist's region annotation of the histological section of the plaque after deformation to convert the incised "C" shape back to the in vivo "O" shape of the intact vessel wall. The horizontal axis is the tangential direction around the wall. The vertical axis is the normalized radial direction (bottom is the luminal surface and top is the outer surface). Note also that the finer granularity of the pathologist annotations has been collapsed to match the granularity intended for extraction by the in vivo radiological analysis application (e.g., LRNC, CALC, IPH). The right panel shows a comparably unrolled radiological analysis application annotation. The axes and colors are the same as for the pathologist annotations.
FIG. 26 shows the next refinement for the plaque phenotyping example. Working from the unrolled representation, lumen irregularities (e.g., caused by ulceration or thrombus) and locally varying wall thickening are represented. The light grey at the bottom represents the lumen (added in this step to indicate luminal surface irregularity), and the black used in the previous step has now been replaced with dark grey to indicate varying wall thickening. Black now exclusively represents the area outside the wall.
Figure 27 shows additional examples to include, for example, intra-plaque hemorrhage and/or other morphological aspects (e.g., using specific subject data) as desired. The left figure shows a doughnut-like representation and the right figure is expanded, showing luminal surface and localized wall thickening.
Example CNNs tested included CNNs based on the AlexNet and Inception frameworks.
AlexNet results:
in the example embodiment tested, the convolution filter values were initialized, with weights extracted from AlexNet trained on the ImageNet (referenced herein) dataset. Although the ImageNet dataset is a natural image dataset, this is only used as an efficient method of weight initialization. Once training begins, all weights are adjusted to better fit the new task.
Most of the training program was taken directly from the open-source AlexNet implementation, but some adjustments were required. Specifically, for both the AlexNet donut-shaped network and the AlexNet unfolded network, the base learning rate was reduced to 0.001 (solver.prototxt) and the batch size was reduced to 32 (train_val.prototxt).
All models were trained to 10,000 iterations and compared against snapshots taken after training to only 2,000 iterations. Although a more intensive study of overfitting could be performed, it was generally found that both training and validation errors decreased between 2k and 10k iterations.
A completely new AlexNet network model was trained from scratch for four (4) different combinations of ground-truth results: the two different ways of processing the images (see above), i.e., unfolded images and donut-shaped images, for each of the two principal pathologists. The results are shown in FIG. 28. With class normalization enabled, each data set variation had its training data augmented by a factor of 15. The network was trained with this augmented data and then tested with the corresponding un-augmented validation data for that variation. For the unfolded data, a 400 × 200 pixel input AlexNet-type network was used, and for the donut-shaped data a 280 × 280 pixel input AlexNet-type network (approximately the same resolution but a different aspect ratio). Note that in the test examples the dimensions of the normalization and fully connected layers were changed; thus, the network in the AlexNet test embodiment can be described as a five-convolutional-layer, three-fully-connected-layer network. Without loss of generality, some high-level conclusions based on these results are presented here:
1) With the exception of the WN_RV dataset, it does appear that the unfolded data is easier to analyze, as it achieves higher validation accuracy overall.
2) Non-normalized data proved to be more representative as expected.
3) Regarding the WN_RV dataset, the original idea was to pool the WN and RV truth data to examine the compatibility of the typing systems and the extent to which the sets could be merged. In doing so, significant differences were observed between the WN and RV data. The initial purpose of the WN_RV experiment was to pool training data from multiple pathologists to see whether the additional information contributed to efficacy. Instead, degradation rather than improvement was observed. This was determined to be due to differences in the color schemes that hindered the pooling of the data. Thus, normalization of the color schemes may be considered in order to enable pooling.
An exemplary alternative network: Inception:
Transfer-learning retraining with the Inception v3 CNN began with the August 8, 2016 version of the network publicly available on the TensorFlow website. Training was performed for 10,000 steps. The training and validation sets were normalized by image augmentation with respect to the number of images, so that both subsets had the same total number of annotated images. All other network parameters were kept at their default values.
The pre-trained CNN can be used to classify the imaged features using the output of its final convolutional layer, which in the case of the Google Inception v3 CNN is a numeric tensor of dimensions 2048 × 2. An SVM classifier is then trained to recognize the object. This process is typically performed on the Inception model after the transfer-learning and fine-tuning steps, in which the model initially trained on the ImageNet 2014 dataset has its last softmax layer removed and retrained to identify the new classes of images. A sketch of this feature-plus-SVM approach follows.
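Below is a minimal, non-authoritative Python sketch of training an SVM on CNN bottleneck features; the feature extraction itself is omitted, and the feature matrix, labels and kernel choice are placeholder assumptions.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2048))      # placeholder bottleneck features, one row per image
y = rng.integers(0, 2, size=100)      # placeholder labels: 0 = stable, 1 = unstable

svm = SVC(kernel="linear", probability=True)
svm.fit(X, y)
unstable_probability = svm.predict_proba(X[:1])[0, 1]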
Alternative embodiments:
Fig. 29 provides an alternative embodiment for phenotyping a potentially cancerous lung lesion. The left-most panel shows the contour of the segmented lesion, where preprocessing was performed to separate it into solid and semi-solid ("ground glass") sub-regions. The middle panel indicates its position in the lung, and the right-most panel shows it with a pseudo-color overlay. In this case, the 3-dimensional nature of the lesion may be considered significant, and thus techniques akin to video interpretation from computer vision may be applied to the classifier input data set, as opposed to processing 2D cross-sections alone. In fact, processing multiple cross-sections sequentially, arranged as a "movie" sequence along the centerline, can generalize these methods for tubular structures.
Another generalization is that the pseudo-colors need not be selected from a discrete palette but may instead take continuous values at each pixel or voxel location. Using the lung example, fig. 30 shows a set of features, sometimes described as "radiomic" features, that can be computed for each voxel. This set of values may exist in any number of preprocessed stacks and be fed into the phenotype classifier, as sketched below.
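As a simple illustration of such continuous-valued per-voxel channels, several feature maps can be stacked into a multi-channel classifier input; the specific features below (local mean, local variance, gradient magnitude) are arbitrary examples, not the feature set of fig. 30.

import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_gradient_magnitude

volume = np.random.rand(64, 64, 64)                      # stand-in lesion region of interest
local_mean = gaussian_filter(volume, sigma=2.0)
local_var = np.clip(gaussian_filter(volume ** 2, sigma=2.0) - local_mean ** 2, 0.0, None)
grad_mag = gaussian_gradient_magnitude(volume, sigma=1.0)

# Channels-first stack (C, D, H, W) suitable as classifier input.
feature_stack = np.stack([volume, local_mean, local_var, grad_mag], axis=0)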
Other alternative embodiments include using, for example, change data collected from multiple points in time rather than (only) data from a single point in time. For example, if the amount or character of a negative cell type increases, the phenotype may be referred to as a "progressor" phenotype, while a "regressor" phenotype refers to a decrease. Regression may be due, for example, to a response to a drug. Alternatively, if the rate of change of, for example, LRNC is rapid, this may imply a different phenotype. It will be apparent to those skilled in the art that the examples extend to the use of delta values or rates of change.
As a further alternative embodiment, non-spatial information, such as that derived from other assays (e.g., laboratory results), demographic/risk factors, or other measurements extracted from radiological images, may be fed into the final layers of the CNN to combine the spatial information with the non-spatial information. Likewise, localization information may be determined, such as by inferring the full 3D coordinates at the imaging site, using pressure wires and readings at one or more locations along the vessel relative to references such as branches or ostia.
Although the focus of these examples is phenotype classification, similar methods may be applied to the problem of outcome prediction as additional embodiments of the invention.
Example embodiments:
The systems and methods of the present disclosure may advantageously include a pipeline comprised of multiple stages. FIG. 34 provides an additional example embodiment of the hierarchical analysis framework. Biological properties (in an example application: lipid-rich necrotic core, cap thickness, stenosis, dilation, remodeling ratio, tortuosity (e.g., entrance and exit angles), calcification, IPH and/or ulceration) are identified and quantified by semantic segmentation, individually or in combination, represented numerically and in a spatially unfolded, enriched data set that transforms venous/arterial cross-sections into rectangles. These are then converted into medical conditions (e.g., fractional flow reserve (FFR) causing ischemia, high-risk plaque (HRP) phenotype, and/or risk-stratification time to event (TTE)) using one or more trained CNNs that read the enriched data set to identify and characterize the condition. Images of the patient are collected, and the raw slice data are used in a set of algorithms to measure biological properties that can be objectively validated; these biological properties are then formed into an enriched data set to feed one of a plurality of CNNs, in this example with the results propagated forward and backward using recurrent CNNs to enforce constraints or to produce a consistent condition (such as a monotonically decreasing fractional flow reserve from proximal to distal throughout the vessel tree, a constant HRP value within a focal lesion, or other constraints). Ground-truth data for HRP may exist where a pathologist has determined the plaque type ex vivo at a given cross-section. Ground-truth data for FFR may come from a physical pressure wire with one or more measurements; during network training, positions proximal to a given measurement along the centerline are constrained to be greater than or equal to that measurement, positions distal to it are constrained to be less than or equal to that measurement, and when two measurements on the same centerline are known, the values between them are limited to that interval. A minimal sketch of such a constraint is given below.
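As a purely illustrative sketch of one such constraint, the following post-processing pass forces per-cross-section FFR predictions along a single centerline to be monotonically non-increasing from proximal to distal and to respect pressure-wire measurements; the recurrent-CNN training-time enforcement described above is not shown, and all values here are hypothetical.

import numpy as np

def enforce_ffr_constraints(ffr, measurements=None):
    # ffr: per-cross-section predictions ordered proximal -> distal.
    # measurements: optional {index: measured_ffr} from a physical pressure wire.
    ffr = np.asarray(ffr, dtype=float).copy()
    if measurements:
        for idx, value in measurements.items():
            ffr[:idx + 1] = np.maximum(ffr[:idx + 1], value)   # proximal >= measurement
            ffr[idx:] = np.minimum(ffr[idx:], value)           # distal <= measurement
    return np.minimum.accumulate(ffr)                          # monotone non-increasing

predicted = [0.97, 0.95, 0.96, 0.88, 0.90, 0.82]               # hypothetical predictions
constrained = enforce_ffr_constraints(predicted, measurements={3: 0.89})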
These properties and/or conditions may be measured at a given point in time and/or may vary across time (longitudinal). Other embodiments that perform similar steps in plaque phenotyping or other applications would be embodiments of the present invention without loss of generality.
In an example embodiment, the biological property may comprise one or more of:
angiogenesis
Neovascularization
Inflammation
Calcification
Deposition of lipids
Necrosis
Bleeding
Ulceration
Rigidity
Density
Stenosis
Dilation
Remodeling ratio
Tortuosity
Blood flow (e.g., blood flow in a blood vessel)
Pressure (e.g. pressure of blood in the channel or pressure of one tissue against another)
Cell type (e.g. macrophage)
Cell alignment (e.g., of smooth muscle cells)
Shear stress (e.g. of blood in the channel)
The analysis may comprise determining one or more of the quantity, extent, and/or trait of each of the aforementioned biological properties.
The condition that may be determined based on the biological property may comprise one or more of:
perfusion/ischemia (e.g. limited) (e.g. perfusion/ischemia of brain or heart tissue)
Perfusion/infarction (e.g. resection) (e.g. perfusion/infarction of brain or heart tissue)
Oxygenation
Metabolism of
Blood flow reserve (perfusion capacity), e.g. FFR (+) and (-) and/or continuous number
Malignant tumor
Invasion
High risk plaques, e.g. HRP (+) and (-) and/or labeled phenotype
Risk stratification (whether as probability of event or time of event) (e.g. MACCE mentioned explicitly)
Verification of the ground truth form may include the following:
biopsy
Expert tissue annotation of excised tissue (e.g., endarterectomy or autopsy)
Expert phenotypic annotation of excised tissue (e.g., endarterectomy or autopsy)
Physical pressure line
Other imaging modalities
Physiological monitoring (e.g. ECG, SaO2, etc.)
Genomics and/or proteomics and/or metabolomics and/or transcriptomics assays
Clinical results
The analysis may be both at a given point in time and may be longitudinal (i.e., time-varying)
Exemplary System architecture:
FIG. 31 illustrates a high-level view of the users and other systems interacting with the analytics platform in accordance with the systems and methods of the present disclosure. Stakeholders in this view include system administrators and support technicians, with concerns such as interoperability, security, failover and disaster recovery, and regulatory issues.
The platform can be deployed in two main configurations: an on-premises server or a remote server (FIG. 32). Platform deployment may be a standalone configuration (top left), an on-premises server configuration (bottom left), or a remote server configuration (right). The on-premises deployment configuration may have two sub-configurations: desktop or rack-mounted. In the remote configuration, the platform may be deployed in a HIPAA-compliant data center. The client accesses the API server over a secure HTTP connection. The client may be a desktop or tablet browser. No hardware is deployed at the customer site other than the computer running the web browser. The deployed servers may be on a public cloud or on an extension of the customer's private network using a VPN.
Exemplary embodiments include a client and a server. For example, fig. 33 shows the client as a C++ application and the server as a Python application. These components interact using HTML 5.0, CSS 5.0, and JavaScript. Where possible, open standards are used for the interfaces, including but not limited to HTTP(S), REST, DICOM, SPARQL and JSON. As shown in this view, which presents the main elements of the technology stack, third-party libraries are also used. Many variations and different approaches will be appreciated by those skilled in the art.
Various embodiments of the above-described systems and methods may be implemented in digital electronic circuitry, in computer hardware, firmware, and/or software. Embodiments may be implemented as a computer program product (i.e., a computer program tangibly embodied in an information carrier). Embodiments may be implemented, for example, in machine-readable storage and/or in a propagated signal, for execution by, or to control the operation of, data processing apparatus. Embodiments may be, for example, a programmable processor, a computer, and/or multiple computers.
A computer program can be written in any form of programming language, including compiled and/or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a subroutine, element, and/or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one site.
Method steps can be performed by one or more programmable processors executing a computer program to perform functions of the invention by operating on input data and generating output. Method steps can also be performed by, and apparatus can be implemented as, special purpose logic circuitry. The circuitry may be, for example, an FPGA (field programmable gate array) and/or an ASIC (application specific integrated circuit). Modules, subroutines, and software agents may refer to portions of a computer program, processor, dedicated circuitry, software, and/or hardware that implement the described functionality.
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer can include, be operatively coupled to receive data from and/or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks.
Data transmission and instructions may also occur over a communication network. Information carriers suitable for embodying computer program instructions and data include all forms of non-volatile memory, including, for example, semiconductor memory devices. The information carrier may be, for example, an EPROM, an EEPROM, a flash memory device, a diskette, an internal hard disk, a removable disk, a magneto-optical disk, a CD-ROM and/or a DVD-ROM disk. The processor and the memory can be supplemented by, and/or incorporated in, special purpose logic circuitry.
To provide for interaction with a user, the above-described techniques may be implemented on a computer having a display device, for example a Cathode Ray Tube (CRT) and/or a Liquid Crystal Display (LCD) monitor, by which information may be presented to the user, and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user may provide input to the computer (e.g., interact with user interface elements). Other kinds of devices may be used to provide for interaction with the user, for example feedback provided to the user in any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback). Input from the user may be received in any form, including acoustic input, speech input, and/or tactile input.
The techniques described above may be implemented in a distributed computing system that includes a back-end component. The back-end component can be, for example, a data server, a middleware component, and/or an application server. The techniques described above may be implemented in a distributed computing system that includes a front-end component. The front-end component can be, for example, a client computer having a graphical user interface, a Web browser through which a user can interact with the example embodiments, and/or other graphical user interfaces for a transmission device. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a Local Area Network (LAN), a Wide Area Network (WAN), the internet, a wired network, and/or a wireless network.
A system may comprise a client and a server. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
The packet-based network may include, for example, the Internet, a carrier Internet Protocol (IP) network (e.g., a Local Area Network (LAN), Wide Area Network (WAN), Campus Area Network (CAN), Metropolitan Area Network (MAN), Home Area Network (HAN), private IP network, or IP private branch exchange (IPBX)), a wireless network (e.g., a Radio Access Network (RAN), 802.11 network, 802.16 network, General Packet Radio Service (GPRS) network, or HiperLAN), and/or other packet-based networks. The circuit-based network may comprise, for example, a Public Switched Telephone Network (PSTN), a private branch exchange (PBX), a wireless network (e.g., a RAN, Bluetooth, Code Division Multiple Access (CDMA) network, Time Division Multiple Access (TDMA) network, or Global System for Mobile Communications (GSM) network), and/or other circuit-based networks.
The computing device may include, for example, a computer with a browser device, a telephone, an IP phone, a mobile device (e.g., a cellular phone, a Personal Digital Assistant (PDA) device, a laptop computer, an email device), and/or other communication devices. Browser devices include, for example, a computer (e.g., a desktop computer or laptop computer) with a web browser (e.g., Microsoft Internet Explorer, available from Microsoft Corporation, or Firefox, available from Mozilla Corporation). Mobile computing devices include, for example, smartphones and other smartphone devices.
Whereas many alterations and modifications of the present disclosure will no doubt become apparent to a person of ordinary skill in the art after having read the foregoing description, it is to be understood that the particular embodiments shown and described by way of illustration are in no way intended to be considered limiting. Further, the present subject matter has been described with reference to specific embodiments, but variations within the spirit and scope of the disclosure will occur to those skilled in the art. Note that the above examples are provided for illustrative purposes only, and are in no way to be construed as limiting the present disclosure.
Although the present disclosure has been described with reference to particular embodiments, the present disclosure is not intended to be limited to the specific details disclosed herein; rather, the disclosure extends to all modifications and generalizations thereof that will be obvious to those skilled in the art, including those which are within the broadest scope of the appended claims.

Claims (15)

1. A method for computer-assisted phenotyping or outcome prediction of a pathology using an enriched radiology data set, the method comprising:
receiving a radiology data set of a patient;
enriching the data set by performing analyte measurements and/or classification of anatomical, shape or geometric and/or tissue characteristics, types or traits and objective validation of a set of analytes relevant to pathology; and
processing the enriched data set using a machine learning classification method based on known baseline truth and determining one or both of: (i) a phenotype of the pathology; or (ii) a predicted outcome associated with the pathology;
wherein enriching the data set further comprises a spatial transformation of the data set to emphasize a biologically significant spatial context;
wherein the transforming comprises spatially transforming with respect to a cross-section of a pathology-specific structure in the radiological dataset to produce a pathology-appropriate transformed dataset; and is
Wherein the machine learning classification method uses a trained Convolutional Neural Network (CNN) that is applied to the pathologically appropriate transformed data set.
2. The method of claim 1, wherein an image volume in the radiological data set is preprocessed to form a region of interest containing a physiological target, lesion, and/or group of lesions to be analyzed, and wherein the region of interest includes one or more cross-sections, each cross-section consisting of a projection through the volume.
3. The method of claim 2, wherein if the region of interest contains an essentially oriented volume, a centerline of the volume is indicated such that the cross sections are taken along the centerline and a centroid of each cross section is determined, and wherein the pre-processed radiological data set is represented in a cartesian coordinate system in which axes are used to represent a perpendicular distance from the centerline and a rotation θ about the centerline, respectively.
4. The method of claim 3, wherein the region of interest includes a plurality of branches in a branch pathology-specific network with corresponding directionality, wherein a different Cartesian coordinate system is applied with respect to each branch.
5. The method of claim 4, wherein a single branch is assigned any initial overlapping section of the branch.
6. The method of claim 2, wherein if the region of interest contains an essentially directional volume, a centerline of the volume is indicated such that the cross-sections are taken along the centerline and a centroid of each cross-section is determined, and wherein sub-regions within each cross-section are classified according to objectively verifiable characteristics based on tissue composition.
7. The method of claim 6, wherein the tissue composition classification comprises classification of pathology-specific tissue characteristics, alone and/or in combination.
8. The method of claim 6, wherein tissue composition classification further comprises classification of intra-pathological bleeding (IPH).
9. The method of claim 6, wherein the sub-regions are further classified based on abnormal morphology.
10. The method of claim 9, wherein abnormal morphological classification comprises identification and/or classification of a lesion.
11. The method of claim 1, wherein the machine learning classification method uses a trained Convolutional Neural Network (CNN), wherein the CNN is based on a reconstruction of AlexNet, Inception, CaffeNet, or other open-source or commercially available frameworks.
12. The method of claim 1, wherein data set enrichment comprises fiducial true phase annotation of analyte sub-regions and providing a spatial context of how such analyte sub-regions exist in cross-sections, and wherein the spatial context comprises providing a coordinate system based on polar coordinates relative to a centroid of each cross-section.
13. The method of claim 12, wherein the coordinate system is normalized by normalizing radial coordinates relative to a tubular structure.
14. The method of claim 1, wherein the dataset is enriched by visually representing different analyte sub-regions with different colors, and wherein color-coded analyte regions are visually depicted against an annotated background that distinguishes between: (i) a central lumen region inside the inner lumen of the visualized pathology-specific target, lesion, and/or group of lesions; (ii) non-analyte sub-regions of the pathology-specific target, pathology and/or pathology group; and (iii) an outer region outside the outer wall of the pathology-specific target, lesion, and/or group of lesions.
15. The method of claim 1, wherein the machine learning classification method comprises constructing one or more machine learning models for computer-aided phenotyping or outcome prediction of pathology.
CN201980049912.9A 2018-05-27 2019-05-24 Method and system for utilizing quantitative imaging Pending CN112567378A (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US201862676975P 2018-05-27 2018-05-27
US62/676,975 2018-05-27
US201862771448P 2018-11-26 2018-11-26
US62/771,448 2018-11-26
PCT/US2019/033930 WO2019231844A1 (en) 2018-05-27 2019-05-24 Methods and systems for utilizing quantitative imaging

Publications (1)

Publication Number Publication Date
CN112567378A true CN112567378A (en) 2021-03-26

Family

ID=68697312

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201980049912.9A Pending CN112567378A (en) 2018-05-27 2019-05-24 Method and system for utilizing quantitative imaging

Country Status (5)

Country Link
EP (1) EP3803687A4 (en)
JP (2) JP7113916B2 (en)
KR (1) KR102491988B1 (en)
CN (1) CN112567378A (en)
WO (1) WO2019231844A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113499098A (en) * 2021-07-14 2021-10-15 上海市奉贤区中心医院 Carotid plaque detector based on artificial intelligence and evaluation method
CN113935258A (en) * 2021-10-15 2022-01-14 北京百度网讯科技有限公司 Computational fluid dynamics acceleration method, device, equipment and storage medium
CN114533201A (en) * 2022-01-05 2022-05-27 华中科技大学同济医学院附属协和医院 Novel in vitro broken blood clot auxiliary equipment
CN115546612A (en) * 2022-11-30 2022-12-30 中国科学技术大学 Image interpretation method and device combining graph data and graph neural network
CN115797333A (en) * 2023-01-29 2023-03-14 成都中医药大学 Personalized customized intelligent vision training method
CN116110589A (en) * 2022-12-09 2023-05-12 东北林业大学 Retrospective correction-based diabetic retinopathy prediction method
CN117132577A (en) * 2023-09-07 2023-11-28 湖北大学 Method for non-invasively detecting myocardial tissue tension and vibration

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111504675B (en) * 2020-04-14 2021-04-09 河海大学 On-line diagnosis method for mechanical fault of gas insulated switchgear
CN111784704B (en) * 2020-06-24 2023-11-24 中国人民解放军空军军医大学 MRI hip joint inflammation segmentation and classification automatic quantitative classification sequential method
US20220036560A1 (en) * 2020-07-30 2022-02-03 Biosense Webster (Israel) Ltd. Automatic segmentation of anatomical structures of wide area circumferential ablation points
CN112669439B (en) * 2020-11-23 2024-03-19 西安电子科技大学 Method for establishing intracranial angiography enhanced three-dimensional model based on transfer learning
JP2023551869A (en) * 2020-12-04 2023-12-13 コーニンクレッカ フィリップス エヌ ヴェ Pressure and X-ray image prediction of balloon inflation events
CN112527374A (en) * 2020-12-11 2021-03-19 北京百度网讯科技有限公司 Marking tool generation method, marking method, device, equipment and storage medium
US20230115927A1 (en) * 2021-10-13 2023-04-13 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for plaque identification, plaque composition analysis, and plaque stability detection
KR102597081B1 (en) * 2021-11-30 2023-11-02 (주)자비스 Method, apparatus and system for ensemble non-constructing inspection of object based on artificial intelligence
KR102602559B1 (en) * 2021-11-30 2023-11-16 (주)자비스 Method, apparatus and system for non-constructive inspection of object based on selective artificial intelligence engine
KR102567138B1 (en) * 2021-12-30 2023-08-17 가천대학교 산학협력단 Method and system for diagnosing hair health based on machine learning
CN115115654B (en) * 2022-06-14 2023-09-08 北京空间飞行器总体设计部 Object image segmentation method based on saliency and neighbor shape query
CN115294191B (en) * 2022-10-08 2022-12-27 武汉楚精灵医疗科技有限公司 Marker size measuring method, device, equipment and medium based on electronic endoscope

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050118632A1 (en) * 2003-11-06 2005-06-02 Jian Chen Polynucleotides and polypeptides encoding a novel metalloprotease, Protease-40b
US20050262031A1 (en) * 2003-07-21 2005-11-24 Olivier Saidi Systems and methods for treating, diagnosing and predicting the occurrence of a medical condition
US20070019846A1 (en) * 2003-08-25 2007-01-25 Elizabeth Bullitt Systems, methods, and computer program products for analysis of vessel attributes for diagnosis, disease staging, and surfical planning
CN1952981A (en) * 2005-07-13 2007-04-25 西门子共同研究公司 Method for knowledge based image segmentation using shape models
US20100088264A1 (en) * 2007-04-05 2010-04-08 Aureon Laboratories Inc. Systems and methods for treating diagnosing and predicting the occurrence of a medical condition
US20170046839A1 (en) * 2015-08-14 2017-02-16 Elucid Bioimaging Inc. Systems and methods for analyzing pathologies utilizing quantitative imaging
CN106682060A (en) * 2015-11-11 2017-05-17 奥多比公司 Structured Knowledge Modeling, Extraction and Localization from Images
CN107730489A (en) * 2017-10-09 2018-02-23 杭州电子科技大学 Wireless capsule endoscope small intestine disease variant computer assisted detection system and detection method

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006181025A (en) 2004-12-27 2006-07-13 Fuji Photo Film Co Ltd Abnormal shadow detecting method, device and program
US8734823B2 (en) * 2005-12-14 2014-05-27 The Invention Science Fund I, Llc Device including altered microorganisms, and methods and systems of use
EP1976567B1 (en) * 2005-12-28 2020-05-13 The Scripps Research Institute Natural antisense and non-coding rna transcripts as drug targets
US8774479B2 (en) * 2008-02-19 2014-07-08 The Trustees Of The University Of Pennsylvania System and method for automated segmentation, characterization, and classification of possibly malignant lesions and stratification of malignant tumors
US9878177B2 (en) 2015-01-28 2018-01-30 Elekta Ab (Publ) Three dimensional localization and tracking for adaptive radiation therapy
JP7075882B2 (en) 2016-03-02 2022-05-26 多恵 岩澤 Diagnostic support device for lung field lesions, control method and program of the device
US10163040B2 (en) 2016-07-21 2018-12-25 Toshiba Medical Systems Corporation Classification method and apparatus
KR101980955B1 (en) * 2016-08-22 2019-05-21 한국과학기술원 Method and system for analyzing feature representation of lesions with depth directional long-term recurrent learning in 3d medical images

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050262031A1 (en) * 2003-07-21 2005-11-24 Olivier Saidi Systems and methods for treating, diagnosing and predicting the occurrence of a medical condition
US20070019846A1 (en) * 2003-08-25 2007-01-25 Elizabeth Bullitt Systems, methods, and computer program products for analysis of vessel attributes for diagnosis, disease staging, and surfical planning
US20050118632A1 (en) * 2003-11-06 2005-06-02 Jian Chen Polynucleotides and polypeptides encoding a novel metalloprotease, Protease-40b
CN1952981A (en) * 2005-07-13 2007-04-25 西门子共同研究公司 Method for knowledge based image segmentation using shape models
US20100088264A1 (en) * 2007-04-05 2010-04-08 Aureon Laboratories Inc. Systems and methods for treating diagnosing and predicting the occurrence of a medical condition
US20170046839A1 (en) * 2015-08-14 2017-02-16 Elucid Bioimaging Inc. Systems and methods for analyzing pathologies utilizing quantitative imaging
CN106682060A (en) * 2015-11-11 2017-05-17 奥多比公司 Structured Knowledge Modeling, Extraction and Localization from Images
CN107730489A (en) * 2017-10-09 2018-02-23 杭州电子科技大学 Wireless capsule endoscope small intestine disease variant computer assisted detection system and detection method

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113499098A (en) * 2021-07-14 2021-10-15 上海市奉贤区中心医院 Carotid plaque detector based on artificial intelligence and evaluation method
CN113935258A (en) * 2021-10-15 2022-01-14 北京百度网讯科技有限公司 Computational fluid dynamics acceleration method, device, equipment and storage medium
CN114533201A (en) * 2022-01-05 2022-05-27 华中科技大学同济医学院附属协和医院 Novel in vitro broken blood clot auxiliary equipment
CN115546612A (en) * 2022-11-30 2022-12-30 中国科学技术大学 Image interpretation method and device combining graph data and graph neural network
CN116110589A (en) * 2022-12-09 2023-05-12 东北林业大学 Retrospective correction-based diabetic retinopathy prediction method
CN116110589B (en) * 2022-12-09 2023-11-03 东北林业大学 Retrospective correction-based diabetic retinopathy prediction method
CN115797333A (en) * 2023-01-29 2023-03-14 成都中医药大学 Personalized customized intelligent vision training method
CN117132577A (en) * 2023-09-07 2023-11-28 湖北大学 Method for non-invasively detecting myocardial tissue tension and vibration
CN117132577B (en) * 2023-09-07 2024-02-23 湖北大学 Method for non-invasively detecting myocardial tissue tension and vibration

Also Published As

Publication number Publication date
JP7113916B2 (en) 2022-08-05
KR20210042267A (en) 2021-04-19
EP3803687A1 (en) 2021-04-14
EP3803687A4 (en) 2022-03-23
JP2022123103A (en) 2022-08-23
KR102491988B1 (en) 2023-01-27
WO2019231844A1 (en) 2019-12-05
JP2021525929A (en) 2021-09-27

Similar Documents

Publication Publication Date Title
US11607179B2 (en) Non-invasive risk stratification for atherosclerosis
US20240078672A1 (en) Functional measures of stenosis significance
US11120312B2 (en) Quantitative imaging for cancer subtype
US11696735B2 (en) Quantitative imaging for instantaneous wave-free ratio
US11113812B2 (en) Quantitative imaging for detecting vulnerable plaque
KR102491988B1 (en) Methods and systems for using quantitative imaging
US11676359B2 (en) Non-invasive quantitative imaging biomarkers of atherosclerotic plaque biology
US20220012877A1 (en) Quantitative imaging for detecting histopathologically defined plaque fissure non-invasively
Singh et al. Machine learning in cardiac CT: basic concepts and contemporary data
Chao et al. Deep learning predicts cardiovascular disease risks from lung cancer screening low dose computed tomography
US20220012865A1 (en) Quantitative imaging for detecting histopathologically defined plaque erosion non-invasively
US20230386026A1 (en) Spatial analysis of cardiovascular imaging
Babu et al. A review on acute/sub-acute ischemic stroke lesion segmentation and registration challenges
Anderson Deep learning for lung cancer analysis
Danala Developing and Applying CAD-generated Image Markers to Assist Disease Diagnosis and Prognosis Prediction
Christina Sweetline et al. A Comprehensive Survey on Deep Learning-Based Pulmonary Nodule Identification on CT Images

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination