US20220406410A1 - System and method for creating, querying, and displaying a MIBA master file - Google Patents

System and method for creating, querying, and displaying a MIBA master file

Info

Publication number
US20220406410A1
US20220406410A1 (Application No. US17/582,601)
Authority
US
United States
Prior art keywords
miba
data
matrix
moving window
master file
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US17/582,601
Inventor
Moira F. Schieke
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cubisme Inc
Original Assignee
Cubisme Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cubisme Inc filed Critical Cubisme Inc
Priority to US17/582,601
Publication of US20220406410A1
Legal status: Abandoned

Classifications

    • G06T 7/0012 Image analysis; inspection of images: biomedical image inspection
    • G06T 7/11 Segmentation; edge detection: region-based segmentation
    • G06T 7/143 Segmentation; edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
    • G06T 2207/10088 Image acquisition modality: magnetic resonance imaging [MRI]
    • G06T 2207/10104 Image acquisition modality: positron emission tomography [PET]
    • G06T 2207/20084 Special algorithmic details: artificial neural networks [ANN]
    • G06T 2207/30004 Subject of image: biomedical image processing
    • G06T 2207/30096 Subject of image: tumor; lesion
    • G16B 5/00 ICT specially adapted for modelling or simulations in systems biology, e.g. gene-regulatory networks, protein interaction networks or metabolic networks
    • G16B 25/00 ICT specially adapted for hybridisation; ICT specially adapted for gene or protein expression
    • G16B 40/00 ICT specially adapted for biostatistics; ICT specially adapted for bioinformatics-related machine learning or data mining, e.g. knowledge discovery or pattern finding
    • G16H 30/40 ICT specially adapted for processing medical images, e.g. editing
    • G16H 50/70 ICT specially adapted for mining of medical data, e.g. analysing previous cases of other patients

Definitions

  • the present disclosure relates to a system and method for creating highly embedded medical image files with high density bioinformatics and annotation data for use in patient medical care using image and data displays in multimedia devices.
  • Precision medicine is a medical model that proposes the customization of healthcare practices by creating advancements in disease treatments and prevention by taking into account individual variability in genes, environment, and lifestyle for each person.
  • diagnostic testing is often deployed for selecting appropriate and optimal therapies based on the context of a patient's genetic content or other molecular or cellular analysis.
  • a biomarker is a characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes, or pharmacologic responses to a treatment. Such biomarkers are particularly useful in cancer diagnosis and treatment, as well as radiogenomics.
  • Radiogenomics is an emerging field of research where cancer imaging features are correlated with indices of gene expression. Identification of new biomarkers, such as for radiogenomics, will be facilitated by advancements in big data technology.
  • Big data represents the information assets characterized by such a high volume, velocity and variety to require specific technology and analytical methods for its transformation into value. Big data is used to describe a wide range of concepts: from the technological ability to store, aggregate, and process data, to the cultural shift that is pervasively invading business and society, both drowning in information overload. Machine learning methods, such as classifiers, can be used to output probabilities of features in sets of individual patient medical data based on comparisons to population-based big data datasets.
  • a method includes receiving, by a medical imaging bioinformatics annotated (“MIBA”) system, image data from a sample, registering, by the MIBA system, the image data to a three-dimensional (3D) model selected from a population database for obtaining source data, and receiving selection, by the MIBA system, of a volume of interest.
  • the method also includes extracting, by the MIBA system, a portion of the source data corresponding to the volume of interest, defining, by the MIBA system, a moving window, applying, by the MIBA system, the moving window to the portion of the source data for obtaining a dataset, and applying, by the MIBA system, a convolution algorithm to the dataset for obtaining convoluted data.
  • the method further includes creating, by the MIBA system, a MIBA master file from the convoluted data and determining, by the MIBA system, a probability of a biomarker from the MIBA master file.
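  • For illustration only, the claimed flow (register, select a volume of interest, apply a moving window, apply a convolution algorithm, derive a biomarker probability) can be pictured as a toy Python sketch. Every array size, the window parameters, and the logistic stand-in for the MLCA below are invented; the disclosure's convolution algorithm would be a trained machine-learning classifier.

```python
# Toy end-to-end sketch of the claimed flow; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

source = rng.random((32, 32, 32))     # stand-in for image data registered to a ref3D
voi = source[8:24, 8:24, 8:24]        # selected volume of interest (16^3 voxels)

w, step = 4, 2                        # moving-window edge length and step size, in voxels
reads = np.array([
    voi[i:i + w, j:j + w, k:k + w].mean()
    for i in range(0, voi.shape[0] - w + 1, step)
    for j in range(0, voi.shape[1] - w + 1, step)
    for k in range(0, voi.shape[2] - w + 1, step)
])                                    # one read per moving-window stop

# Stand-in "convolution algorithm": squash each read into (0, 1).
prob = 1.0 / (1.0 + np.exp(-(reads - reads.mean()) / reads.std()))
print(prob.shape)                     # one biomarker probability per read
```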
  • a medical imaging bioinformatics annotated (“MIBA”) system includes a database configured to store a MIBA master file and a MIBA creation unit.
  • the MIBA creation unit is configured to receive image data from a sample, register the image data to a three-dimensional (3D) model selected from a population database for obtaining source data, and extract voxel data from the source data and enter the voxel data into the database.
  • the MIBA creation unit is also configured to receive selection of a volume of interest, extract a portion of the voxel data from the database corresponding to the volume of interest, and create the MIBA master file from the portion of the voxel data.
  • the MIBA creation unit is additionally configured to store the MIBA master file in the database.
  • the MIBA system further includes a MIBA query system configured to receive a query, retrieve the MIBA master file from the database, extract data from the MIBA master file in response to the query, and present the extracted data on an output interface.
  • the method includes creating, by a medical imaging bioinformatics annotated (“MIBA”) system, a MIBA master file.
  • Creating the MIBA master file includes receiving, by the MIBA system, image data from a sample, performing, by the MIBA system, a first registration on the image data for obtaining in-slice registered data, and performing, by the MIBA system, a second registration for registering the in-slice registered data to a three-dimensional (3D) model selected from a population database for obtaining source data.
  • Creating the MIBA master file also includes extracting, by the MIBA system, voxel data from the source data and storing the voxel data in a MIBA database, receiving, by the MIBA system, selection of a volume of interest, extracting, by the MIBA system, a portion of the voxel data corresponding to the volume of interest, creating, by the MIBA system, the MIBA master file from the portion of the voxel data, and storing, by the MIBA system, the MIBA master file in the MIBA database.
  • the method further includes receiving, by the MIBA system, a query, extracting, by the MIBA system, data from the MIBA master file in response to the query, and presenting, by the MIBA system, the extracted data on an output interface.
  • FIG. 1 illustrates at least some limitations of conventional DICOM images for medical imaging.
  • FIG. 2 is an example flowchart outlining operations for creating and using a Medical Imaging Bioinformatics Annotated master file” (MIBA master file), in accordance with some embodiments of the present disclosure.
  • FIG. 3 illustrates an overview of creating the MIBA master file, in accordance with some embodiments of the present disclosure.
  • FIGS. 4 A and 4 B are example flowcharts outlining operations for creating the MIBA master file, in accordance with some embodiments of the present disclosure.
  • FIG. 5 illustrates selection of a 3D model for creating the MIBA master file, in accordance with some embodiments of the present disclosure.
  • FIG. 6 illustrates an in-slice registration on image data for creating the MIBA master file, in accordance with some embodiments of the present disclosure.
  • FIG. 7 illustrates a secondary registration on the in-sliced registered data for creating the MIBA master file, in accordance with some embodiments of the present disclosure.
  • FIG. 8 illustrates extracting voxel data from the output of the secondary registration and entering into a MIBA database for creating the MIBA master file, in accordance with some embodiments of the present disclosure.
  • FIG. 9 is an example flowchart outlining operations for entering the voxel data in the MIBA database for creating the MIBA master file, in accordance with some embodiments of the present disclosure.
  • FIG. 10 illustrates another example of a portion of the MIBA database, in accordance with some embodiments of the present disclosure.
  • FIGS. 11 A, 11 B, and 11 C depict example moving window configurations used for creating the MIBA master file, in accordance with some embodiments of the present disclosure.
  • FIG. 12 A is an example moving window and an output value defined within the moving window, in accordance with some embodiments of the present disclosure.
  • FIG. 12 B is a cross-sectional view of the image from FIG. 12 A in which the moving window has a cylindrical shape, in accordance with some embodiments of the present disclosure.
  • FIG. 12 C is a cross-sectional view of the image of FIG. 12 A in which the moving window has a spherical shape, in accordance with some embodiments of the present disclosure.
  • FIG. 13 is an example moving window and how the moving window is moved along x and y directions, in accordance with some embodiments of the present disclosure.
  • FIG. 14 A is a perspective view of multiple slice planes and moving windows in those slice planes, in accordance with some embodiments of the present disclosure.
  • FIG. 14 B is an end view of multiple slice planes and their corresponding moving windows, in accordance with some embodiments of the present disclosure.
  • FIG. 14 C is an example in which image slices for the sample are taken at multiple different angles, in accordance with some embodiments of the present disclosure.
  • FIG. 14 D is an example in which the image slices are taken at additional multiple different angles in a radial pattern, in accordance with some embodiments of the present disclosure.
  • FIG. 15 A shows assembling multiple two-dimensional (“2D”) image slices into a 3D matrix, in accordance with some embodiments of the present disclosure.
  • FIG. 15 B shows an example matrix operation applied to 3D matrices, in accordance with some embodiments of the present disclosure.
  • FIG. 15 C shows a 2D matrix obtained by applying a machine learning convolution algorithm (“MLCA”) to a 3D matrix, in accordance with some embodiments of the present disclosure.
  • FIG. 16 shows selecting corresponding matrix columns from various 3D matrices and applying the MLCA on the matrix columns, in accordance with some embodiments of the present disclosure.
  • FIG. 17 shows multiple 2D matrices obtained for a particular region of interest from various moving windows, in accordance with some embodiments of the present disclosure.
  • FIG. 18 A shows an example “read count kernel” for determining a number of moving window reads per voxel, in accordance with some embodiments of the present disclosure.
  • FIG. 18 B shows a reconstruction example in which a 2D final voxel grid is produced from various intermediate 2D matrices, in accordance with some embodiments of the present disclosure.
  • FIG. 18 C is another example of obtaining the 2D final voxel grid, in accordance with some embodiments of the present disclosure.
  • FIG. 19 shows an updated MIBA database including data from the 2D final voxel grid, in accordance with some embodiments of the present disclosure.
  • FIG. 20 shows an example of the MIBA master file including MIBA voxel data, in accordance with some embodiments of the present disclosure.
  • FIG. 21 is an example flowchart outlining operations for entering annotation data in the MIBA master file, in accordance with some embodiments of the present disclosure.
  • FIG. 22 shows an example of an updated MIBA master file including the annotation data, in accordance with some embodiments of the present disclosure.
  • FIG. 23 shows multiple MIBA master files at varying time points, in accordance with some embodiments of the present disclosure.
  • FIG. 24 shows an example of the MIBA master file at one timepoint, in accordance with some embodiments of the present disclosure.
  • FIG. 25 is an example block diagram of a MIBA creation unit of a MIBA system, in accordance with some embodiments of the present disclosure.
  • FIG. 26 is an example block diagram of a MIBA query system of the MIBA system, in accordance with some embodiments of the present disclosure.
  • FIG. 27 illustrates creating and using the MIBA master file, in accordance with some embodiments of the present disclosure.
  • FIG. 28 shows an example of using a population database along with the MIBA master file, in accordance with some embodiments of the present disclosure.
  • FIG. 29 shows examples of labeling anatomy in the MIBA master file, in accordance with some embodiments of the present disclosure.
  • FIGS. 30 A- 30 K are charts of example matching parameters for use in analyzing image datasets, in accordance with some embodiments of the present disclosure.
  • the present disclosure is directed to a new singular “rich data” medical imaging and biodata organizational system that is configured to power a new level of precision analytics and display of a patient's body for a new era of precision medicine. Therefore, systems and methods for creating and querying a new data structure, referred to herein as a “Medical Imaging Bioinformatics Annotated Master File” (MIBA master file), are described.
  • the MIBA master file is a compilation of a variety of information pertaining to a sample (e.g., a patient, whether human or non-human).
  • the MIBA master file may include a compilation of every voxel of the human body coded with multiple forms of metadata.
  • the MIBA master file may additionally include information such as a date when the MIBA master file was created, any notes added by a user, specific information pertaining to one or more regions of interest of the sample, attributes of the sample (e.g., age, height, weight, etc.), type of image data collected from the sample, etc.
  • the MIBA master file may include other types of data, as described herein, or as considered desirable to include in the MIBA master file.
  • the MIBA master file is configured to be queried by associated computing systems and displayed in various forms, including, for example, on virtual body maps, high resolution 3D displays of metadata, sparse data displayed on Avatars on smartphones, etc. Doctors may be able to edit the MIBA master file, such as by creating markings of anatomical regions, text annotations, etc.
  • FIG. 1 illustrates several limitations of using DICOM images for medical imaging.
  • DICOM images are individually acquired slices or anatomically segmented volumes of a human body. Each medical imaging study acquires a multitude of individual DICOM image files, akin to separate single pages in a book. A single medical imaging study today can take up memory roughly equivalent to 800 books, and experts predict each study memory will be roughly equivalent to 800,000 books, or 1 TB of memory, in the near future.
  • the number of DICOM images per patient study is increasing rapidly over time. For example, medical image DICOM volumes per patient study have been increasing, perhaps even exponentially, over the last years and these increases are projected to continue.
  • DICOM images are stored in the PACS (Picture Archiving and Communication System)—a system that was developed in the 1980's when DICOM volumes were low and before widespread use of electronic health records (EHR). Although efforts have been pursued to integrate PACS with EHR, the systems suffer from core design limitations. These increasing volumes of non-collated DICOM, stored in an antiquated PACS, are causing increasing strains for healthcare professionals and limit needed progress for precision medicine.
  • DICOM image files are voluminous, non-collated (e.g., separated), and widely dispersed (e.g., in the form of individual slices or segmental volumes), and generally unsuitable for present day medical imaging purposes.
  • DICOM images consume a lot of memory, are not designed to be integrated with currently used systems, and are otherwise unmanageable.
  • other digital medical data such as biopsy, genetics, and other clinical data is also exploding. DICOM image based systems are not able to keep pace with this exploding clinical data.
  • (B) State-of-the-art DICOM is based on a rigid Cartesian Coordinate System, which has limited multidimensionality (e.g., up to approximately 6 dimensions per voxel), such as those used for four dimensional (4D) flow MR imaging.
  • a voxel is a unit of graphical information that defines a point in three-dimensional space, and here defined as a unit where all sides of the voxel form 90 degree angles.
  • 3D three-dimensional
  • Existing Avatars are anatomically incorrect and imprecisely topologically mapped. These Avatars also do not contain mapped patient precision medical imaging data which has been anatomically collated with the other digital health data. Thus, although patient avatars may be used in limited capacity to display patient medical data, no high-dimensionality precision virtual patient model system exists for integrated precision analytics and high-dimensionality virtual patient display.
  • the present disclosure provides solutions.
  • the present disclosure provides for the creation of a MIBA master file.
  • the MIBA master file allows deployment of multiple new functionalities for clinical patient care.
  • An encoding system is created which codes each individual voxel in a master file standardized volume with metadata including specific biomarker signature information generated in concert with big data population databases (such as early detection of cancer, tissue changes over time, and treatment effects), as well as data from annotations made by physicians and radiologists.
  • the MIBA master file may be leveraged by multiple types of image processors and output interfaces, such as Query engines for data mining, database links for automatic uploads to pertinent big data databases, and output apps for output image viewing, information viewing, and annotation creation by radiologists, surgeons, interventionists, individual patients, and referring physicians.
  • New cloud-based systems will be a core for new informatics technology for seamless integration of massive datasets across large networks and deployment via a multitude of potential applications.
  • healthcare delivery systems will have the capacity to compare individual patient data to vast population databases at the speed of accumulation of new patient data.
  • Patient care may be advanced by each patient having a transparent and holistic view of their entire medical status from full and complete proprietary datasets of their own records that are powered with informatics data.
  • These new powerful systems form the basis for identification and use of a multitude of new imaging and other biomarkers, which will be the cornerstones for advancing patient care in a new era of precision medicine.
  • Referring to FIG. 2 , an example flowchart outlining a process 100 for creating and using a MIBA master file is shown, in accordance with some embodiments of the present disclosure.
  • the process 100 provides an overview of various user interfaces used in the creation, storage, querying, and display of a MIBA master file.
  • a MIBA system receives, from a user, a selection of a patient and a volume of interest (VOI) of the patient, for example “head.”
  • the MIBA system automatically selects or receives selection from the user of a matching (or substantially matching) virtual 3D patient model (referred to herein as “ref3D” ( 55 )) from a population of previously compiled 3D patient models, which most closely resembles the patient (e.g., 35 yo female weighing 135 lbs, T1 type images).
  • if the patient has a prior MIBA file ( 155 ) of the matching volume of interest, it can be used instead of the ref3D.
  • the MIBA system creates a MIBA master file for the patient based on the ref3D. Creation of the MIBA master file is discussed in greater detail below.
  • the MIBA system stores the MIBA master file within a database associated with the MIBA system and updates the MIBA master file, if necessary, at operations 120 - 135 .
  • the MIBA system makes the MIBA master file available to the user for querying (e.g., for extracting certain types of data or information) and the MIBA system may display the results on a display interface associated with the MIBA system at operation 140 .
  • the MIBA system may receive selection of additional new data from the user, and in response, the MIBA system may update the MIBA master file, as indicated at operations 120 - 135 .
  • a user decides whether to create updates to the created MIBA file. For example, a clinical Radiologist may place an annotation into the MIBA file stating that a lesion requires surveillance imaging in six months.
  • the MIBA file is sent to storage.
  • the MIBA file can be updated. For example, a patient returns for surveillance imaging of the prior annotated MIBA file.
  • a user interface allows the user to add the additional data to the MIBA file, again starting at operation 105 .
  • the MIBA master file is also referred to herein as the "MIBA file," and the like.
  • inputs include a multitude of stacks of patient image slices of various image modalities (MRI, CT, US, etc.) and of various types of MRI sequences (T1, T2, DWI, etc.).
  • These patient image slices may be DICOM images or other types of medical images.
  • the input image files are registered to a ref3D, as indicated via reference numeral 55 , or alternately a prior matching MIBA file ( 150 ) if available.
  • the registration may be rigid or non-rigid as needed for precision mapping but while maintaining anatomical correctness to the patient's true body proportions.
  • voxel values in the input image files are mapped to voxels of the ref3D or prior MIBA file.
  • Biomarkers are also mapped to voxels either via encoding in the ref3D or prior MIBA file or via Moving Window Algorithms detailed below.
  • for example, a voxel may be identified as a voxel in the patient lung.
  • the population ref3D can be made of any and all imaging modalities, and may contain metadata, including data on anatomical location.
  • Inputs to an anatomically organized MIBA file include standard DICOM images from CT, MRI, and US, as well as any other file type, such as TIFF or JPEG files from optical cameras and other sources of images. These images can come from alternate sources other than machines directly, such as from iPhone interfaces.
  • FIGS. 4 A and 4 B are example flowcharts outlining a process 200 for forming a Medical Imaging Bioinformatics Annotated master file (“MIBA master file”).
  • source images are obtained from a scanner (e.g., any type of medical imaging device, including imaging devices used for small animal studies (e.g., charts shown in FIGS. 30 A-K )).
  • the image data may be obtained from various imaging modalities such as magnetic resonance imaging (MRI), computed tomography (CT) imaging, positron emission tomography (PET) imaging, single-photon emission computed tomography (SPECT) imaging, micro-PET imaging, micro-SPECT imaging, Raman imaging, bioluminescence optical (BLO) imaging, ultrasound imaging, or any other suitable imaging technique.
  • when imaging modalities include Raman imaging, images may have a resolution of 25 nanometers or as desired, such that the created MIBA master file, or a portion of it, has super high resolution on a nanometer scale, allowing a user to "zoom in" to a very small structure.
  • Standard post-processing may be used to generate parameter maps from simple image based calculations, such as ADC measures from multiple diffusion weighted images (DWI) or Ktrans measures from multiple images in dynamic contrast-enhanced MRI image sets.
  • acquired images and maps are re-sliced in plane to match the x-y resolution of a reference standard 3D volume (ref3D).
  • In-slice registrations are performed such that sets of images acquired during a single scanning session place each anatomical location in a matching position on each image in the image set.
  • images obtained at multiple slice orientations are secondarily registered to a reference standard 3D volume (ref3D) or prior MIBA file of the specified body part, such as head, limb, or whole-body.
  • a database to hold aggregate voxel data is started with standardized labels for each voxel in the ref3D or prior MIBA file at operation 225 .
  • Data is systematically entered into the database rows, with five general types of data entered for each corresponding labelled voxel in its row: source values from source images, moving window (MW) data, MW classifier and parameter calculation output data, super-resolution (SR) solution output data, and annotation data, as sketched below.
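  • For illustration only, one way to picture a database row holding these five data types is a small Python record; the field names and voxel labels below are invented placeholders, not the CDE codes the disclosure contemplates.

```python
from dataclasses import dataclass, field

@dataclass
class MibaVoxelRow:
    """One database row per ref3D/MIBA voxel (field names are illustrative)."""
    voxel_id: str                                        # standardized voxel label
    source_values: dict = field(default_factory=dict)   # e.g. {"A_SoD_T1": 412.0}
    mw_data: dict = field(default_factory=dict)          # moving-window (MW) reads
    mw_outputs: dict = field(default_factory=dict)       # MW classifier/parameter outputs
    sr_outputs: dict = field(default_factory=dict)       # super-resolution (SR) solutions
    annotations: list = field(default_factory=list)      # physician annotations

row = MibaVoxelRow("V0001234")                           # hypothetical voxel label
row.source_values["A_SoD_T1"] = 412.0
row.annotations.append("lesion: surveillance imaging in six months")
```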
  • a volume of interest is selected from the MIBA file dataset at operation 220 for further analytics to add biomarker information, either by user selection or computer software commands.
  • the further steps for adding biomarker data include defining moving windows (MW) at operation 230 , applying the MW at operation 231 , creating 3D matrices at operation 232 , refining the 3D matrices at operation 233 , applying matrix operations at operation 234 , selecting user columns at operation 235 , applying a biomarker specific machine learning convolutional algorithm (MLCA) to create 2D matrices at operation 245 , applying super-resolution algorithms to solve for each associated MIBA file output voxel value at operation 246 , adding annotations at operation 250 , allowing data compression at operation 251 , and storing the MIBA file at operation 125 or proceeding to further analytics at operation 255 .
  • Further analytics could include a multitude of possible algorithms in the future, but specifically can include adding new biomarker information at operation 260 . If more biomarker data is to be added, the process repeats and loops back to operation 220 . At various points along the process, voxelwise data can be added to the MIBA file in operation 240 , as will be further described below.
  • FIG. 5 shows a schematic of the process for creating a reference 3D image volume 55 , which is composed of a standard size high-resolution volume covering a reference patient anatomical volume 35 from a similar population as the patient (for example, man aged 50 years old, T1 type images). Any type of image modality or type or parameter maps may be used (e.g., see charts of FIGS. 30 A-K ) for obtaining the image volume 35 .
  • a 3D grid is selected with voxels of a desired resolution 45 .
  • FIG. 5 shows a sparse example with a total of, for example, 324 voxels covering a reference head and neck of the image volume 35 . It is to be noted that files may need to be much larger for clinical use.
  • a 3D reference volume voxel grid resolution may be set at 0.5 mm × 0.5 mm × 0.5 mm, and the X-Y-Z field of view (FOV) may be set at 30 cm × 30 cm × 30 cm, for a total of 216,000,000 voxels when used for clinical purposes.
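  • The voxel count follows directly from the stated grid and field of view; a two-line check in Python:

```python
# A 30 cm cube sampled at 0.5 mm isotropic resolution.
n_per_axis = int(300.0 / 0.5)   # 30 cm = 300 mm -> 600 voxels per axis
print(n_per_axis ** 3)          # 216000000, matching the 216,000,000 voxels above
```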
  • a large population of ref3D may be required for inputs for the systems in order to obtain close matching with each individual patient and selected ref3D.
  • source images are registered in operation 211 to in-slice images obtained at the same timepoint on the same machine or on coordinated machines (such as registration of PET and CT acquired on separate machines) as part of creating the MIBA master file.
  • re-slicing of the images may be needed to obtain matching datasets with matching resolutions per modality across various time points.
  • re-slicing may also be needed to align voxel boundaries when resolutions between modalities are different.
  • FIG. 6 depicts registration of the image coordinates associated with the datasets of selected time point 2. Specifically, FIG. 6 illustrates a number of parameter maps for parameters associated with various imaging modalities (e.g., DCE-MRI, ADC, DWI, T2, T1, tau, and PET).
  • the image coordinates for the various parameter maps are registered to enable the combined use of the various parameter maps in the creation of the MIBA master file.
  • Registration may be performed using rigid marker based registration or any other suitable rigid or non-rigid registration technique.
  • Example registration techniques may include B-Spline automatic registration, optimized automatic registration, Landmark least squares registration, midsagittal line alignment, or any other suitable registration technique.
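  • As one hedged illustration of such a registration step, the sketch below uses the open-source SimpleITK library (an implementation choice, not named by the disclosure) to perform a rigid, mutual-information registration of a patient image stack to a ref3D volume; the file names are hypothetical.

```python
import SimpleITK as sitk

fixed = sitk.ReadImage("ref3d.nii.gz", sitk.sitkFloat32)        # hypothetical ref3D volume
moving = sitk.ReadImage("slice_stack.nii.gz", sitk.sitkFloat32)  # hypothetical image set

# Initialize a rigid (Euler) transform by aligning geometric centers.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInitialTransform(initial, inPlace=False)
reg.SetInterpolator(sitk.sitkLinear)

transform = reg.Execute(fixed, moving)
# Resample the moving stack onto the ref3D grid (matching resolution).
registered = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```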
  • FIG. 7 describes the secondary rigid registration of in-slice registered image sets 211 to a ref3D 55.
  • the image sets may be acquired using a medical imaging scanner 205 at multiple slice angles to create a resultant patient specific volume MIBA file 155 with resolution matching the original ref3D volume.
  • four types of the image sets 211 (e.g., image sets A, B, C, and D) may be acquired.
  • images from image set A are registered to ref3D volume 55 which may include the prior example image set of T1, T2, and DWI images.
  • images from the other image sets are registered to corresponding ref3D volumes.
  • the new registered pt3Dvol MIBA file 155 would contain source voxel data with matching resolution (for example, 0.5 mm × 0.5 mm × 0.5 mm) to the ref3D. This process would be repeated for each image set (B, C, D) to generate voxel metadata for the singular pt3Dvol MIBA file 155 .
  • while FIG. 7 shows a rigid registration mechanism, a non-rigid registration technique may be used to map image slices from any orientation into a warped plane in an x-, y-, or z-plane.
  • FIG. 8 displays how voxel source image data registered to the ref3D or prior MIBA file is entered into a MIBA file 155 associated with a MIBA creation system.
  • the MIBA file can take the form of a 3D file 155 , or be organized in a spreadsheet format showing collated and coded data for each voxel in the MIBA file 155 .
  • An example spreadsheet format 225 of a portion of the MIBA database includes a variety of information pertaining to the registered image data.
  • the format 225 includes voxels labelled and organized by rows.
  • Voxel values are entered in locations where registration of source images led to a new registered voxel value in the registered MIBA file 3D volume.
  • Column headings are entered as common data elements (CDE), such as those provided by the NIH (https://www.nlm.nih.gov/cde/summary_table_1.html) or other desired standard or created codes.
  • a column header code for the source data from image acquisition A for T1 images is labelled "A_SoD_T1", and voxel data is entered in corresponding voxels at the corresponding database location coded to the MIBA file 3D volume.
  • the format 225 is only an example. In other embodiments, additional, fewer, or different information may be included in the format 225 .
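  • A toy rendering of such a spreadsheet fragment, using pandas (an implementation choice not specified by the disclosure); only the "A_SoD_T1" code comes from the text, and the other column codes and values are invented:

```python
import pandas as pd

# Illustrative fragment of the MIBA spreadsheet: one row per labelled voxel,
# CDE-style column codes as headers. Missing entries mean no registered source
# value landed on that voxel for that acquisition.
miba = pd.DataFrame(
    {
        "A_SoD_T1":  [412.0, 398.5, None],    # source data, acquisition A, T1
        "A_SoD_T2":  [88.1, 90.4, None],      # hypothetical code: acquisition A, T2
        "B_SoD_DWI": [None, 1.2e-3, 1.1e-3],  # hypothetical code: acquisition B, DWI
    },
    index=["V0000001", "V0000002", "V0000003"],  # standardized voxel labels
)
miba.index.name = "voxel"
print(miba)
```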
  • FIG. 9 shows a flowchart outlining a process 225 for source image voxel data entry for MIBA file 3D volume.
  • Data entry into the MIBA spreadsheet database is similar for all potential image datasets registered to the ref3D or prior MIBA file.
  • the MIBA database includes a compilation of data or records from the image sets 211 and each of the records may be in the format 225 .
  • each of the records 225 in the MIBA database for the image sets 211 may have formats (e.g., the format 225 ) that are somewhat different. For example, based upon the information that is extracted from the image sets 211 , the corresponding format of those image sets may vary as well.
  • a record for the image set A of the image sets 211 is created and added to the MIBA spreadsheet database
  • a record for the image set B is created and added to the MIBA spreadsheet database
  • records for image sets C and D, respectively, are created and added to the MIBA spreadsheet database.
  • Standard registration technique methods are used to determine the specific voxel values in the MIBA file grid from registered inputted data.
  • FIG. 10 shows source data entry into the MIBA database as depicted in FIG. 9 .
  • a Volume-of-Interest is selected from the MIBA file either by user selection of image display or via a computer software algorithm.
  • a slice of a MIBA file may be chosen and displayed in axial orientation, with a slice thickness of 1 mm and an in-plane resolution of 1 mm × 1 mm.
  • the source data is then chosen for display; examples would include T1 values or parameter map values, such as Ktrans from DCE-MRI data.
  • the Volume-of-Interest (VOI) for running MW algorithms is selected from the displayed images.
  • FIG. 10 provides an overview of MW matrix data entry into the MIBA spreadsheet file.
  • Moving window parameters are chosen which include MW size, shape, point of origin, step size, and path.
  • Selected MW is run across the images and a matrix of data is created. The process is repeated for each desired source data input and data is collated into the 3D matrix where each column holds data for matching MW coordinates and parameters for the various types of source data.
  • a single column of the 3D matrix may have data for the same MW including T1, T2, DWI, ADC, and Ktrans values at matching anatomical locations.
  • the resultant MW 3D matrix file can be entered as an embedded metadata file into a selected corresponding cell of the MIBA spreadsheet database. Details are further described below.
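  • A minimal sketch of this collation step, assuming NumPy arrays for the registered parameter maps; the map names, sizes, window width, and step size are illustrative only:

```python
import numpy as np

def moving_window_reads(img, w=3, step=1):
    """Mean value inside a w x w square window swept across one 2D map."""
    rows = range(0, img.shape[0] - w + 1, step)
    cols = range(0, img.shape[1] - w + 1, step)
    return np.array([[img[r:r + w, c:c + w].mean() for c in cols] for r in rows])

rng = np.random.default_rng(1)
# Toy registered parameter maps for one slice (stand-ins for T1, T2, ADC, ...).
param_maps = {"T1": rng.random((16, 16)),
              "T2": rng.random((16, 16)),
              "ADC": rng.random((16, 16))}

# Each layer along the z-axis holds MW reads for one source-data type, so a
# single (row, col) column collects all parameters at one window location.
matrix_3d = np.stack([moving_window_reads(m) for m in param_maps.values()], axis=-1)
print(matrix_3d.shape)   # (n_row_steps, n_col_steps, n_parameters)
```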
  • FIGS. 11 A- 11 C show defining of a MW.
  • one or more moving windows are defined and the defined moving windows are used for analyzing the registered images.
  • a “moving window” is a “window” or “box” of a specific shape and size that is moved over the registered images in a series of steps or stops, and data within the “window” or “box” at each step is statistically summarized.
  • the step size of the moving window may also vary. In some embodiments, the step size may be equal to the width of the moving window. In other embodiments, other step sizes may be used. Further, a direction in which the moving window moves over the data may vary from one embodiment to another.
  • the moving window is used to successively analyze discrete portions of each image within the selected image datasets to measure aspects of the selected parameters.
  • the moving window may be used to successively analyze one or more voxels in the image data.
  • other features may be analyzed using the moving window.
  • the shape, size, step-size, and direction of the moving window may be varied. By changing one or more attributes (e.g., the shape, size, step size, and direction), multiple moving windows may be defined, and the data collected by each of the defined moving windows may be varied.
  • the moving window may be defined to encompass any number or configuration of voxels at one time. Based upon the number and configuration of voxels that are to be analyzed at one time, the size, shape, step size, and direction of the moving window may be defined. Moving window volume may be selected to match the volumes of corresponding biomarker data within a volume-coded population database. Further, in some embodiments, the moving window may be divided into a grid having two or more adjacent subsections.
  • the moving window may have a circular shape with a grid disposed therein defining a plurality of smaller squares.
  • FIGS. 11 A, 11 B, and 11 C depict various example moving window configurations having a circular shape with a square grid, in accordance with some embodiments.
  • FIGS. 11 A, 11 B, and 11 C each include a moving window 280 having a grid 285 and a plurality of square subsections 290 .
  • FIG. 11 A has four of the subsections 290
  • FIG. 11 B has nine of the subsections
  • FIG. 11 C has sixteen of the subsections.
  • the configurations shown in FIGS. 11 A, 11 B, and 11 C are only examples.
  • the moving window 280 may assume other shapes and sizes such as square, rectangular, triangle, hexagon, or any other suitable shape.
  • the grid 285 and the subsections 290 may assume other shapes and sizes.
  • FIGS. 11 A, 11 B, and 11 C show various possible configurations where the moving window encompasses 4, 9, or 16 full voxels within the source images and a single moving window read measures the mean and variance of the 4, 9, and 16 voxels, respectively.
  • the grid 285 and the subsections 290 need not always have the same shape. Additionally, while it may be desirable to have all of the subsections 290 be of the same (or similar) size, in some embodiments, one or more of the subsections may be of different shapes and sizes.
  • each moving window may include multiple grids, with each grid having one or more subsections, which may be configured as discussed above.
  • In the embodiments of FIGS. 11 A, 11 B, and 11 C , the shape and size of each of the subsections 290 may correspond to the shape and size of one MIBA master file output voxel in the MIBA file output voxel grid (defined as discussed above by the ref3D or prior MIBA file).
  • the step size of the moving window in the x, y, and z directions determines the output matrix dimensions in the x, y, and z directions, respectively.
  • the specific shape(s), size(s), starting point(s), etc. of the applied moving windows determines the exact size of the matrix output grid.
  • the moving window may be either two-dimensional or three-dimensional.
  • the moving window 280 shown in FIGS. 11 A, 11 B, and 11 C is two-dimensional.
  • the moving window may assume three-dimensional shapes, such as a sphere, cube, etc.
  • the size of the moving window 280 may vary from one embodiment to another.
  • the moving window 280 is configured to be no smaller than the size of the largest single input image voxel in the image dataset, such that the edges of the moving window encompass at least one complete voxel within its borders.
  • the size of the moving window 280 may depend upon the shape of the moving window. For example, for a circular moving window, the size of the moving window 280 may be defined in terms of radius, diameter, area, etc. Likewise, if the moving window 280 has a square or rectangular shape, the size of the moving window may be defined in terms of length and width, area, volume, etc.
  • a step size of the moving window 280 may also be defined.
  • the step size defines how far the moving window 280 is moved across an image between measurements.
  • each of the subsections 290 corresponds to one source image voxel.
  • if the moving window 280 is defined as having a step size of a half voxel, the moving window 280 is moved by a distance of one half of the width of a subsection 290 at each step.
  • the resulting matrix from a half voxel step size has a number of readings equal to the number of steps taken.
  • the step size of the moving window 280 and the size and dimensions of each output matrix may be varied.
  • the step size of the moving window 280 determines a size (e.g., the number of columns and rows) of intermediary matrices into which the moving window output values are placed into the MIBA master file, as described below.
  • the size of the intermediary matrices may be determined before application of the moving window 280 , and the moving window may be used to fill the intermediary matrices in any way based on any direction or random movement. Such a configuration allows for much greater flexibility in the application of the moving window 280 .
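  • The relationship between step size and matrix size is the usual sliding-window count, n = floor((W - w) / s) + 1 reads per axis, for extent W, window width w, and step s; a quick check with invented numbers:

```python
# Reads per axis for extent W, window width w, step s: n = floor((W - w) / s) + 1
W, w = 16, 4                # illustrative slice extent and window width, in voxels
for s in (1.0, 0.5):        # one-voxel step vs. half-voxel step
    n = int((W - w) / s) + 1
    print(s, n)             # s=1.0 -> 13 reads; s=0.5 -> 25 reads (a denser matrix)
```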
  • FIGS. 12 A- 12 C show an example where the moving window read inputs all voxels fully or partially within the boundary of the moving window and calculates a read as the weighted average by volume with standard deviation.
  • FIG. 12 A shows various examples of defining an output value within a moving window 330 in an image 335 at one step.
  • the moving window 330 defines a grid 340 covering source image voxels and divided into multiple subsections 345 , 350 , 355 , 360 , 365 , and 370 . Further, as discussed above, each of the subsections 345 - 370 corresponds to one voxel in the source image.
  • the output value of the moving window 330 may be an average (or some other function) of those subsections 345 - 370 (or voxels) of the grid 340 that are fully or substantially fully encompassed within the moving window.
  • the moving window 330 cuts off the subsections 350 , 355 , 365 , and 370 such that only a portion of these subsections are contained within the moving window.
  • the subsections 345 and 360 are substantially fully contained within the moving window 330 .
  • the output value of the moving window 330 at the shown step may be the average of values in the subsections 345 and 360 .
  • a weighted average may be used to determine the output value of the moving window 330 at each step.
  • the weight may be for percent area or volume of the subsection contained within the moving window 330 .
  • the output value of the moving window 330 at the given step may be an average of all subsections 345 - 370 weighted for their respective areas A 1 , A 2 , A 3 , A 4 , A 5 , and A 6 within the moving window.
  • the weighted average may include a Gaussian weighted average.
  • the output value at each step may be adjusted to account for various factors, such as noise.
  • the output value at each step may be an average value +/− noise. Noise may be undesirable readings from adjacent voxels.
  • the output value from each step may be a binary output value.
  • the output value at each step may be a probability value of either 0 or 1, where 0 corresponds to a "yes" and 1 corresponds to a "no," or vice versa, based upon features meeting certain characteristics of any established biomarker.
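  • A small worked example of the area-weighted read described above, with invented subsection values and coverage fractions:

```python
import numpy as np

# Area-weighted moving-window read: each covered voxel contributes in
# proportion to the fraction of its area inside the window boundary.
values = np.array([412.0, 398.5, 405.2, 390.8, 401.1, 395.6])  # subsection values
areas  = np.array([1.00, 0.40, 0.25, 1.00, 0.40, 0.25])        # fraction inside window
read   = np.average(values, weights=areas)                     # weighted mean
spread = np.sqrt(np.average((values - read) ** 2, weights=areas))  # weighted SD
print(round(read, 2), round(spread, 2))
```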
  • FIG. 12 B shows a cross-sectional view of the image 335 from FIG. 12 A in which the moving window 330 has a cylindrical shape.
  • FIG. 12 C shows another cross-sectional view of the image 335 in which the moving window 330 has a spherical shape.
  • the image 335 shown in FIG. 12 B has a slice thickness, ST 1 , that is larger than a slice thickness, ST 2 , of the image shown in FIG. 12 C .
  • the image of FIG. 12 B is depicted as having only a single slice
  • the image of FIG. 12 C is depicted as having three slices.
  • the diameter of the spherically-shaped moving window 330 is at least as large as a width (or thickness) of the slice.
  • the shape and size of the moving window 330 may vary with slice thickness as well.
  • the moving window 330 may be a combination of multiple different shapes and sizes of moving windows to better identify particular features of the image 335 . Competing interests may call for using different sizes/shapes of the moving window 330 .
  • a star-shaped moving window may be preferred, but circular or square-shaped moving windows may offer simplified processing. Larger moving windows also provide improved contrast to noise ratios and thus better detect small changes in tissue over time. Smaller moving windows may allow for improved edge detection in regions of heterogeneity of tissue components.
  • a larger region of interest may be preferred for PET imaging, but a smaller region of interest (and moving window) may be preferred for CT imaging with highest resolutions.
  • larger moving windows may be preferred for highly deformable tissues, tissues with motion artifacts, etc., such as liver. By using combinations of different shapes and sizes of moving windows, these competing interests may be accommodated, thereby reducing errors across time-points.
  • the size and shape of the moving window 330 may be defined.
  • the size (e.g., dimensions, volume, area, etc.) and the shape of the moving window 330 may be defined in accordance with a data sample match from the precision database.
  • a data sample match may include a biopsy sample or other confirmed test data for a specific tissue sample that is stored in a database.
  • the shape and volume of the moving window 330 may be defined so as to match the shape and volume of a specific biopsy sample for which one or more measured parameter values are known and have been stored in the precision database.
  • the shape and volume of the moving window 330 may be defined so as to match a region of interest (ROI) of tumor imaging data for a known tumor that has been stored in the precision database.
  • ROI region of interest
  • the shape and volume of the moving window 330 may be chosen based on a small sample training set to create more robust images for more general pathology detection. In still further embodiments, the shape and volume of the moving window 330 may be chosen based on whole tumor pathology data and combined with biopsy data or other data associated with a volume of a portion of the tissue associated with the whole tumor.
  • the direction of the moving window may be defined.
  • the direction of the moving window 280 indicates how the moving window moves through the various voxels of the image data.
  • FIG. 13 depicts an example direction of movement of a moving window 300 in an image 305 in an x direction 310 and a y direction 320 , in accordance with an illustrative embodiment.
  • the movement direction of the moving window 300 is defined such that the moving window is configured to move across a computation region 325 of the image 305 at regular step sizes or intervals of a fixed distance in the x direction 310 and the y direction 320 .
  • the moving window 300 may be configured to move along a row in the x direction 310 until reaching an end of the row. Upon reaching the end of the row, the moving window 300 moves down a row in the y direction 320 and then proceeds across the row in the x direction 310 until again reaching the end of the row. This pattern is repeated until the moving window 300 reaches the end of the image 305 .
  • the moving window 300 may be configured to move in different directions. For example, the moving window 300 may be configured to move first down a row in the y direction 320 until reaching the end of the row and then proceed to a next row in the x direction 310 before repeating its movement down this next row in the y direction. In another alternative embodiment, the moving window 300 may be configured to move randomly throughout the computation region 325 .
  • the step size of the moving window 300 may be a fixed (e.g., regular) distance.
  • the fixed distance in the x direction 310 and the y direction 320 may be substantially equal to a width of a subsection of the grid (not shown in FIG. 13 ) of the moving window 300 .
  • the step size may vary in either or both the x direction 310 and the y direction 320 .
  • each movement of the moving window 300 by the step size corresponds to one step or stop.
  • the moving window 300 measures certain data values (also referred to as output values).
  • the moving window 300 may measure specific MRI parameters at each step.
  • the data values may be measured in any of a variety of ways.
  • the data values may be mean values, while in other embodiments, the data values may be a weighted mean value of the data within the moving window 300 .
  • other statistical analysis methods may be used for the data within the moving window 300 at each step.
  • upon being defined, the moving window (e.g., the moving window 330 ) is applied at operation 231 of FIG. 4 B.
  • the defined moving window is moved across a computation region (e.g., the computation region 325 ) of each image (e.g., the image 335 ), and an output value and variance (such as a standard deviation) are measured at each step.
  • Each output value is recorded and associated with a specific coordinate on the corresponding computation region of the image.
  • the coordinate is an x-y coordinate. In other embodiments, y-z, x-z, or a three dimensional coordinate may be used.
  • the moving window reading may obtain source data from the imaging equipment prior to reconstruction.
  • for example, magnetic resonance fingerprinting source signal data is matched against a magnetic resonance fingerprinting library to reconstruct standard images, such as T1 and T2 images.
  • source MR fingerprinting data, other magnetic resonance original signal data, or data from other machines may be obtained directly and compared to the volume-coded population database in order to similarly develop a MLCA to identify biomarkers from the original source signal data.
  • the operation 231 of FIG. 4 B involves moving the moving window 330 across the computation region 325 of the image 335 at the defined step sizes and measuring the output value of the selected matching parameters at each step of the moving window. It is to be understood that same or similar parameters of the moving window are used for each image (e.g., the image 335 ) and each of the selected image datasets. Further, at each step, an area of the computation region 325 encompassed by the moving window 330 may overlap with at least a portion of an area of the computation region encompassed at another step.
  • as the moving window 330 is moved across an image (e.g., the image 335 ) corresponding to an MRI slice, the moving window is moved within only a single slice plane until each region of the slice plane is measured. In this way, the moving window is moved within the single slice plane without jumping between different slice planes.
  • the output values of the moving window 330 from the various steps are aggregated into a 3D matrix according to the x-y-z coordinates associated with each respective moving window output value.
  • the x-y coordinates associated with each output value of the moving window 330 correspond to the x-y coordinate on a 2D slice of the original image (e.g., the image 335 ), and various images and parameter map data is aggregated along the z-axis (e.g., as shown in FIG. 7 ).
  • FIG. 14 A depicts a perspective view of multiple 2D slice planes 373 , 375 , and 380 in accordance with an illustrative embodiment.
  • a spherical moving window 385 is moved within each of the respective slice planes 373 , 375 , and 380 .
  • FIG. 14 B depicts an end view of slice planes 373 , 375 , and 380 .
  • the spherical moving window 385 is moved within the respective slice planes 373 , 375 , and 380 but without moving across the different slice planes.
  • moving window values may be created and put into a matrix associated with a specific MRI slice and values between different MRI slices do not become confused (e.g., the moving window moving within the slices for each corresponding image and parameter map in the dataset).
  • FIG. 14 C depicts an embodiment in which MRI imaging slices for a given tissue sample are taken at multiple different angles.
  • the different angled imaging slices may be analyzed using a moving window (e.g., the moving window 385 ) and corresponding matrices of the moving window output values may be independently entered into the MIBA file.
  • the use of multiple imaging slices having different angled slice planes allows for improved sub-voxel characterization, better resolution in the output image, reduced partial volume errors, and better edge detection.
  • slice 390 extends along the y-x plane and the moving window 385 moves within the slice plane along the y-x plane.
  • Slice 395 extends along the y-z plane and the moving window 385 moves within the slice plane along the y-z plane.
  • Slice 400 extends along the z′-x′ plane and the moving window 385 moves within the slice plane along the z′-x′ plane. Movement of the moving window 385 along all chosen slice planes preferably has a common step size to facilitate comparison of the various moving window output values. When combined, the slices 390 - 400 provide image slices extending at three different angles.
  • FIG. 14 D depicts an additional embodiment in which MRI imaging slices for a given tissue sample are taken at additional multiple different angles.
  • multiple imaging slices are taken at different angles radially about an axis in the z-plane.
  • the image slice plane is rotated about an axis in the z-plane to obtain a large number of image slices.
  • Each image slice has an angle rotated slightly from the adjacent image slice angle.
  • moving window data for a 2D slice is collated with all selected parameter maps and images registered to that 2D slice, which are stacked to form the 3D matrix.
  • FIG. 15 A shows an example assembly of moving window output values 405 for a single 2D slice 410 being transformed into a 3D matrix 415 containing data across nine parameter maps, with parameter data aligned along the z-axis.
  • dense sampling using multiple overlapping moving windows may be used to create a 3D array of parameter measures (e.g., the moving window output values 405 ) from a 2D slice 425 of a human, animal, etc.
  • Sampling is used to generate a two-dimensional (2D) matrix for each parameter map, represented by the moving window output values 405 .
  • the 2D matrices for each parameter map are assembled to form the multi-parameter 3D matrix 415 , also referred to herein as a data array.
  • the 3D matrix 415 may be created for each individual 2D slice (e.g., the 2D slice 425) by aggregating the moving window output values for the individual slice for each of a plurality of parameters.
  • each layer of the 3D matrix 415 may correspond to a 2D matrix created for a specific parameter as applied to the specific individual slice.
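  • As a hedged sketch of this assembly step (the parameter names and array sizes below are hypothetical), one 2D matrix of moving-window mean reads is computed per parameter map registered to the slice, and the per-parameter matrices are then stacked along the z-axis to form the multi-parameter 3D matrix:

    import numpy as np

    def window_means(img, window=5, step=2):
        # One 2D matrix of moving-window mean reads for a single parameter map.
        h, w = img.shape
        rows = range(0, h - window + 1, step)
        cols = range(0, w - window + 1, step)
        return np.array([[img[y:y + window, x:x + window].mean()
                          for x in cols] for y in rows])

    # Hypothetical parameter maps, all registered to the same 2D slice.
    parameter_maps = {
        "T1": np.random.rand(64, 64),
        "T2": np.random.rand(64, 64),
        "delta_Ktrans": np.random.rand(64, 64),
    }

    # Stack the per-parameter 2D matrices along the z-axis -> 3D matrix.
    matrix_3d = np.stack([window_means(m) for m in parameter_maps.values()])
    # matrix_3d.shape == (n_parameters, n_steps_y, n_steps_x)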
  • the parameter set (e.g., the moving window output values 405 ) for each step of a moving window may include measures for some specific selected matching parameters (e.g., T1 mapping, T2 mapping, delta Ktrans, tau, Dt IVIM, fp IVIM and R*), values of average Ktrans (obtained by averaging Ktrans from TM, Ktrans from ETM, and Ktrans from SSM), and average Ve (obtained by averaging Ve from TM and Ve from SSM).
  • Datasets may also include source data, such as a series of T1 images during contrast injection, such as for Dynamic Contrast Enhanced MRI (DCE-MRI).
  • T2 raw signal, ADC (high b-values), high b-values, and nADC may be excluded from the parameter set because these parameters are not determined to be conditionally independent.
  • T1 mapping, T2 mapping, delta Ktrans, tau, Dt IVIM, fp IVIM, and R* parameters may be included in the parameter set because these parameters are determined to be conditionally independent.
  • a 3D matrix (e.g., the 3D matrix 415) is created for each image in each image dataset.
  • the 3D matrices are refined at an operation 233 .
  • Refining a 3D matrix may include dimensionality reduction, aggregation, and/or subset selection processes. Other types of refinement operations may also be applied at the operation 233 to each of the obtained 3D matrices. Further, in some embodiments, the same refinement operation may be applied to each of the 3D matrices, although in other embodiments, different refinement operations may be applied to different 3D matrices as well. Refining the 3D matrices may reduce parameter noise, create new parameters, and assure the conditional independence needed for future classifications. As an example, FIG. 15B shows the 3D matrices 430 and 435 being refined into matrices 440 and 445, respectively. The matrices 440 and 445, which are refined, are also 3D matrices. One possible refinement is sketched below.
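  • The disclosure leaves the choice of refinement open; as one hedged example only, each parameter layer of a 3D matrix could be standardized to zero mean and unit variance, which suppresses scale differences between parameters before later matrix operations:

    import numpy as np

    def refine(matrix_3d, eps=1e-9):
        # Standardize each parameter layer (axis 0) to zero mean, unit variance.
        mean = matrix_3d.mean(axis=(1, 2), keepdims=True)
        std = matrix_3d.std(axis=(1, 2), keepdims=True)
        return (matrix_3d - mean) / (std + eps)

    refined = refine(np.random.rand(9, 30, 30))  # e.g., nine parameter layers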
  • one or more matrix operations are applied at operation 234 of FIG. 4 B .
  • the matrix operations generate a population of matrices for use in analyzing the sample.
  • FIG. 15 B shows an example of a matrix operation being applied to the matrices 440 and 445 , in accordance with some embodiments of the present disclosure.
  • a matrix subtraction operation is applied on the matrices 440 and 445 to obtain a matrix 450 .
  • In this manner, a difference in parameter values across all parameter maps at each step of the moving window (e.g., the moving window 385) from each of the matrices 440 and 445 may be obtained.
  • matrix operations may be performed on the matrices 440 and 445 as well.
  • matrix operations may include matrix addition, subtraction, multiplication, division, exponentiation, transposition, or any other suitable and useful matrix operation.
  • Various matrix operations may be selected as needed for later advanced big data analytics. Further, such matrix operations may be used in a specific Bayesian belief network to define a specific biomarker that may help answer a question regarding the tissue being analyzed, e.g., "Did the tumor respond to treatment?" A sketch of the subtraction case follows.
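  • In NumPy terms, the subtraction example of FIG. 15B reduces to an element-wise difference between two refined 3D matrices of the same shape; the array names below (a pre-treatment and a post-treatment matrix) are hypothetical stand-ins for the matrices 440 and 445:

    import numpy as np

    pre_treatment = np.random.rand(9, 30, 30)   # stand-in for matrix 440
    post_treatment = np.random.rand(9, 30, 30)  # stand-in for matrix 445

    # Element-wise difference across all parameter maps at every window stop,
    # yielding a change matrix analogous to the matrix 450.
    change_matrix = post_treatment - pre_treatment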
  • FIG. 16 shows the selection of a corresponding matrix column 455 in the matrices 440 - 450 . As shown, the matrix column 455 that is selected corresponds to the first column (e.g., Column 1) of each of the matrices 440 - 450 .
  • the matrix column 455 in each of the matrices 440 - 450 corresponds to the same small area of the sample. It is to be understood that the selection of Column 1 as the matrix column 455 is only an example. In other embodiments, depending upon the area of the sample desired to be analyzed, other columns from each of the matrices 440 - 450 may be selected. Additionally, in some embodiments, multiple columns from each of the matrices 440 - 450 may be selected to analyze and compare multiple areas of the sample. When multiple column selections are used, in some embodiments, all of the desired columns may be selected simultaneously and analyzed together as a group. In other embodiments, when multiple column selections are made, columns may be selected one at a time such that each selected column (e.g., the matrix column 455 ) is analyzed before selecting the next column.
  • the matrix columns selected at the operation 245 of FIGS. 4A and 4B are subjected to a machine learning convolution algorithm ("MLCA") and a 2D matrix (also referred to herein as a convoluted graph) is output from the MLCA.
  • the MLCA 460 may be a Bayesian belief network that is applied to the selected columns (e.g., the matrix column 455 ) of the matrices 440 - 450 .
  • the Bayesian belief network is a probabilistic model that represents probabilistic relationships between the selected columns of the matrices 440 - 450 having various parameter measures or maps 465 .
  • the Bayesian belief network also takes into account several other pieces of information, such as clinical data 470 .
  • the clinical data 470 may be obtained from the patient's medical records, and matching data in the precision database and/or the volume-coded precision database are used as training datasets. Further, depending upon the embodiment, the clinical data 470 may correspond to the patient whose sample (e.g., the sample 170) is being analyzed, the clinical data of other similar patients, or a combination of both. Also, the clinical data 470 that is used may be selected based upon a variety of factors that may be deemed relevant.
  • the Bayesian belief network combines the information from the parameter measures or maps 465 with the clinical data 470 in a variety of probabilistic relationships to provide a biomarker probability 475 .
  • the biomarker probability 475 is determined from the MLCA, which takes as inputs the parameter value data (e.g., the parameter measures or maps 465) and other desired imaging data in the dataset within each selected column (e.g., the matrix column 455) of the matrices 440-450, applies the weighting determined by the Bayesian belief network, and determines the output probability based on the analysis of training datasets (e.g., matching imaging and the clinical data 470) stored in the precision database. A simplified sketch follows.
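  • A full Bayesian belief network encodes richer structure than can be shown briefly; the sketch below covers only the simplest (naive Bayes) case, which is consistent with the conditional independence of the selected parameters noted above. The prior and likelihood values are hypothetical:

    def biomarker_probability(prior, likelihoods):
        """Naive-Bayes combination of conditionally independent evidence.

        prior:        P(biomarker), e.g., derived from clinical training data.
        likelihoods:  one (P(read | biomarker), P(read | no biomarker)) pair
                      per conditionally independent parameter read.
        """
        pos, neg = prior, 1.0 - prior
        for p_given_b, p_given_not_b in likelihoods:
            pos *= p_given_b
            neg *= p_given_not_b
        return pos / (pos + neg)

    # Hypothetical clinical prior plus three parameter reads for one column.
    p = biomarker_probability(0.10, [(0.8, 0.3), (0.7, 0.4), (0.9, 0.5)])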
  • the biomarker probability 475 varies across moving window reads.
  • the biomarker probability 475 may provide an answer to a clinical question.
  • a biomarker probability (e.g., the biomarker probability 475 ) is determined for each (or some) column(s) of the matrices 440 - 450 , which are then combined to produce a 2D matrix.
  • FIG. 15 C shows a 2D matrix 480 produced by applying the MLCA 460 to the matrices 440 - 450 .
  • the 2D Matrix 480 corresponds to a biomarker probability and answers a specific clinical question regarding the sample 165 .
  • the 2D matrix 480 may answer clinical questions such as “Is cancer present?,” “Do tissue changes after treatment correlate to expression of a given biomarker?,” “Did the tumor respond to treatment?,” or any other desired questions.
  • The 2D matrix 480 thus corresponds to a probability density function for a particular biomarker. Therefore, biomarker probabilities (e.g., the biomarker probability 475) determined from the matrices 440-450 are combined to produce the 2D matrix 480, represented by a probability density function.
  • the 2D matrix 480 may be viewed directly or converted to a 3D graph for viewing by an interpreting physician to gain an overview of the biomarker probability data. For example, the 2D matrix 480 may be reviewed by a radiologist, oncologist, computer program, or other qualified reviewer to identify unhelpful data prior to completion of full image reconstruction, as detailed below. If the 2D matrix 480 provides no or vague indication of large enough probabilities to support a meaningful image reconstruction or biomarker determination, the image data analysis (e.g., the 2D matrix 480 ) may be discarded.
  • modifications may be made to the image data analysis parameters (e.g., modifications in the selected columns of the matrices 440-450, the clinical data 470, etc.) and the MLCA 460 may be reapplied and another 2D matrix obtained.
  • the moving window size, shape, and/or other parameter may be modified and operations of FIGS. 4 A and 4 B re-applied.
  • By varying these aspects, different 2D matrices (e.g., the 2D matrix 480) may be obtained.
  • An example collection of data from moving windows of different shapes and sizes is shown in FIG. 17. Specifically, FIG. 17 shows a collection of data using a circular moving window 485, a square moving window 490, and a triangular moving window 495.
  • For each moving window, a corresponding 3D matrix 500-510 is obtained.
  • The MLCA is then applied to each 3D matrix to obtain a respective 2D matrix 515-525.
  • multiple 2D matrices (e.g., the 2D matrices 515-525) may be created for a particular region of interest.
  • While FIG. 17 shows variation in the shape of the moving window, in other embodiments, other aspects, such as size, step size, and direction, may additionally or alternatively be varied to obtain each of the 2D matrices 515-525. Likewise, in some embodiments, different angled slice planes may be used to produce the different instances of the 2D matrices 515-525.
  • the data collected from each moving window in the 2D matrices 515-525 is entered into first and second matrices and is combined into a combined matrix using a matrix addition operation, as discussed below.
  • different convolution algorithms may be used to produce parameter maps and/or parameter change maps.
  • a 2D matrix map may be created from a 3D matrix input using such a convolution algorithm.
  • convolution algorithms may include pharmacokinetic equations for Ktrans maps or signal decay slope analysis used to calculate various diffusion-weighted imaging values, such as ADC. Such algorithms may be particularly useful in creating final images with parameter values instead of probability values.
  • a super-resolution reconstruction algorithm is applied to the 2D matrix (e.g., the 2D matrix 480 and/or the 2D matrices 515 - 525 ) to produce an output solution value at a defined voxel within the MIBA file for each desired biomarker and for the specific case in which voxels within the MW ( 290 in FIG. 11 A-C ) correspond to the size and shape of the MIBA file output voxel.
  • the super-resolution algorithm produces a final super-resolution voxel output value from a combination of the 2D matrices 555-565, as depicted in FIGS. 18A and 18B, which provide the multiple MW reads for each voxel for input into the super-resolution algorithm. More specifically, the super-resolution algorithm converts each 2D matrix 555-565 into an output grid, as shown in FIGS. 18A and 18B.
  • This final super-resolution output voxel grid corresponds to the MIBA file output voxel grid in the MIBA file 3D volume and for coded entry into the MIBA spreadsheet format.
  • a read count kernel 530 may be used to determine the number of moving window reads within each voxel of the defined final super-resolution output voxel grid which matches the MIBA FILE output voxel grid.
  • a defined threshold is set to determine which voxels receive a reading: either voxels fully enclosed within the moving window, or voxels enclosed to a set threshold, such as 98% enclosed.
  • Each of these voxels within the read count kernel 530 has a value of 1 within the read count kernel.
  • the read count kernel 530 moves across the output grid at a step size matching the size of the super-resolution voxels and otherwise matches the shape, size, and movement of the corresponding specified moving window defined during creation of the 3D matrices.
  • Moving window readings are mapped to voxels that are fully contained within the moving window, such as the four voxels labeled with reference numeral 535 .
  • moving window read voxels may be defined as those having a certain percentage enclosed in the moving window, such as 98%.
  • values from moving window reads are mapped to the location on the final super-resolution output voxel grid, which matches the MIBA file output voxel grid, and the corresponding value is assigned to each full voxel contained within the moving window (or partially contained at a desired threshold, such as 98% contained).
  • the post-MLCA 2D matrix contains the moving window reads for each moving window, corresponding to the values in the first three columns of the first row.
  • Each of the 9 full final super-resolution output voxels (on the grid matching the MIBA file output voxel grid) within the first moving window (MW 1) receives a value of A ± sd, each of the 9 full output SR voxels within the second moving window (MW 2) receives a value of B ± sd, and each of the 9 full output SR voxels within the third moving window (MW 3) receives a value of C ± sd, as sketched below.
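  • A hedged sketch of this assignment for a square window follows: each moving-window read value is written to every output voxel fully enclosed by the window at that step, while a parallel count grid accumulates the per-voxel read counts that the read count kernel 530 is described as determining. Grid size, window size, and read values are illustrative assumptions:

    import numpy as np

    def map_reads_to_grid(reads, grid_shape, window):
        # Assign each MW read to all output voxels fully inside the window,
        # and count how many reads land on each voxel.
        totals = np.zeros(grid_shape)
        counts = np.zeros(grid_shape)
        for (x, y), value in reads.items():
            totals[y:y + window, x:x + window] += value
            counts[y:y + window, x:x + window] += 1
        return totals, counts

    # Hypothetical reads keyed by the window's top-left corner (x, y).
    reads = {(0, 0): 1.0, (2, 0): 2.0, (4, 0): 3.0}
    totals, counts = map_reads_to_grid(reads, grid_shape=(12, 12), window=3)
    # Voxels in the overlap regions (e.g., column 2) accumulate two reads,
    # matching the multiple-MW-reads-per-voxel behavior described above.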
  • FIG. 18C depicts another embodiment of obtaining an output MIBA file voxel grid.
  • neural network methods may be deployed such that a full-image or full-organ neural network read returns a single moving window read per entire image or organ region of interest.
  • Such a read may represent a probability that a tissue is normal or abnormal (as a binary "0" or "1" or as a probability), or the odds of a specific diagnosis, depending on the type of labelled data input into the neural networks.
  • Moving window reads may be added as for other reads, discussed above, and only voxels contained within the organ ROI may be added with this notation into the MIBA file.
  • standard classifier methods, such as support vector machines, can be used to solve for a probability of a given biomarker within a segmented region, such as a tumor.
  • all voxel values for voxels meeting volume criteria (for example, 98% inclusion within an output voxel) are entered into the MIBA file.
  • Examples of simplified existing clinical imaging tumor biomarkers that are based on standard whole tumor ROI and standard classifiers include, but are not limited to, multi-parameter MRI for detection of prostate tumors using the PI-RADS system (using scoring with T2, DWI, and DCE-MRI sequences), liver tumor detection with the LI-RADS system (using scoring with T1 post contrast, T2, and DWI sequences), and PET uptake changes after GIST treatment with Gleevec. Additional parameters may include, but are not limited to, DCE-MRI, ADC, DWI, T1, T2, and tau parameters. Additional example parameters are included in the charts depicted in FIGS. 30A-K.
  • the possible parameters may be obtained from different modalities including, but not limited to, MRI, PET, SPECT, CT, fluoroscopy, ultrasound imaging, BLO imaging, micro-PET, nano-MRI, micro-SPECT, and Raman imaging.
  • the matching parameters may include any of the types of MRI parameters depicted in FIGS. 30 A-K , one or more types of PET parameters depicted, one or more types of heterogeneity features depicted, and other parameters depicted in FIGS. 30 A-K .
  • the biomarker may be defined as a set of defined thresholds for various image data or parameters (for example, T1 > 500, T2 < 2000, and DWI > 2000), and the algorithm would return a simple "yes" or "no" solution of whether the MW data fits the defined biomarker thresholds.
  • This most simplified version of the convolution algorithm (MLCA) would be most similar to established clinical biomarkers that define probabilities of cancer, such as Li-RADS. New and more complex imaging biomarkers may be discovered in the future and could be similarly applied to the described method.
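  • In code, such a threshold-defined biomarker reduces to a few boolean tests; the sketch below uses the example thresholds quoted above, with hypothetical MW read values:

    def threshold_biomarker(t1, t2, dwi):
        # Return "yes" if the MW reads fit the defined biomarker thresholds.
        return "yes" if (t1 > 500 and t2 < 2000 and dwi > 2000) else "no"

    answer = threshold_biomarker(t1=620.0, t2=1450.0, dwi=2310.0)  # "yes"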
  • a set of biomarkers provides a reliable prediction of whether a given voxel contains normal or abnormal anatomy.
  • a first 2D matrix 555 is converted into a first MIBA file intermediate voxel grid 560 and a second 2D matrix 565 is converted into a second output intermediate voxel grid 570 .
  • the output intermediate voxel grid 560 and the output intermediate voxel grid 570 are then combined according to a super-resolution algorithm (e.g., addition algorithm) to obtain a final super-resolution output grid matching the final MIBA file voxel output grid 575 .
  • FIGS. 18 A- 18 B provide examples where the output intermediate voxel grids and the final MIBA file voxel grid are both represented as 2D matrices.
  • the final super-resolution output grid matching the final MIBA file voxel grid may be represented as a 3D matrix.
  • the separate biomarkers may be entered as separate values in the specific designated region, such as a column, for a given voxel (collated voxel data contained in a given row) in the MIBA spreadsheet file.
  • FIG. 18 C shows that a moving window (“MW”) may equal a single segmentation, such as a segmentation of the liver (LIV_SEG). All voxels with the LIV_SEG are labelled as “liver.” This single segmentation of the liver can be created by a human user or by automated techniques, such as using data-driven neural networks.
  • FIG. 19 depicts the mapping of convoluted graph data back to the MIBA file output voxel grid.
  • Data cells in the post-MLCA 2D matrix are mapped to the MIBA file output voxel grid such that any voxel fully or almost fully (to a defined percentage; for example, greater than 90%) within the borders of the original MW is mapped as an MW read for the corresponding pt3Dvol voxel of the MIBA file output voxel grid.
  • the top edge voxels for each convoluted graph have one MW read each, while the center top row voxels have four MW reads each.
  • When the convoluted graphs are combined, the resulting grid has two MW reads at the top edges and eight MW reads at the central top row.
  • FIG. 19 shows entry of the final mapped grid MW data into the MIBA database in corresponding labelled rows.
  • FIG. 20 describes entry of FINAL super-resolution voxel solutions from the collated multiple MW reads for each designated voxel within the MIBA file output grid.
  • a set of MW reads is selected, for example, eight MW reads in row 4,1,1 are selected.
  • a FINAL super-resolution voxel solution algorithm is selected and applied to obtain FINAL output MIBA file voxel values.
  • the FINAL super-resolution voxel solution takes the multiple input MW reads, which may be discrete values, probabilities, or binary solutions (yes or no), and outputs a solution aimed at finding the "true" value.
  • the FINAL voxel super-resolution solution algorithm could be a simple calculation, such as the simple average of all MW reads. If the input MW reads are binary answers (such as yes and no), the super-resolution algorithm could return the most common solution (e.g., "yes" if yes MW reads > no MW reads), as sketched below.
  • the specific super-resolution voxel solution algorithm may alternately be chosen from various types, which could include the general families of frequency-domain algorithms (wavelet and Fourier) and probabilistic algorithms (maximum likelihood and maximum a posteriori (MAP), including Markov random fields, total variation, and bimodality priors), as well as single-image techniques such as neural network techniques, principal component analysis, and tensor techniques, among others.
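  • The two simplest solution algorithms mentioned above can be sketched directly; any of the more elaborate families just listed would replace the body of this function. The read values are hypothetical:

    import numpy as np

    def final_voxel_solution(mw_reads):
        # Binary yes/no reads -> majority vote; continuous reads -> average.
        if all(r in ("yes", "no") for r in mw_reads):
            yes_count = sum(1 for r in mw_reads if r == "yes")
            return "yes" if yes_count > len(mw_reads) - yes_count else "no"
        return float(np.mean(mw_reads))

    final_voxel_solution(["yes", "yes", "no"])      # -> "yes"
    final_voxel_solution([0.71, 0.68, 0.74, 0.70])  # -> 0.7075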
  • output final super-resolution MIBA file voxel values are entered at corresponding locations within the MIBA file spreadsheet format.
  • FIG. 21 describes the last type of data for entry into the MIBA database, namely annotation data.
  • Annotation data can take many forms; it is simply any data that is not primary medical imaging data. For example, it can include physician notations, genetics data corresponding to a biopsy segmentation, or anatomically mapped data from a digital health record.
  • Annotation data imprinted by a user, such as a physician, is collected from all images in the original dataset or the image sets 211 (see FIG. 7) generated from the image processor and output display unit.
  • Annotation data from annotations added directly to images by people, such as Radiologists and other physicians, is entered into the MIBA database as metadata within a single cell, or entered in each corresponding voxel location.
  • FIG. 22 shows that the annotations from FIG. 21 are entered into the MIBA file database.
  • Annotations can be hand-drawn regions-of-interest (hd-ROI) or computer-generated segmentations on any image type or parameter map, and notations are made in the MIBA database to indicate whether a given voxel is contained within the ROI.
  • metadata such as a DICOM header for an original image may be embedded in a single corresponding cell in the MIBA database.
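  • As one hedged way to realize such embedding (the file name and row layout below are hypothetical), a DICOM header can be read without its pixel data, serialized to a JSON string, and stored in a single cell of the spreadsheet-style MIBA file; pydicom is one library providing the needed read and serialization calls:

    import pydicom

    # Read only the header of a hypothetical source image.
    ds = pydicom.dcmread("source_image.dcm", stop_before_pixels=True)
    header_json = ds.to_json()  # entire header as one JSON string

    # Hypothetical spreadsheet-style MIBA row: the header occupies one cell.
    miba_row = {"voxel_id": "4,1,1", "dicom_header": header_json}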
  • Metadata entry could also include lines in reports for specific ROI of lesions, as well as follow-up recommendations or differential diagnoses by the Radiologist.
  • Annotations can also mark the data for potential upload to the volume-coded population database.
  • annotations may include biopsy data obtained from the selected image datasets 210 and may be labelled as biopsy for all voxels contained in the segmentation of the biopsy sample.
  • any pathology or genetics related information gleaned from the biopsy data may also be added to the MIBA master file as an annotation.
  • other relevant notes may be added to the MIBA master file as annotations.
  • FIG. 23 describes an overview of how successive imaging data collected at later time points would be incorporated into a prior MIBA file 150 and updated into a new MIBA file 155 using the process outlined in FIG. 4 A-B .
  • a matching process would be followed as previously described, but specifically using the prior MIBA file 150 instead of the reference 3D volume (ref3D), using rigid or affine registration.
  • Any possible image data contained in both the prior MIBA file and the new dataset 211 could be used for registration, including data showing "yes" voxel data for normal anatomy, allowing great potential power in registering a new MIBA file to a prior MIBA file.
  • Addition of multiple time points would also allow for assessing changes in MW reads across time points. Data would be compressed as a last step to delete or otherwise compress unneeded data, such as redundant normal tissue data. Further, registration data may be saved such that original source DICOM images may be recovered from post-registration data.
  • FIG. 23 describes how successive MIBA files are created across time points. Data compression decreases memory demands, for example through deletion of redundant normal-anatomy imaging data.
  • FIG. 24 shows a schematic of a final MIBA spreadsheet file at time point 5 . Similar MIBA database files may exist for other time points shown in FIG. 23 .
  • FIG. 25 shows an example block diagram of a portion of a MIBA system 805 that may be used to create a MIBA master file, as discussed above.
  • the MIBA system 805 includes a MIBA creation unit 810 having a precision database 815 , a volume-coded precision database 820 , a 3D matrix computing unit 825 , an MLCA computing unit 830 , and a MIBA voxel grid unit 835 .
  • the specific sub-units and databases of the MIBA creation unit 810 may be separate devices or components that are communicatively coupled.
  • the precision database 815 and the volume-coded precision database 820 are configured to store image data, as discussed above.
  • the MIBA creation unit 810 may be connected to one or more imaging modalities 840 to receive image data corresponding to those modalities.
  • the imaging modalities 840 may also provide image data for the sample that is to be analyzed and for which the MIBA master file is to be generated.
  • the MIBA creation unit 810 may be connected to another computing unit, which receives the image data from the imaging modalities and provides that data to the MIBA creation unit 810.
  • the precision database 815 and the volume-coded precision database 820 store clinical data 845 as well.
  • the clinical data 845 may be input into the MIBA creation unit 810 by a user.
  • Various attributes 850 (e.g., parameters and parameter maps of interest, moving window parameters, various thresholds, and any other user-defined settings) may also be input into the MIBA creation unit 810 by the user.
  • the MIBA creation unit 810 may also include the 3D matrix computing unit 825 that is configured to compute 3D matrices, the MLCA computing unit 830 , which transforms the 3D matrices into 2D matrices, and a MIBA voxel grid unit 835 to convert the 2D matrices into the MIBA master file, as discussed above.
  • the MIBA creation unit 810 may output a MIBA master file 855 upon creation.
  • the MIBA master file 855 may be stored within a database associated with the MIBA system 805 and may be used by a query system (described in FIG. 29 ) to provide a variety of relevant information.
  • the MIBA creation unit 810 and the units therein may include one or more processing units configured to execute instructions.
  • the instructions may be carried out by a special purpose computer, logic circuits, or hardware circuits.
  • the processing units may be implemented in hardware, firmware, software, or any combination thereof.
  • execution is, for example, the process of running an application or the carrying out of the operation called for by an instruction.
  • the instructions may be written using one or more programming languages, scripting languages, assembly languages, etc.
  • the MIBA creation unit 810 and the units therein thus execute an instruction, meaning that they perform the operations called for by that instruction.
  • the processing units may be operably coupled to the precision database 815 and the volume-coded precision database 820 to receive, send, and process information for generating the MIBA master file 855 .
  • the MIBA creation unit 810 and the units therein may retrieve a set of instructions from a memory unit and may include a permanent memory device like a read only memory (ROM) device.
  • the MIBA creation unit 810 and the units therein copy the instructions in an executable form to a temporary memory device that is generally some form of random access memory (RAM).
  • the MIBA creation unit 810 and the units therein may include a single stand-alone processing unit, or a plurality of processing units that use the same or different processing technology.
  • the precision database 815 and the volume-coded precision database 820 may be configured as one or more storage units having a variety of types of memory devices.
  • the precision database 815 and the volume-coded precision database 820 may include, but are not limited to, any type of RAM, ROM, flash memory, magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips, etc.), optical disks (e.g., compact disk (CD), digital versatile disk (DVD), etc.), smart cards, solid state devices, etc.
  • the MIBA master file 855 may be provided on an output unit, which may be any of a variety of output interfaces, such as printer, color display, a cathode-ray tube (CRT), a liquid crystal display (LCD), a plasma display, an organic light-emitting diode (OLED) display, etc.
  • information may be entered into the MIBA creation unit 810 using any of a variety of input mechanisms including, for example, keyboard, joystick, mouse, voice, etc.
  • the present disclosure provides a system and method that includes identifying aggregates of features using classifiers to identify biomarkers within tissues, including cancer tissues, using a precision database having volume-coded imaging-to-tissue data.
  • the method involves the application of a super-resolution algorithm specially adapted for use in medical images, and specifically magnetic resonance imaging (MRI), which minimizes the impact of partial volume errors.
  • the method determines probability values for each relevant super-resolution voxel for each desired biomarker, as well as each desired parameter measure or original signal. In this way, innumerable points of output metadata (e.g., 10, 1,000, or 10,000 data points) can be collated for each individual voxel within the MIBA master file.
  • FIG. 26 is another block diagram of a portion of the MIBA system depicting use of the MIBA master file upon creation.
  • the MIBA master file from MIBA database 900 is entered into an input interface 905 of a MIBA query system 910 .
  • the MIBA query system 910 collects inputs from a user, which are processed by an image processor that outputs the results as an image or image/data display on an output interface 915.
  • a sample query to the query system 910 may ask to return all rows from the MIBA master file where MIBA voxels show a high probability of vessel; a sketch of such a filter appears below.
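  • Against a spreadsheet-style export of the MIBA master file loaded into a table, that sample query reduces to a one-line row filter; the file name, the column name p_vessel, and the 0.9 probability cutoff are illustrative assumptions:

    import pandas as pd

    miba = pd.read_csv("miba_master_file.csv")  # hypothetical spreadsheet export
    vessel_rows = miba[miba["p_vessel"] > 0.9]  # rows with high vessel probability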
  • the database management and query system includes an interface for a user or computer software program, query request memory, memory for holding results of a query, a query kernel, an instruction set generator, an execution engine, a processor, and 3D voxel mapping rules.
  • a software application with its own user interface, can be used to act on these various components.
  • The MIBA query system 910 of FIG. 26 is intended to include various components similar to the MIBA system 805, including, for example, processors, memory systems, interfaces, etc.
  • An output interface 915 is used to display the MIBA file in 3D which can be (1) via mapping of query data to specific anatomical locations in a virtual display of the patient body (akin to a “google maps” of the human body), (2) a 3D dissection view where the user can define view of the actual MIBA output voxel grids and the contained metadata within the MIBA voxels, such as viewing all vessel data, all T1 images, or all voxels showing a specific biomarker, and (3) standard images can be outputted matching standard DICOM images in axial, coronal, and sagittal planes.
  • It is also to be understood that only some components of the MIBA system 805 have been shown and described in FIGS. 25 and 26. Nevertheless, other components that are desired or considered necessary to perform the functions described herein are contemplated and considered within the scope of the present disclosure.
  • FIG. 27 describes an example of an application for using the MIBA master file system for querying the MIBA file to identify datasets, such as all voxels labelled as “liver tumor” and for user annotation.
  • An image processor allows a user to select a display of a patient's liver lesion for which a doctor or other person can add an annotation that is entered back into the MIBA file for the specific region-of-interest.
  • the MIBA file can be stored and executed from cloud or local storage. Data can also be uploaded to a population database.
  • the image display unit could specify colors for image voxel display characteristics. Images could be displayed in apps for smartphones, iPads, iPhones, etc. The MIBA file could also be used for input data during scanning or during an intervention.
  • FIG. 28 describes how multiple MIBA files could be stored in a single system, such as the cloud or a blockchain, and users can query for data across multiple patients, such as all biopsy imaging data for all breast cancer patients that showed BRCA+ genetics.
  • multiple MIBA files are held in the database management system and a user can enter a query to allow selection of specific data, for example, all imaging and related data contained within region-of-interest for BRCA+ breast cancer lesions. The collated data could be outputted for future use.
  • this process can be repeated in any fashion to fit to a given desired anatomical topological mapping of the human body.
  • In FIG. 29A, the skin surfaces of the human body are mapped and correspond to topological human body segments that can be matched across a population of human bodies.
  • As shown in FIG. 29B, this topological mapping can be applied to human heads, and various configurations can be used to describe tissue around the eyes.
  • As shown in FIG. 29C, such mapping can also align with defined anatomy, such as the various Couinaud segments of a liver.
  • the smoothness of the anatomical segment edges is a function of the underlying resolution of the MIBA file output voxel grid.
  • a finer MIBA file voxel grid will create a smoother output anatomical segment edge.
  • topological maps require that the edges between segments are fully aligned with no spaces in between anatomical segments.
  • Although the present disclosure has been discussed with respect to cancer imaging, it may be applied to imaging for other diseases as well. Likewise, the present disclosure may be applicable to non-medical applications, particularly where detailed super-resolution imagery is needed or desired.

Abstract

A system and method for creating and using a medical imaging bioinformatics annotated (“MIBA”) master file is disclosed. Creating the MIBA master file includes receiving image data, performing a first registration on the image data for obtaining in-slice registered data, and performing a second registration for registering the in-slice registered data to a three-dimensional (3D) model for obtaining source data. Creating also includes extracting voxel data from the source data and storing the voxel data in a MIBA database, receiving selection of a volume of interest, and extracting a portion of the voxel data corresponding to the volume of interest. The MIBA master file is created from the portion of the voxel data, which is stored in the MIBA database. The MIBA system receives a query, extracts data from the MIBA master file in response to the query, and presents the extracted data on an output interface.

Description

    CROSS-REFERENCES TO RELATED PATENT APPLICATIONS
  • This present application is a continuation of U.S. application Ser. No. 15/959,142, filed on Apr. 20, 2018, which claims priority from U.S. Provisional Patent Application No. 62/488,581, filed on Apr. 21, 2017, and U.S. Provisional Patent Application No. 62/580,543, filed on Nov. 2, 2017, all of which are incorporated by reference in their entireties herein.
  • TECHNICAL FIELD
  • The present disclosure relates to a system and method for creating highly embedded medical image files with high density bioinformatics and annotation data for use in patient medical care using image and data displays in multimedia devices.
  • BACKGROUND
  • The following description is provided to assist the understanding of the reader. None of the information provided or references cited is admitted to be prior art.
  • Precision medicine is a medical model that proposes the customization of healthcare practices by creating advancements in disease treatments and prevention by taking into account individual variability in genes, environment, and lifestyle for each person. In this model, diagnostic testing is often deployed for selecting appropriate and optimal therapies based on the context of a patient's genetic content or other molecular or cellular analysis. A biomarker is a characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes, or pharmacologic responses to a treatment. Such biomarkers are particularly useful in cancer diagnosis and treatment, as well as radiogenomics. Radiogenomics is an emerging field of research where cancer imaging features are correlated with indices of gene expression. Identification of new biomarkers, such as for radiogenomics, will be facilitated by advancements in big data technology. Big data represents the information assets characterized by such a high volume, velocity and variety to require specific technology and analytical methods for its transformation into value. Big data is used to describe a wide range of concepts: from the technological ability to store, aggregate, and process data, to the cultural shift that is pervasively invading business and society, both drowning in information overload. Machine learning methods, such as classifiers, can be used to output probabilities of features in sets of individual patient medical data based on comparisons to population-based big data datasets.
  • Extrapolated over an entire population, these trends in clinical data volume explosion and fundamental data management reorganization represent both a tremendous opportunity and a significant challenge. Although the benefits of individual patient stewardship of their own medical data have clear advantages, these complex datasets cannot be safely interpreted by individuals without a substantial medical and technical background. Therefore, new basic organizational systems are needed to successfully deploy the data in a healthcare environment and assure proper recording and communication with the patient.
  • SUMMARY
  • In accordance with one aspect of the present disclosure, a method is disclosed. The method includes receiving, by a medical imaging bioinformatics annotated (“MIBA”) system, image data from a sample, registering, by the MIBA system, the image data to a three-dimensional (3D) model selected from a population database for obtaining source data, and receiving selection, by the MIBA system, of a volume of interest. The method also includes extracting, by the MIBA system, a portion of the source data corresponding to the volume of interest, defining, by the MIBA system, a moving window, applying, by the MIBA system, the moving window to the portion of the source data for obtaining a dataset, and applying, by the MIBA system, a convolution algorithm to the dataset for obtaining convoluted data. The method further includes creating, by the MIBA system, a MIBA master file from the convoluted data and determining, by the MIBA system, a probability of a biomarker from the MIBA master file.
  • In accordance with another aspect of the present disclosure, a medical imaging bioinformatics annotated (“MIBA”) system is disclosed. The MIBA system includes a database configured to store a MIBA master file and a MIBA creation unit. The MIBA creation unit is configured to receive image data from a sample, register the image data to a three-dimensional (3D) model selected from a population database for obtaining source data, and extract voxel data from the source data and enter the voxel data into the database. The MIBA creation unit is also configured to receive selection of a volume of interest, extract a portion of the voxel data from the database corresponding to the volume of interest, and create the MIBA master file from the portion of the voxel data. The MIBA creation unit is additionally configured to store the MIBA master file in the database. The MIBA system further includes a MIBA query system configured to receive the MIBA master file from the database, extract data from the MIBA master file in response to the query, and present the extracted data on an output interface.
  • In accordance with yet other aspects of the present disclosure, another method is disclosed. The method includes creating, by a medical imaging bioinformatics annotated (“MIBA”) system, a MIBA master file. Creating the MIBA master file includes receiving, by the MIBA system, image data from a sample, performing, by the MIBA system, a first registration on the image data for obtaining in-slice registered data, and performing, by the MIBA system, a second registration for registering the in-slice registered data to a three-dimensional (3D) model selected from a population database for obtaining source data. Creating the MIBA master file also includes extracting, by the MIBA system, voxel data from the source data and storing the voxel data in a MIBA database, receiving, by the MIBA system, selection of a volume of interest, extracting, by the MIBA system, a portion of the voxel data corresponding to the volume of interest, creating, by the MIBA system, the MIBA master file from the portion of the voxel data, and storing, by the MIBA system, the MIBA master file in the MIBA database. The method further includes receiving, by the MIBA system, a query, extracting, by the MIBA system, data from the MIBA master file in response to the query, and presenting, by the MIBA system, the extracted data on an output interface.
  • The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the following drawings and the detailed description.
  • BRIEF DESCRIPTION OF DRAWINGS
  • The foregoing and other features of the present disclosure will become more fully apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. Understanding that these drawings depict only several embodiments in accordance with the disclosure and are therefore, not to be considered limiting of its scope, the disclosure will be described with additional specificity and detail through use of the accompanying drawings.
  • FIG. 1 illustrates at least some limitations of conventional DICOM images for medical imaging.
  • FIG. 2 is an example flowchart outlining operations for creating and using a "Medical Imaging Bioinformatics Annotated master file" (MIBA master file), in accordance with some embodiments of the present disclosure.
  • FIG. 3 illustrates an overview of creating the MIBA master file, in accordance with some embodiments of the present disclosure.
  • FIGS. 4A and 4B are example flowcharts outlining operations for creating the MIBA master file, in accordance with some embodiments of the present disclosure.
  • FIG. 5 illustrates selection of a 3D model for creating the MIBA master file, in accordance with some embodiments of the present disclosure.
  • FIG. 6 illustrates an in-slice registration on image data for creating the MIBA master file, in accordance with some embodiments of the present disclosure.
  • FIG. 7 illustrates a secondary registration on the in-sliced registered data for creating the MIBA master file, in accordance with some embodiments of the present disclosure.
  • FIG. 8 illustrates extracting voxel data from the output of the secondary registration and entering into a MIBA database for creating the MIBA master file, in accordance with some embodiments of the present disclosure.
  • FIG. 9 is an example flowchart outlining operations for entering the voxel data in the MIBA database for creating the MIBA master file, in accordance with some embodiments of the present disclosure.
  • FIG. 10 illustrates another example of a portion of the MIBA database, in accordance with some embodiments of the present disclosure.
  • FIGS. 11A, 11B, and 11C depict example moving window configurations used for creating the MIBA master file, in accordance with some embodiments of the present disclosure.
  • FIG. 12A is an example moving window and an output value defined within the moving window, in accordance with some embodiments of the present disclosure.
  • FIG. 12B is a cross-sectional view of the image from FIG. 11A in which the moving window has a cylindrical shape, in accordance with some embodiments of the present disclosure.
  • FIG. 12C is a cross-sectional view of the image of FIG. 11A in which the moving window has a spherical shape, in accordance with some embodiments of the present disclosure.
  • FIG. 13 is an example moving window and how the moving window is moved along x and y directions, in accordance with some embodiments of the present disclosure.
  • FIG. 14A is a perspective view of multiple slice planes and moving windows in those slice planes, in accordance with some embodiments of the present disclosure.
  • FIG. 14B is an end view of multiple slice planes and their corresponding moving windows, in accordance with some embodiments of the present disclosure.
  • FIG. 14C is an example in which image slices for the sample are taken at multiple different angles, in accordance with some embodiments of the present disclosure.
  • FIG. 14D is an example in which the image slices are taken at additional multiple different angles in a radial pattern, in accordance with some embodiments of the present disclosure.
  • FIG. 15A shows assembling multiple two-dimensional (“2D”) image slices into a 3D matrix, in accordance with some embodiments of the present disclosure.
  • FIG. 15B shows an example matrix operation applied to 3D matrices, in accordance with some embodiments of the present disclosure.
  • FIG. 15C shows a 2D matrix obtained by applying a machine learning convolution algorithm (“MLCA”) to a 3D matrix, in accordance with some embodiments of the present disclosure.
  • FIG. 16 shows selecting corresponding matrix columns from various 3D matrices and applying the MLCA on the matrix columns, in accordance with some embodiments of the present disclosure.
  • FIG. 17 shows multiple 2D matrices obtained for a particular region of interest from various moving windows, in accordance with some embodiments of the present disclosure.
  • FIG. 18A shows an example “read count kernel” for determining a number of moving window reads per voxel, in accordance with some embodiments of the present disclosure.
  • FIG. 18B shows a reconstruction example in which a 2D final voxel grid is produced from various intermediate 2D matrices, in accordance with some embodiments of the present disclosure.
  • FIG. 18C is another example of obtaining the 2D final voxel grid, in accordance with some embodiments of the present disclosure.
  • FIG. 19 shows an updated MIBA database including data from the 2D final voxel grid, in accordance with some embodiments of the present disclosure.
  • FIG. 20 shows an example of the MIBA master file including MIBA voxel data, in accordance with some embodiments of the present disclosure.
  • FIG. 21 is an example flowchart outlining operations for entering annotation data in the MIBA master file, in accordance with some embodiments of the present disclosure.
  • FIG. 22 shows an example of an updated MIBA master file including the annotation data, in accordance with some embodiments of the present disclosure.
  • FIG. 23 shows multiple MIBA master files at varying time points, in accordance with some embodiments of the present disclosure.
  • FIG. 24 shows an example of the MIBA master file at one timepoint, in accordance with some embodiments of the present disclosure.
  • FIG. 25 is an example block diagram of a MIBA creation unit of a MIBA system, in accordance with some embodiments of the present disclosure.
  • FIG. 26 is an example block diagram of a MIBA query system of the MIBA system, in accordance with some embodiments of the present disclosure.
  • FIG. 27 illustrates creating and using the MIBA master file, in accordance with some embodiments of the present disclosure.
  • FIG. 28 shows an example of using a population database along with the MIBA master file, in accordance with some embodiments of the present disclosure.
  • FIG. 29 shows examples of labeling anatomy in the MIBA master file, in accordance with some embodiments of the present disclosure.
  • FIGS. 30A-30K are charts of example matching parameters for use in analyzing image datasets, in accordance with some embodiments of the present disclosure.
  • DETAILED DESCRIPTION
  • In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be used, and other changes may be made, without departing from the spirit or scope of the subject matter presented here. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the figures, can be arranged, substituted, combined, and designed in a wide variety of different configurations, all of which are explicitly contemplated and make part of this disclosure.
  • The present disclosure is directed to a new singular "rich data" medical imaging and biodata organizational system that is configured to power a new level of precision analytics and display of a patient's body for a new era of precision medicine. Therefore, systems and methods for creating and querying a new data structure, referred to herein as a "Medical Imaging Bioinformatics Annotated Master File" (MIBA master file), are described. The MIBA master file is a compilation of a variety of information pertaining to a sample (e.g., a patient, whether human or non-human). For example, in some embodiments, the MIBA master file may include a compilation of every voxel of the human body coded with multiple forms of metadata. The MIBA master file may additionally include information such as a date when the MIBA master file was created, any notes added by a user, specific information pertaining to one or more regions of interest of the sample, attributes of the sample (e.g., age, height, weight, etc.), type of image data collected from the sample, etc. The MIBA master file may include other types of data, as described herein, or as considered desirable to include in the MIBA master file. The MIBA master file is configured to be queried by associated computing systems and displayed in various forms, including, for example, on virtual body maps, high resolution 3D displays of metadata, sparse data displayed on Avatars on smartphones, etc. Doctors may be able to edit the MIBA master file, such as by creating markings of anatomical regions, text annotations, etc.
  • Conventionally, standard medical image files, called DICOM (Digital Imaging and Communications in Medicine), have been used for medical imaging purposes. FIG. 1 illustrates several limitations of using DICOM images for medical imaging. (A) DICOM images are individually acquired slices or anatomically segmented volumes of a human body. Each medical imaging study acquires a multitude of individual DICOM image files, akin to separate single pages in a book. A single medical imaging study today can take up memory roughly equivalent to 800 books, and experts predict each study's memory will be roughly equivalent to 800,000 books, or 1 TB of memory, in the near future. Further, the number of DICOM images per patient study is increasing rapidly over time. For example, medical image DICOM volumes per patient study have been increasing, perhaps even exponentially, over recent years, and these increases are projected to continue. If a single DICOM image is thought of as a single page in a book, past patient studies required the same memory as approximately 40 books, today's studies require approximately 800 books, and studies in the near future are projected to require approximately 800,000 books, or 1 TB, of memory. Further, DICOM images are stored in the PACS (Picture Archiving and Communication System), a system that was developed in the 1980s when DICOM volumes were low and before widespread use of electronic health records (EHR). Although efforts have been pursued to integrate PACS with EHR, the systems suffer from core design limitations. These increasing volumes of non-collated DICOM images, stored in an antiquated PACS, are causing increasing strains for healthcare professionals and limit needed progress for precision medicine. Thus, these DICOM image files are voluminous, non-collated (e.g., separated), and widely dispersed (e.g., in the form of individual slices or segmental volumes), and generally unsuitable for present day medical imaging purposes. Further, DICOM images consume a lot of memory, are not designed to be integrated with currently used systems, and are otherwise unmanageable. In addition, other digital medical data such as biopsy, genetics, and other clinical data is also exploding. DICOM image based systems are not able to keep pace with this exploding clinical data.
  • (B) State-of-the-art DICOM is based on a rigid Cartesian coordinate system, which has limited multidimensionality (e.g., up to approximately 6 dimensions per voxel), such as those used for four dimensional (4D) flow MR imaging. A voxel is a unit of graphical information that defines a point in three-dimensional space, and is here defined as a unit where all sides of the voxel form 90 degree angles. Thus, while current techniques using standard DICOM images allow for some advanced three-dimensional (3D) visualization, only limited current techniques integrate higher dimensions of imaging data into 3D files, such as time series flow information. With the advancement of precision medicine, core new medical imaging information technology solutions are needed to better integrate medical imaging files with other digital health information at many orders of magnitude higher dimensionality, creating "rich data" datasets that optimize the power of precision analytics on human tissues. Human anatomy warping is a considerable challenge in medical imaging. For example, the liver may compress by 30% during breathing, and edema surrounding a brain tumor may cause significant warping and lead to registration errors in trying to topologically map tumor tissue across various types of images and time-points. Thus, with limited multidimensionality, precise registrations remain a technical hurdle for medical imaging in attempts for precise image quantification using DICOM images. (C) Electronic Health Record (E.H.R.) companies may use "Patient Avatars" which create annotation on an artist-rendered likeness of the patient. These Avatars are anatomically incorrect and imprecisely topologically mapped. These Avatars also do not contain mapped patient precision medical imaging data which has been anatomically collated with the other digital health data. Thus, although patient avatars may be used in limited capacity to display patient medical data, no high-dimensionality precision virtual patient model system exists for integrated precision analytics and high-dimensionality virtual patient display.
  • Therefore, DICOM images, DICOM image based systems, and current avatar displays suffer from several disadvantages. The present disclosure provides solutions. For example, the present disclosure provides for the creation of a MIBA master file. The MIBA master file allows deployment of multiple new functionalities for clinical patient care. An encoding system is created which codes each individual voxel in a master file standardized volume with metadata, including specific biomarker signature information generated in concert with big data population databases (such as early detection of cancer, tissue changes over time, and treatment effects), as well as data from annotations made by physicians and radiologists. Upon creation, the MIBA master file may be leveraged by multiple types of image processors and output interfaces, such as query engines for data mining, database links for automatic uploads to pertinent big data databases, and output apps for output image viewing, information viewing, and annotation creation by radiologists, surgeons, interventionists, individual patients, and referring physicians.
  • New cloud-based systems will form the core of new informatics technology, enabling seamless integration of massive datasets across large networks and deployment via a multitude of potential applications. In the future, by using MIBA master files, healthcare delivery systems will have the capacity to compare individual patient data to vast population databases at the speed of accumulation of new patient data. Patient care may be advanced by each patient having a transparent and holistic view of their entire medical status from full and complete proprietary datasets of their own records that are powered with informatics data. These new powerful systems form the basis for identification and use of a multitude of new imaging and other biomarkers, which will be the cornerstones for advancing patient care in a new era of precision medicine.
  • Turning now to FIG. 2, an example flowchart outlining a process 100 for creating and using a MIBA master file is shown, in accordance with some embodiments of the present disclosure. The process 100 provides an overview of various user interfaces used in the creation, storage, querying, and display of a MIBA master file. At operation 105, a MIBA system receives, from a user, a selection of a patient and a volume of interest (VOI) of the patient, for example “head.” At operation 110, the MIBA system automatically selects, or receives selection from the user of, a matching (or substantially matching) virtual 3D patient model (referred to herein as “ref3D” (55)) from a population of previously compiled 3D patient models, which most closely resembles the patient (e.g., a 35-year-old female weighing 135 lbs, T1 type images). Alternatively, if the patient has a prior MIBA file (155) of the matching volume of interest, it can be used instead of the ref3D. At operation 115, the MIBA system creates a MIBA master file for the patient based on the ref3D. Creation of the MIBA master file is discussed in greater detail below. Upon creation, the MIBA system stores the MIBA master file within a database associated with the MIBA system and updates the MIBA master file, if necessary, at operations 120-135. Upon updating the MIBA master file (or if no updates are needed), the MIBA system makes the MIBA master file available to the user for querying (e.g., for extracting certain types of data or information), and the MIBA system may display the results on a display interface associated with the MIBA system at operation 140. As indicated at operation 145, the MIBA system may receive selection of additional new data from the user, and in response, the MIBA system may update the MIBA master file, as indicated at operations 120-135. At operation 120, a user decides whether to create updates to the created MIBA file. For example, a clinical radiologist may place an annotation into the MIBA file stating that a lesion requires surveillance imaging in six months. At operation 125, the MIBA file is sent to storage. At operation 130, the MIBA file can be updated, for example, when the patient returns for surveillance imaging of the prior annotated MIBA file. At operation 135, a user interface allows the user to add the additional data to the MIBA file, again starting at operation 105.
  • Referring now to FIG. 3, an overview diagram for creating a MIBA master file (also referred to herein as a MIBA file, and the like) is shown, in accordance with some embodiments of the present disclosure. As indicated at 150, a multitude of stacks of patient image slices of various types of image modalities (MRI, CT, US, etc.), and various types of MRI sequences (T1, T2, DWI, etc.), are obtained. These patient image slices may be DICOM images or other types of medical images. The input image files are registered to a ref3D, as indicated via reference numeral 55, or alternatively to a prior matching MIBA file (155), if available. The registration may be rigid or non-rigid as needed for precision mapping, while maintaining anatomical correctness to the patient's true body proportions. As part of the registration, voxel values in the input image files are mapped to voxels of the ref3D or prior MIBA file. Biomarkers are also mapped to voxels, either via encoding in the ref3D or prior MIBA file or via moving window algorithms detailed below. In the example shown in FIG. 3, the voxel is identified as a voxel in the patient lung. The population ref3D can be made of any and all imaging modalities, and may contain metadata, including data on anatomical location.
  • Inputs to an anatomically organized MIBA file include standard DICOM images from CT, MRI, and US, as well as any other file type, such as tiff or jpeg files from optical cameras and other sources of images. These images can also come from sources other than imaging machines directly, such as iPhone interfaces.
  • FIGS. 4A and 4B are example flowcharts outlining a process 200 for forming a Medical Imaging Bioinformatics Annotated master file (“MIBA master file”). At operation 205, source images are obtained from a scanner (e.g., any type of medical imaging device, including imaging devices used for small animal studies (e.g., charts shown in FIGS. 30A-K)). The image data may be obtained from various imaging modalities such as magnetic resonance imaging (MRI), computed tomography (CT) imaging, positron emission tomography (PET) imaging, single-photon emission computed tomography (SPECT) imaging, micro-PET imaging, micro-SPECT imaging, Raman imaging, bioluminescence optical (BLO) imaging, ultrasound imaging, or any other suitable imaging technique. Further, when the imaging modalities include Raman imaging, images may have a resolution of 25 nanometers or as desired, such that the created MIBA master file, or a portion of it, has super-high resolution on a nanometer scale, allowing a user to “zoom in” to a very small structure. Standard post-processing may be used to generate parameter maps from simple image based calculations, such as ADC measures from multiple diffusion weighted images (DWI) or Ktrans measures from multiple images in dynamic contrast-enhanced MRI image sets. At operation 211, acquired images and maps are re-sliced in plane to match the x-y resolution of a reference standard 3D volume (ref3D). In-slice registrations are performed such that sets of images acquired during a single scanning session place each anatomical location in a matching position on each image in the image set. At operation 212, images obtained at multiple slice orientations are secondarily registered to a reference standard 3D volume (ref3D) or prior MIBA file of the specified body part, such as head, limb, or whole body. A database to hold aggregate voxel data is started, with standardized labels for each voxel in the ref3D or prior MIBA file, at operation 225. Data is systematically entered into the database rows for each corresponding labelled voxel, with five general types of data: source values from source images, moving window (MW) data, MW classifier and parameter calculation output data, super-resolution (SR) solution output data, and annotation data (a schematic sketch of such a row follows this overview). After all desired data is entered for each voxel in the ref3D or prior MIBA file, data is compressed to eliminate unnecessary data, such as redundant normal tissue data. Further analytics and data compression can be performed before final MIBA file creation. After source data is entered at operation 225 to create the first data entry into the MIBA file (155), a volume of interest (VOI) is selected from the MIBA file dataset at operation 220 for further analytics to add biomarker information, either by user selection or computer software commands.
As will be described in more detail, the further steps for adding biomarker data include defining moving windows (MW) at operation 230, applying MW at operation 231, creating 3D matrices at operation 232, refining 3D matrices at operation 233, applying matrix operations at operation 234, selecting user columns at operation 235, applying a biomarker specific machine learning convolution algorithm (MLCA) to create 2D matrices at operation 245, applying super-resolution algorithms to solve for each associated MIBA file output voxel value at operation 246, adding annotations at operation 250, allowing data compression at operation 251, and storing the MIBA file at operation 125 or performing further analytics at operation 255. Further analytics could include a multitude of possible algorithms in the future, but specifically can include adding new biomarker information at operation 260. If more biomarker data is to be added, the process repeats and loops back to operation 220. At various points along the process, voxelwise data can be added to the MIBA file in operation 240, as will be further described below.
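By way of illustration only, the following Python sketch models one row of the voxel database described above, with one field per general data type. All class and field names are assumptions chosen for readability; the disclosure does not prescribe any particular schema or programming language.

```python
from dataclasses import dataclass, field

@dataclass
class MibaVoxelRow:
    """One MIBA database row: all collated data for one coded voxel."""
    voxel_code: tuple                                   # (X, Y, Z) in the ref3D / output grid
    source_values: dict = field(default_factory=dict)   # e.g. {"A_SoD_T1": 612.4}
    mw_data: dict = field(default_factory=dict)         # raw moving window (MW) reads
    mw_outputs: dict = field(default_factory=dict)      # MW classifier / parameter outputs
    sr_solutions: dict = field(default_factory=dict)    # super-resolution (SR) voxel solutions
    annotations: list = field(default_factory=list)     # physician / radiologist annotations

row = MibaVoxelRow(voxel_code=(1, 1, 1))
row.source_values["A_SoD_T1"] = 612.4                   # value is illustrative only
```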
  • FIG. 5 shows a schematic of the process for creating a reference 3D image volume 55, which is composed of a standard size high-resolution volume covering a reference patient anatomical volume 35 from a population similar to the patient (for example, a man aged 50 years, T1 type images). Any type of image modality or parameter map may be used (e.g., see the charts of FIGS. 30A-K) for obtaining the image volume 35. A 3D grid is selected with voxels of a desired resolution 45. FIG. 5 shows a sparse example with a total of, for example, 324 voxels covering a reference head and neck of the image volume 35. It is to be noted that files may need to be much larger for clinical use. As an example, for clinical purposes a 3D reference volume voxel grid resolution may be set at 0.5 mm×0.5 mm×0.5 mm with an X-Y-Z field of view (FOV) of 30 cm×30 cm×30 cm, for a total of 216,000,000 voxels. A large population of ref3D volumes may be required as inputs to the systems in order to obtain close matching between each individual patient and the selected ref3D.
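The quoted grid size can be checked with a short calculation (a worked example only; the resolution and FOV are those given above):

```python
# 30 cm field of view per axis sampled at 0.5 mm isotropic resolution:
voxel_mm = 0.5
fov_mm = 300.0                       # 30 cm per axis
per_axis = int(fov_mm / voxel_mm)    # 600 voxels along each of X, Y, Z
total_voxels = per_axis ** 3         # 600**3 = 216,000,000
print(per_axis, total_voxels)        # -> 600 216000000
```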
  • Further, as indicated above, source images are registered in operation 211 to in-slice images obtained at the same timepoint on the same machine or on coordinated machines (such as registration of PET and CT acquired on separate machines) as part of creating the MIBA master file. In some embodiments, as part of the registration, re-slicing of the images may be needed to obtain matching datasets with matching resolutions per modality across various time points. To facilitate more efficient image processing, such re-slicing may also be needed to align voxel boundaries when resolutions between modalities are different. As an example, FIG. 6 depicts registration of the image coordinates associated with the datasets of selected time point 2. Specifically, FIG. 6 illustrates a number of parameter maps for parameters associated with various imaging modalities (e.g., DCE-MRI, ADC, DWI, T2, T1, tau, and PET). The image coordinates for the various parameter maps are registered to enable the combined use of the various parameter maps in the creation of the MIBA master file. Registration may be performed using rigid marker based registration or any other suitable rigid or non-rigid registration technique. Example registration techniques may include B-spline automatic registration, optimized automatic registration, landmark least squares registration, midsagittal line alignment, or any other suitable registration technique.
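As a minimal sketch of such a registration step, the following example uses the open-source SimpleITK library to rigidly register a moving image to a reference volume and resample it onto the reference grid. The file names are placeholders, and the metric, optimizer, and transform choices are illustrative assumptions; the disclosure does not prescribe a particular toolkit or settings.

```python
import SimpleITK as sitk

fixed = sitk.ReadImage("ref3d.nii.gz", sitk.sitkFloat32)       # ref3D volume (placeholder path)
moving = sitk.ReadImage("t1_source.nii.gz", sitk.sitkFloat32)  # source image set (placeholder path)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100)
reg.SetInitialTransform(sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY))
reg.SetInterpolator(sitk.sitkLinear)

rigid_tx = reg.Execute(fixed, moving)
# Resample the moving image onto the ref3D grid so voxels line up one-to-one.
registered = sitk.Resample(moving, fixed, rigid_tx, sitk.sitkLinear, 0.0)
```

A non-rigid variant could substitute a B-spline transform in the same pipeline, at the cost of more careful regularization to preserve the patient's true body proportions.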
  • FIG. 7 describes the secondary rigid registration of in-slice registered image sets 211 to a ref3D 55. The image sets may be acquired using a medical imaging scanner 205 at multiple slice angles to create a resultant patient specific volume MIBA file 155 with resolution matching the original ref3D volume. In this schematic example, four types of the image sets 211 (e.g., image sets A, B, C, D) after in-plane registration are shown, which are then registered to the ref3D volume using rigid registrations. For example, images from image set A, which may include the prior example image set of T1, T2, and DWI images, are registered to the ref3D volume 55. Similarly, images from the other image sets are registered to corresponding ref3D volumes. After registration of the image set A and entry of data at operation 225, the new registered pt3Dvol MIBA file 155 would contain source voxel data with resolution matching the ref3D (for example, 0.5 mm×0.5 mm×0.5 mm). This process would be repeated for each image set (B, C, D) to generate voxel metadata for the singular pt3Dvol MIBA file 155.
  • Although FIG. 7 shows a rigid registration mechanism, in some embodiments, it may be desirable to use a non-rigid registration technique. For example, a non-rigid registration technique may be used to map image slices from any orientation into a warped plane in an x-, y-, or z-plane.
  • FIG. 8 displays how voxel source image data, registered to the ref3D or prior MIBA file, is entered into a MIBA file 155 associated with a MIBA creation system. The MIBA file can take the form of a 3D file 155, or be organized in a spreadsheet format showing collated and coded data for each voxel in the MIBA file 155. An example spreadsheet format 225 of a portion of the MIBA database includes a variety of information pertaining to the registered image data. For example, the format 225 includes voxels labelled and organized by rows. For example, voxel code 1,1,1 is the voxel in the X=1, Y=1, and Z=1 position within the 3D volume. Voxel values are entered in locations where registration of source images led to a new registered voxel value in the registered MIBA file 3D volume. Column headings are entered as common data elements (CDE), such as those provided by the NIH (https://www.nlm.nih.gov/cde/summary_table_1.html), or other desired standard or created codes. In this example, a column header code for the source data from image acquisition A for T1 images is labelled “A_SoD_T1,” and voxel data is entered in corresponding voxels at the corresponding database location coded to the MIBA file 3D volume. It is to be understood that the format 225 is only an example. In other embodiments, additional, fewer, or different information may be included in the format 225.
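A hypothetical fragment of such a spreadsheet can be sketched with the pandas library; the voxel codes follow the convention above, the “A_SoD_T1” header follows the quoted example, and all values and the second column name are invented for illustration:

```python
import pandas as pd

rows = {
    "voxel_code": ["1,1,1", "2,1,1", "3,1,1"],
    "A_SoD_T1": [612.4, 598.7, 605.1],   # registered T1 source values (illustrative)
    "A_SoD_T2": [88.2, 90.5, None],      # None where no registered value landed
}
miba_sheet = pd.DataFrame(rows).set_index("voxel_code")
print(miba_sheet)
```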
  • FIG. 9 shows a flowchart outlining a process 225 for source image voxel data entry for the MIBA file 3D volume. Data entry into the MIBA spreadsheet database is similar for all potential image datasets registered to the ref3D or prior MIBA file. For example, for the image sets 211 (e.g., the image sets A, B, C, and D) of FIG. 7, information that is shown in the format 225 of FIG. 8 is extracted from each of the image sets and entered into the MIBA database. Thus, the MIBA database includes a compilation of data or records from the image sets 211, and each of the records may be in the format 225. In some embodiments, each of the records 225 in the MIBA database for the image sets 211 may have formats (e.g., the format 225) that are somewhat different. For example, based upon the information that is extracted from the image sets 211, the corresponding format 225 of those image sets may vary as well. Thus, as shown in the process 225, at operation 225A, a record for the image set A of the image sets 211 is created and added to the MIBA spreadsheet database; at operation 225B, a record for the image set B is created and added to the MIBA spreadsheet database; and at operations 225C and 225D, records for image sets C and D, respectively, are created and added to the MIBA spreadsheet database. Standard registration technique methods are used to determine the specific voxel values in the MIBA file grid from the registered inputted data. FIG. 10 shows source data entry into the MIBA database as depicted in FIG. 9.
  • Referring back to FIG. 4A, after source data is entered into the MIBA file at operation 225, analytics steps are initiated. At operation 220, a volume-of-interest (VOI) is selected from the MIBA file, either by user selection on an image display or via a computer software algorithm. At operation 230, moving window (MW) algorithms are initiated.
  • For example, a slice of a MIBA file may be chosen and displayed in axial orientation, with a slice thickness of 1 mm and an in-plane resolution of 1 mm×1 mm. The source data is then chosen for display; examples would include T1 values or parameter map values, such as Ktrans from DCE-MRI data. The volume-of-interest (VOI) for running MW algorithms is selected from the displayed images.
  • FIG. 10 provides an overview of MW matrix data entry into the MIBA spreadsheet file. Moving window parameters are chosen, which include MW size, shape, point of origin, step size, and path. The selected MW is run across the images, and a matrix of data is created. The process is repeated for each desired source data input, and the data is collated into the 3D matrix, where each column holds data for matching MW coordinates and parameters for the various types of source data. For example, a single column of the 3D matrix may have data for the same MW including T1, T2, DWI, ADC, and Ktrans values at matching anatomical locations. The resultant MW 3D matrix file can be entered as an embedded metadata file into a selected corresponding cell of the MIBA spreadsheet database. Details are further described below.
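A minimal sketch of one such MW pass over a single 2D source map follows (pure NumPy; the square window, the mean-and-standard-deviation read, and all sizes are illustrative assumptions; the disclosure also contemplates circular, spherical, and other window shapes):

```python
import numpy as np

def moving_window_reads(img, win=3, step=1):
    """Slide a win x win window over a 2D map at the given step size,
    returning matrices of per-stop means and standard deviations."""
    ny = (img.shape[0] - win) // step + 1
    nx = (img.shape[1] - win) // step + 1
    means = np.empty((ny, nx))
    stds = np.empty((ny, nx))
    for i in range(ny):                      # raster path: across a row, then down
        for j in range(nx):
            patch = img[i * step:i * step + win, j * step:j * step + win]
            means[i, j] = patch.mean()       # one MW read per stop
            stds[i, j] = patch.std()
    return means, stds

t1_map = np.random.rand(64, 64)                # stand-in for a registered T1 map
t1_reads, t1_sd = moving_window_reads(t1_map)  # 62 x 62 read matrices
```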
  • FIGS. 11A-11C show the defining of a MW. Upon registration of the images, one or more moving windows are defined, and the defined moving windows are used for analyzing the registered images. As used herein, a “moving window” is a “window” or “box” of a specific shape and size that is moved over the registered images in a series of steps or stops, and data within the “window” or “box” at each step is statistically summarized. The step size of the moving window may also vary. In some embodiments, the step size may be equal to the width of the moving window. In other embodiments, other step sizes may be used. Further, a direction in which the moving window moves over the data may vary from one embodiment to another. These aspects of the moving window are described in greater detail below.
  • The moving window is used to successively analyze discrete portions of each image within the selected image datasets to measure aspects of the selected parameters. For example, in some embodiments, the moving window may be used to successively analyze one or more voxels in the image data. In other embodiments, other features may be analyzed using the moving window. Based upon the features that are desired to be analyzed, the shape, size, step-size, and direction of the moving window may be varied. By changing one or more attributes (e.g., the shape, size, step size, and direction), multiple moving windows may be defined, and the data collected by each of the defined moving windows may be varied.
  • As an example and in some embodiments, the moving window may be defined to encompass any number or configuration of voxels at one time. Based upon the number and configuration of voxels that are to be analyzed at one time, the size, shape, step size, and direction of the moving window may be defined. Moving window volume may be selected to match the volumes of corresponding biomarker data within a volume-coded population database. Further, in some embodiments, the moving window may be divided into a grid having two or more adjacent subsections.
  • In some embodiments, the moving window may have a circular shape with a grid disposed therein defining a plurality of smaller squares. FIGS. 11A, 11B, and 11C depict various example moving window configurations having a circular shape with a square grid, in accordance with some embodiments. FIGS. 11A, 11B, and 11C each include a moving window 280 having a grid 285 and a plurality of square subsections 290. For example, FIG. 11A has four of the subsections 290, FIG. 11B has nine of the subsections, and FIG. 11C has sixteen of the subsections. It is to be understood that the configurations shown in FIGS. 11A, 11B, and 11C are only examples. In other embodiments, the moving window 280 may assume other shapes and sizes, such as square, rectangular, triangular, hexagonal, or any other suitable shape. Likewise, in other embodiments, the grid 285 and the subsections 290 may assume other shapes and sizes.
  • Thus, FIGS. 11A, 11B, and 11C show various possible configurations where the moving window encompasses 4, 9, or 16 full voxels within the source images, and a single moving window read measures the mean and variance of the 4, 9, or 16 voxels, respectively. Further, the grid 285 and the subsections 290 need not always have the same shape. Additionally, while it may be desirable to have all of the subsections 290 be of the same (or similar) size, in some embodiments, one or more of the subsections may be of different shapes and sizes. In some embodiments, each moving window may include multiple grids, with each grid having one or more subsections, which may be configured as discussed above. In the embodiments of FIGS. 11A, 11B, and 11C, the shape and size of each of the subsections 290 may correspond to the shape and size of one MIBA master file output voxel in the MIBA file output voxel grid (defined, as discussed above, by the ref3D or prior MIBA file).
  • The step size of the moving window in the x, y, and z directions determines the output matrix dimensions in the x, y, and z directions, respectively. The specific shape(s), size(s), starting point(s), etc. of the applied moving windows determine the exact size of the matrix output grid. Furthermore, the moving window may be either two-dimensional or three-dimensional. The moving window 280 shown in FIGS. 11A, 11B, and 11C is two-dimensional. When the moving window 280 is three-dimensional, the moving window may assume three-dimensional shapes, such as a sphere, cube, etc.
  • Similarly, the size of the moving window 280 may vary from one embodiment to another. Generally speaking, the moving window 280 is configured to be no smaller than the size of the largest single input image voxel in the image dataset, such that the edges of the moving window encompass at least one complete voxel within its borders. Further, the size of the moving window 280 may depend upon the shape of the moving window. For example, for a circular moving window, the size of the moving window 280 may be defined in terms of radius, diameter, area, etc. Likewise, if the moving window 280 has a square or rectangular shape, the size of the moving window may be defined in terms of length and width, area, volume, etc.
  • Furthermore, a step size of the moving window 280 may also be defined. The step size defines how far the moving window 280 is moved across an image between measurements. In general, each of the subsections 290 corresponds to one source image voxel. Thus, if the moving window 280 is defined as having a step size of a half voxel, the moving window 280 is moved by a distance of one half of each of the subsections 290 in each step. The resulting matrix from a half voxel step size has a number of readings equal to the number of steps taken. Thus, based upon the specificity desired in the matrix data, the step size of the moving window 280 and the size and dimensions of each output matrix may be varied.
  • In addition, the step size of the moving window 280 determines the size (e.g., the number of columns and rows) of the intermediary matrices into which the moving window output values are placed in the MIBA master file, as described below. Thus, the size of the intermediary matrices may be determined before application of the moving window 280, and the moving window may be used to fill the intermediary matrices in any way, based on any direction or random movement. Such a configuration allows for much greater flexibility in the application of the moving window 280.
  • FIGS. 12A-12C show an example where the moving window read inputs all voxels fully or partially within the boundary of the moving window and calculates a read as the weighted average by volume, with standard deviation. Specifically, FIG. 12A shows various examples of defining an output value within a moving window 330 in an image 335 at one step. As shown in FIG. 12A, the moving window 330 defines a grid 340 covering source image voxels and divided into multiple subsections 345, 350, 355, 360, 365, and 370. Further, as discussed above, each of the subsections 345-370 corresponds to one voxel in the source image. In some embodiments, the output value of the moving window 330 may be an average (or some other function) of those subsections 345-370 (or voxels) of the grid 340 that are fully or substantially fully encompassed within the moving window. For example, in FIG. 12A, the moving window 330 cuts off the subsections 350, 355, 365, and 370 such that only a portion of these subsections is contained within the moving window. In contrast, the subsections 345 and 360 are substantially fully contained within the moving window 330. Thus, the output value of the moving window 330 at the shown step may be the average of the values in the subsections 345 and 360.
  • In other embodiments, a weighted average may be used to determine the output value of the moving window 330 at each step. When the values are weighted, the weight may be the percent area or volume of the subsection contained within the moving window 330. For example, in FIG. 12A, if a weighted average is used, the output value of the moving window 330 at the given step may be an average of all subsections 345-370, weighted for their respective areas A1, A2, A3, A4, A5, and A6 within the moving window. In some embodiments, the weighted average may be a Gaussian weighted average.
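For instance, an area-weighted read for one window stop can be computed as follows (the six subsection values and area fractions are invented for illustration):

```python
import numpy as np

values = np.array([412.0, 398.0, 405.0, 420.0, 401.0, 415.0])  # subsections 345-370
areas = np.array([1.00, 0.40, 0.35, 1.00, 0.42, 0.38])         # fractions A1-A6 inside window
weighted_read = np.average(values, weights=areas)              # area-weighted MW output value
weighted_sd = np.sqrt(np.average((values - weighted_read) ** 2, weights=areas))
```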
  • In other embodiments, other statistical functions may be used to compute the output value at each step of the moving window 330. Further, in some embodiments, the output value at each step may be adjusted to account for various factors, such as noise. Thus, the output value at each step may be an average value +/− noise, where noise may be undesirable readings from adjacent voxels. In some embodiments, the output value from each step may be a binary output value. For example, in those embodiments where a binary output value is used, the output value at each step may be either 0 or 1, where 0 corresponds to a “yes” and 1 corresponds to a “no,” or vice-versa, based upon features meeting certain characteristics of an established biomarker. In this case, the 0 and 1 moving window probability reads are collated, and the final output voxel solution described below may be applied. Similarly, in the case where the convolution algorithm uses a parameter map function, such as pharmacokinetic equations, to output parameter measures, the parameter values within the moving windows would be collated in lieu of probability values, but the same final output voxel solution may otherwise be implemented.
  • It is to be understood that the output values of the moving window 330 at each step may vary based upon the size and shape of the moving window. For example, FIG. 12B shows a cross-sectional view of the image 335 from FIG. 12A in which the moving window 330 has a cylindrical shape. FIG. 12C shows another cross-sectional view of the image 335 in which the moving window 330 has a spherical shape. In addition, the image 335 shown in FIG. 12B has a slice thickness, ST1, that is larger than a slice thickness, ST2, of the image shown in FIG. 12C. Specifically, the image of FIG. 12B is depicted as having only a single slice, and the image of FIG. 12C is depicted as having three slices. In the embodiment of FIG. 12C, the diameter of the spherically-shaped moving window 330 is at least as large as a width (or thickness) of the slice. Thus, the shape and size of the moving window 330 may vary with slice thickness as well.
  • Furthermore, variations in how the moving window 330 is defined are contemplated and considered within the scope of the present disclosure. For example, in some embodiments, the moving window 330 may be a combination of multiple different shapes and sizes of moving windows to better identify particular features of the image 335. Competing interests may call for using different sizes/shapes of the moving window 330. For example, due to the general shape of a spiculated tumor, a star-shaped moving window may be preferred, but circular or square-shaped moving windows may offer simplified processing. Larger moving windows also provide improved contrast-to-noise ratios and thus better detect small changes in tissue over time. Smaller moving windows may allow for improved edge detection in regions of heterogeneity of tissue components. Accordingly, a larger region of interest (and moving window) may be preferred for PET imaging, but a smaller region of interest (and moving window) may be preferred for CT imaging at the highest resolutions. In addition, larger moving windows may be preferred for highly deformable tissues, tissues with motion artifacts, etc., such as the liver. By using combinations of different shapes and sizes of moving windows, these competing interests may be accommodated, thereby reducing errors across time-points. In addition, differently sized and shaped moving windows (e.g., the moving window 330) also allow for size matching to data (e.g., biomarkers) within a precision database, e.g., where biopsy sizes may be different. Thus, based upon the features that are desired to be enhanced, the size and shape of the moving window 330 may be defined.
  • Further, in some embodiments, the size (e.g., dimensions, volume, area, etc.) and the shape of the moving window 330 may be defined in accordance with a data sample match from the precision database. Such a data sample match may include a biopsy sample or other confirmed test data for a specific tissue sample that is stored in a database. For example, the shape and volume of the moving window 330 may be defined so as to match the shape and volume of a specific biopsy sample for which one or more measured parameter values are known and have been stored in the precision database. Similarly, the shape and volume of the moving window 330 may be defined so as to match a region of interest (ROI) of tumor imaging data for a known tumor that has been stored in the precision database. In additional embodiments, the shape and volume of the moving window 330 may be chosen based on a small sample training set to create more robust images for more general pathology detection. In still further embodiments, the shape and volume of the moving window 330 may be chosen based on whole tumor pathology data and combined with biopsy data or other data associated with a volume of a portion of the tissue associated with the whole tumor.
  • In addition to defining the size, shape, and step size of the moving window 280, the direction of the moving window may be defined. The direction of the moving window 280 indicates how the moving window moves through the various voxels of the image data. FIG. 13 depicts an example direction of movement of a moving window 300 in a region-of-interest 305 in an x direction 310 and a y direction 320, in accordance with an illustrative embodiment. As shown in FIG. 13, the movement direction of the moving window 300 is defined such that the moving window is configured to move across a computation region 325 of the image 305 at regular step sizes or intervals of a fixed distance in the x direction 310 and the y direction 320. Specifically, the moving window 300 may be configured to move along a row in the x direction 310 until reaching an end of the row. Upon reaching the end of the row, the moving window 300 moves down a row in the y direction 320 and then proceeds across the row in the x direction 310 until again reaching the end of the row. This pattern is repeated until the moving window 300 reaches the end of the image 305. In other embodiments, the moving window 300 may be configured to move in different directions. For example, the moving window 300 may be configured to move first down a column in the y direction 320 until reaching the end of the column, and then proceed to the next column in the x direction 310 before repeating its movement down that column in the y direction. In another alternative embodiment, the moving window 300 may be configured to move randomly throughout the computation region 325.
  • Further, as noted above, the step size of the moving window 300 may be a fixed (e.g., regular) distance. In some embodiments, the fixed distance in the x direction 310 and the y direction 320 may be substantially equal to a width of a subsection of the grid (not shown in FIG. 13 ) of the moving window 300. In other embodiments, the step size may vary in either or both the x direction 310 and the y direction 320.
  • Additionally, each movement of the moving window 300 by the step size corresponds to one step or stop. At each step, the moving window 300 measures certain data values (also referred to as output values). For example, in some embodiments, the moving window 300 may measure specific MRI parameters at each step. The data values may be measured in any of a variety of ways. For example, in some embodiments, the data values may be mean values, while in other embodiments, the data values may be weighted mean values of the data within the moving window 300. In other embodiments, other statistical analysis methods may be used for the data within the moving window 300 at each step.
  • Once defined, the moving window is applied at operation 231 of FIG. 4B. Specifically, the defined moving window (e.g., the moving window 330) is applied to a computation region (e.g., the computation region 325) of each image (e.g., the image 335) within each of the selected image datasets such that an output value and variance (such as a standard deviation) are determined for each image at each step of the moving window in the computation region. Each output value is recorded and associated with a specific coordinate on the corresponding computation region of the image. In some embodiments, the coordinate is an x-y coordinate. In other embodiments, a y-z, x-z, or three-dimensional coordinate may be used. By collecting the output values from the computation region (e.g., the computation region 325), a matrix of moving window output values is created and associated with respective coordinates of the analyzed image (e.g., the image 335).
  • In some cases, the moving window reading may obtain source data from the imaging equipment prior to reconstruction. For example, magnetic resonance fingerprinting source signal data is reconstructed from a magnetic resonance fingerprinting library to reconstruct standard images, such as T1 and T2 images. Source MR fingerprinting data, other magnetic resonance original signal data, or data from other machines may be obtained directly and compared to the volume-coded population database in order to similarly develop an MLCA to identify biomarkers from the original source signal data.
  • More specifically, in some embodiments, the operation 231 of FIG. 4B involves moving the moving window 330 across the computation region 325 of the image 335 at the defined step sizes and measuring the output value of the selected matching parameters at each step of the moving window. It is to be understood that the same or similar parameters of the moving window are used for each image (e.g., the image 335) in each of the selected image datasets. Further, at each step, the area of the computation region 325 encompassed by the moving window 330 may overlap with at least a portion of the area encompassed at another step. Further, where image slices are involved and the moving window 330 is moved across an image (e.g., the image 335) corresponding to an MRI slice, the moving window is moved within only a single slice plane until each region of the slice plane is measured. In this way, the moving window is moved within the single slice plane without jumping between different slice planes.
  • The output values of the moving window 330 from the various steps are aggregated into a 3D matrix according to the x-y-z coordinates associated with each respective moving window output value. In some embodiments, the x-y coordinates associated with each output value of the moving window 330 correspond to the x-y coordinate on a 2D slice of the original image (e.g., the image 335), and various images and parameter map data is aggregated along the z-axis (e.g., as shown in FIG. 7).
  • FIG. 14A depicts a perspective view of multiple 2D slice planes 373, 375, and 380, in accordance with an illustrative embodiment. A spherical moving window 385 is moved within each of the respective slice planes 373, 375, and 380. FIG. 14B depicts an end view of the slice planes 373, 375, and 380. Again, the spherical moving window 385 is moved within the respective slice planes 373, 375, and 380, but without moving across the different slice planes. In this way, moving window values may be created and put into a matrix associated with a specific MRI slice, and values between different MRI slices do not become confused (e.g., the moving window moves within the slices for each corresponding image and parameter map in the dataset).
  • FIG. 14C depicts an embodiment in which MRI imaging slices for a given tissue sample are taken at multiple different angles. The different angled imaging slices may be analyzed using a moving window (e.g., the moving window 385) and corresponding matrices of the moving window output values may be independently entered into the MIBA file. The use of multiple imaging slices having different angled slice planes allows for improved sub-voxel characterization, better resolution in the output image, reduced partial volume errors, and better edge detection. For example, slice 390 extends along the y-x plane and the moving window 385 moves within the slice plane along the y-x plane. Slice 395 extends along the y-z plane and the moving window 385 moves within the slice plane along the y-z plane. Slice 400 extends along the z′-x′ plane and the moving window 385 moves within the slice plane along the z′-x′ plane. Movement of the moving window 385 along all chosen slice planes preferably has a common step size to facilitate comparison of the various moving window output values. When combined, the slices 390-400 provide image slices extending at three different angles.
  • FIG. 14D depicts an additional embodiment in which MRI imaging slices for a given tissue sample are taken at additional multiple different angles. In the embodiment of FIG. 14D, multiple imaging slices are taken at different angles radially about an axis in the z-plane. In other words, the image slice plane is rotated about an axis in the z-plane to obtain a large number of image slices. Each image slice has a different angle, rotated slightly from the adjacent image slice angle.
  • Further, in some embodiments, moving window data for 2D slices is collated with all selected parameter maps and images registered to the 2D slice, which are stacked to form the 3D matrix. FIG. 15A shows an example assembly of moving window output values 405 for a single 2D slice 410 being transformed into a 3D matrix 415 containing data across nine parameter maps, with parameter data aligned along the z-axis. Specifically, dense sampling using multiple overlapping moving windows may be used to create a 3D array of parameter measures (e.g., the moving window output values 405) from a 2D slice 425 of a human, animal, etc. Sampling is used to generate a two-dimensional (2D) matrix for each parameter map, represented by the moving window output values 405. The 2D matrices for each parameter map are assembled to form the multi-parameter 3D matrix 415, also referred to herein as a data array. In some embodiments, the 3D matrix 415 may be created for each individual slice of the 2D slice 425 by aggregating moving window output values for the individual slice for each of a plurality of parameters. According to such an embodiment, each layer of the 3D matrix 415 may correspond to a 2D matrix created for a specific parameter as applied to the specific individual slice.
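Continuing the earlier moving-window sketch, the per-parameter 2D read matrices for one slice can be stacked along the z-axis to form such a 3D matrix (the nine parameter names follow the nine-map count of FIG. 15A; the read matrices here are random stand-ins):

```python
import numpy as np

param_names = ["T1", "T2", "DWI", "ADC", "Ktrans", "tau", "Dt_IVIM", "fp_IVIM", "R_star"]
reads = {name: np.random.rand(62, 62) for name in param_names}  # per-parameter MW read matrices
matrix_3d = np.stack([reads[name] for name in param_names], axis=2)
print(matrix_3d.shape)   # -> (62, 62, 9): x-y MW stops, parameters along z
```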
  • The parameter set (e.g., the moving window output values 405) for each step of a moving window (e.g., the moving window 385) may include measures for some specific selected matching parameters (e.g., T1 mapping, T2 mapping, delta Ktrans, tau, Dt IVIM, fp IVIM, and R*), values of average Ktrans (obtained by averaging Ktrans from TM, Ktrans from ETM, and Ktrans from SSM), and average Ve (obtained by averaging Ve from TM and Ve from SSM). Datasets may also include source data, such as a series of T1 images acquired during contrast injection, as for dynamic contrast-enhanced MRI (DCE-MRI). In an embodiment, T2 raw signal, ADC (high b-values), high b-values, and nADC may be excluded from the parameter set because these parameters are not determined to be conditionally independent. In contrast, the T1 mapping, T2 mapping, delta Ktrans, tau, Dt IVIM, fp IVIM, and R* parameters may be included in the parameter set because these parameters are determined to be conditionally independent. Further, a 3D matrix (e.g., the 3D matrix 415) is created for each image in each image dataset.
  • Returning back to FIGS. 4A and 4B, the 3D matrices are refined at operation 233. Refining a 3D matrix may include dimensionality reduction, aggregation, and/or subset selection processes. Other types of refinement operations may also be applied to each of the 3D matrices created at operation 232. Further, in some embodiments, the same refinement operation may be applied to each of the 3D matrices, although in other embodiments, different refinement operations may be applied to different 3D matrices as well. Refining the 3D matrices may reduce parameter noise, create new parameters, and assure the conditional independence needed for future classifications. As an example, FIG. 15B shows the 3D matrices 430 and 435 being refined into matrices 440 and 445, respectively. The matrices 440 and 445, which are refined, are also 3D matrices.
  • On the refined matrices (e.g., the matrices 440 and 445), one or more matrix operations are applied at operation 234 of FIG. 4B. The matrix operations generate a population of matrices for use in analyzing the sample. FIG. 15B shows an example of a matrix operation being applied to the matrices 440 and 445, in accordance with some embodiments of the present disclosure. Specifically, a matrix subtraction operation is applied to the matrices 440 and 445 to obtain a matrix 450. By performing the matrix subtraction, a difference in parameter values across all parameter maps at each stop of the moving window (e.g., the moving window 385) between the matrices 440 and 445 may be obtained. In other embodiments, other matrix operations may be performed on the matrices 440 and 445 as well. For example, in some embodiments, matrix operations may include matrix addition, subtraction, multiplication, division, exponentiation, transposition, or any other suitable and useful matrix operation. Various matrix operations may be selected as needed for later advanced big data analytics. Further, such matrix operations may be used in a specific Bayesian belief network to define a specific biomarker that may help answer a question regarding the tissue being analyzed, e.g., “Did the tumor respond to treatment?”
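As a simple illustration, the element-wise subtraction of two refined 3D matrices (e.g., the same anatomy at two time points) is a one-line NumPy operation; the shapes and values below are stand-ins:

```python
import numpy as np

matrix_440 = np.random.rand(62, 62, 9)   # refined 3D matrix, time point 1 (illustrative)
matrix_445 = np.random.rand(62, 62, 9)   # refined 3D matrix, time point 2 (illustrative)
matrix_450 = matrix_445 - matrix_440     # parameter change at every MW stop, across all maps
```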
  • Columns from each 3D matrix (e.g., the matrices 440, 445, and 450) are selected for comparison and analysis at operation 235 of FIG. 4B. In this way, subsets of the various matrices (e.g., the matrices 440, 445, and 450) that correspond to the same small areas of the tissue sample may be compared and analyzed. FIG. 16 shows the selection of a corresponding matrix column 455 in the matrices 440-450. As shown, the matrix column 455 that is selected corresponds to the first column (e.g., Column 1) of each of the matrices 440-450. The matrix column 455 in each of the matrices 440-450 corresponds to the same small area of the sample. It is to be understood that the selection of Column 1 as the matrix column 455 is only an example. In other embodiments, depending upon the area of the sample desired to be analyzed, other columns from each of the matrices 440-450 may be selected. Additionally, in some embodiments, multiple columns from each of the matrices 440-450 may be selected to analyze and compare multiple areas of the sample. When multiple column selections are used, in some embodiments, all of the desired columns may be selected simultaneously and analyzed together as a group. In other embodiments, when multiple column selections are made, columns may be selected one at a time such that each selected column (e.g., the matrix column 455) is analyzed before selecting the next column.
  • The matrix columns selected at operation 235 are subjected, at operation 245 of FIGS. 4A and 4B, to a machine learning convolution algorithm (“MLCA”), and a 2D matrix (also referred to herein as a convoluted graph) is output from the MLCA. In some embodiments, and as shown in FIGS. 15C and 16A, the MLCA 460 may be a Bayesian belief network that is applied to the selected columns (e.g., the matrix column 455) of the matrices 440-450. The Bayesian belief network is a probabilistic model that represents probabilistic relationships between the selected columns of the matrices 440-450 having various parameter measures or maps 465. The Bayesian belief network also takes into account several other pieces of information, such as clinical data 470. The clinical data 470 may be obtained from the patient's medical records, and matching data in the precision database and/or the volume-coded precision database are used as training datasets. Further, depending upon the embodiment, the clinical data 470 may correspond to the patient whose sample (e.g., the sample 170) is being analyzed, the clinical data of other similar patients, or a combination of both. Also, the clinical data 470 that is used may be selected based upon a variety of factors that may be deemed relevant. The Bayesian belief network combines the information from the parameter measures or maps 465 with the clinical data 470 in a variety of probabilistic relationships to provide a biomarker probability 475. Thus, the biomarker probability 475 is determined from the MLCA, which inputs the parameter value data (e.g., the parameter measures or maps 465) and other desired imaging data in the dataset within each selected column (e.g., the matrix column 455) of the matrices 440-450, with the weighting determined by the Bayesian belief network, and determines the output probability based on the analysis of training datasets (e.g., matching imaging and the clinical data 470) stored in the precision database.
  • Thus, by varying the selection of the columns (e.g., the matrix column 455) providing varying imaging measures and using a biomarker specific MLCA (with the same corresponding clinical data 470), the biomarker probability 475 varies across moving window reads. The biomarker probability 475 may provide an answer to a clinical question. A biomarker probability (e.g., the biomarker probability 475) is determined for each (or some) column(s) of the matrices 440-450, and the probabilities are then combined to produce a 2D matrix. As an example, FIG. 15C shows a 2D matrix 480 produced by applying the MLCA 460 to the matrices 440-450. Similar to the biomarker probability 475, the 2D matrix 480 corresponds to a biomarker probability and answers a specific clinical question regarding the sample 165. For example, the 2D matrix 480 may answer clinical questions such as “Is cancer present?,” “Do tissue changes after treatment correlate to expression of a given biomarker?,” “Did the tumor respond to treatment?,” or any other desired questions. The 2D matrix 480, thus, corresponds to a probability density function for a particular biomarker. Therefore, biomarker probabilities (e.g., the biomarker probability 475) determined from the matrices 440-450 are combined to produce the 2D matrix 480, represented by a probability density function.
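The per-column flow can be sketched as follows. Note the substitution: the disclosure describes a Bayesian belief network trained on precision-database cases, while this sketch uses a Gaussian naive Bayes classifier from scikit-learn purely as a stand-in, with random arrays in place of real training data:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Stand-in training data: 9 parameter reads plus a biomarker label per matched case.
X_train = np.random.rand(200, 9)
y_train = np.random.randint(0, 2, 200)          # biomarker absent / present
mlca = GaussianNB().fit(X_train, y_train)       # stand-in for the trained MLCA

matrix_3d = np.random.rand(62, 62, 9)           # one refined 3D matrix
columns = matrix_3d.reshape(-1, 9)              # one row per MW stop ("column")
prob = mlca.predict_proba(columns)[:, 1]        # biomarker probability per column
matrix_2d = prob.reshape(62, 62)                # the resulting 2D probability matrix
```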
  • Although a Bayesian belief network has been used as the MLCA 460 in the present embodiment, in other embodiments, other types of MLCA, such as a convolutional neural network or other classifiers or machine learning algorithms, may be used instead of or in addition to the Bayesian belief network. In addition to answering certain clinical questions, the 2D matrix 480 may be viewed directly or converted to a 3D graph for viewing by an interpreting physician to gain an overview of the biomarker probability data. For example, the 2D matrix 480 may be reviewed by a radiologist, oncologist, computer program, or other qualified reviewer to identify unhelpful data prior to completion of full image reconstruction, as detailed below. If the 2D matrix 480 provides no or only a vague indication of large enough probabilities to support a meaningful image reconstruction or biomarker determination, the image data analysis (e.g., the 2D matrix 480) may be discarded.
  • Alternatively or additionally, modifications may be made to the image data analysis parameters (e.g., modifications in the selected columns of the matrices 440-450, the clinical data 470, etc.), and the MLCA 460 may be reapplied and another 2D matrix obtained. In some embodiments, the moving window size, shape, and/or other parameters may be modified and the operations of FIGS. 4A and 4B re-applied. By redefining the moving window, different 2D matrices (e.g., the 2D matrix 480) may be obtained. An example collection of data from moving windows of different shapes and sizes is shown in FIG. 17. Specifically, FIG. 17 shows a collection of data using a circular moving window 485, a square moving window 490, and a triangular moving window 495. From each of the moving windows 485-495, a corresponding 3D matrix 500-510 is obtained. On each of the 3D matrices 500-510, the MLCA is applied to obtain a respective 2D matrix 515-525. Thus, by redefining the moving window, multiple 2D matrices (e.g., the 2D matrices 515-525) may be created for a particular region of interest. Although FIG. 17 shows variation in the shape of the moving window, in other embodiments, other aspects, such as size, step size, and direction, may additionally or alternatively be varied to obtain each of the 2D matrices 515-525. Likewise, in some embodiments, different angled slice planes may be used to produce the different instances of the 2D matrices 515-525. The data collected from each moving window in the 2D matrices 515-525 is entered into first and second matrices and is combined into a combined matrix using a matrix addition operation, as discussed below.
  • Additionally, in some embodiments, different convolution algorithms may be used to produce parameter maps and/or parameter change maps. For example, a 2D matrix map may be created from a 3D matrix input using such a convolution algorithm. Examples of such convolution algorithms may include pharmacokinetic equations for Ktrans maps, or signal decay slope analysis used to calculate various diffusion-weighted imaging measures, such as ADC. Such algorithms may be particularly useful in creating final images with parameter values instead of probability values.
  • Referring still to FIGS. 4A and 4B, at operation 246, a super-resolution reconstruction algorithm is applied to the 2D matrix (e.g., the 2D matrix 480 and/or the 2D matrices 515-525) to produce an output solution value at a defined voxel within the MIBA file for each desired biomarker, for the specific case in which the voxels within the MW (290 in FIGS. 11A-11C) correspond to the size and shape of the MIBA file output voxel. In this case, multiple MW reads will be available in the MIBA file for a given voxel for a specific biomarker, and the size and shape of the voxel in the MIBA file will meet the criteria described with respect to reference numeral 290 in FIGS. 11A-11C. Specifically, the super-resolution algorithm produces a final super-resolution voxel output value from a combination of the 2D matrices 555 and 565, as depicted in FIGS. 18A and 18B, which provide the multiple MW reads for each voxel for input into the super-resolution algorithm. More specifically, the super-resolution algorithm converts each of the 2D matrices 555 and 565 into an output grid, as shown in FIGS. 18A-18B, and the output grids are then combined to form a final super-resolution output voxel grid, as shown in FIG. 18B. This final super-resolution output voxel grid corresponds to the MIBA file output voxel grid in the MIBA file 3D volume and to the coded entries in the MIBA spreadsheet format.
  • Referring specifically to FIG. 18A, a read count kernel 530 may be used to determine the number of moving window reads within each voxel of the defined final super-resolution output voxel grid, which matches the MIBA file output voxel grid. A defined threshold determines which voxels receive a reading: a voxel fully enclosed within the moving window, or enclosed to a set threshold, such as 98%. Each of these voxels has a value of 1 within the read count kernel 530. The read count kernel 530 moves across the output grid at a step size matching the size of the super-resolution voxels and otherwise matches the shape, size, and movement of the corresponding specified moving window defined during creation of the 3D matrices. Moving window readings are mapped to voxels that are fully contained within the moving window, such as the four voxels labeled with reference numeral 535. Alternatively, moving window read voxels may be defined as those having a certain percentage enclosed in the moving window, such as 98%.
  • Further, values from moving window reads (e.g., A+/−sd, B+/−sd, C+/−sd) are mapped to the location on the final super-resolution output voxel grid (which matches the MIBA file output voxel grid), and the corresponding value is assigned to each full voxel contained within the moving window (or partially contained at a desired threshold, such as 98% contained). For example, the post-MLCA 2D matrix contains the moving window reads for each moving window, corresponding to the values in the first three columns of the first row. Each of the 9 full output voxels within the first moving window (MW 1) receives a value of A+/−sd, each of the 9 full output voxels within the second moving window (MW 2) receives a value of B+/−sd, and each of the 9 full output voxels within the third moving window (MW 3) receives a value of C+/−sd.
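The mapping and read-count tally can be sketched as an accumulation over the output grid (the window span, step, and grid sizes are illustrative; the per-voxel solution from the collated reads is then taken at operation 246, as described with FIG. 20 below):

```python
import numpy as np

reads = np.random.rand(10, 10)        # one post-MLCA MW read per stop (stand-in values)
win, step = 3, 1                      # each window fully encloses a 3x3 block of output voxels
grid = np.zeros((12, 12))             # accumulated MW read values per output voxel
counts = np.zeros((12, 12))           # read count kernel tally per output voxel

for i in range(reads.shape[0]):
    for j in range(reads.shape[1]):
        grid[i * step:i * step + win, j * step:j * step + win] += reads[i, j]
        counts[i * step:i * step + win, j * step:j * step + win] += 1

# counts now holds the number of MW reads collated for each output voxel;
# corner voxels receive 1 read and interior voxels up to 9 with this geometry.
```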
  • FIG. 18C depicts another embodiment of obtaining an output MIBA file output voxel grid. Specifically, neural network methods may be deployed such that a full image or full organ neural network read may return a single moving window read per entire image or organ region of interest. Such a read may represent a probability that a tissue is normal or abnormal, as a binary “0” or “1” or as a probability, or the odds of a specific diagnosis, depending on the type of labelled data input into the neural networks. Moving window reads may be added as for the other reads discussed above, and only voxels contained within the organ ROI may be added with this notation into the MIBA file.
  • Alternatively, standard classifier methods, such as support vector machines, can be used to solve for a probability of a given biomarker within a segmented region, such as a tumor. Similarly, all voxel values for voxels meeting volume criteria (for example, 98% inclusion within the output voxel) are entered into the MIBA file.
  • Examples of simplified existing clinical imaging tumor biomarkers that are based on standard whole tumor ROIs and standard classifiers include, but are not limited to, multi-parameter MRI for detection of prostate tumors using the PI-RADS system (using scoring with T2, DWI, and DCE-MRI sequences), liver tumor detection with the LI-RADS system (using scoring with T1 post contrast, T2, and DWI sequences), and PET uptake changes after GIST treatment with Gleevec. Additional parameters may include, but are not limited to, DCE-MRI, ADC, DWI, T1, T2, and tau parameters. Additional example parameters are included in the charts depicted in FIGS. 30A-K. The possible parameters may be obtained from different modalities including, but not limited to, MRI, PET, SPECT, CT, fluoroscopy, ultrasound imaging, BLO imaging, micro-PET, nano-MRI, micro-SPECT, and Raman imaging. Accordingly, the matching parameters may include any of the types of MRI parameters depicted in FIGS. 30A-K, one or more of the types of PET parameters depicted, one or more of the types of heterogeneity features depicted, and other parameters depicted in FIGS. 30A-K. In the simplest embodiment of the convolution algorithm, the biomarker may be defined as a set of defined thresholds for various image data or parameters (for example, T1>500, T2<2000, and DWI>2000), and the algorithm would return a simple “yes” or “no” solution as to whether the MW data fits the defined biomarker thresholds. This most simplified version of the convolution algorithm (MLCA) would be most similar to established clinical biomarkers that define probabilities of cancer, such as LI-RADS. New and more complex imaging biomarkers may be discovered in the future and could be similarly applied to the described method. In a specific embodiment, a set of biomarkers provides a reliable prediction of whether a given voxel contains normal or abnormal anatomy.
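This simplest threshold-based convolution algorithm reduces to a few comparisons (thresholds taken from the example above; the function name and sample reads are illustrative):

```python
def threshold_biomarker(t1: float, t2: float, dwi: float) -> bool:
    """Return True ("yes") when one MW read fits the example thresholds."""
    return t1 > 500 and t2 < 2000 and dwi > 2000

print(threshold_biomarker(620.0, 1500.0, 2400.0))  # -> True: MW data fits the biomarker
print(threshold_biomarker(450.0, 1500.0, 2400.0))  # -> False: T1 threshold not met
```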
  • Thus, as shown in FIG. 18B, a first 2D matrix 555 is converted into a first intermediate voxel grid 560 and a second 2D matrix 565 is converted into a second intermediate voxel grid 570. The intermediate voxel grids 560 and 570 are then combined according to a super-resolution algorithm (e.g., an addition algorithm) to obtain a final super-resolution output grid matching the final MIBA file output voxel grid 575. FIGS. 18A-18B provide examples where the intermediate voxel grids and the final MIBA file voxel grid are both represented as 2D matrices. In some embodiments, the final super-resolution output grid matching the final MIBA file voxel grid may instead be represented as a 3D matrix.
  • Returning to FIGS. 4A and 4B, upon generating a final MIBA file voxel grid at the operation 246, it is determined whether any additional biomarkers remain to be analyzed for the given set of 3D matrices. If there are additional biomarkers, features, or areas of interest to be analyzed for the given set of 3D matrices, the operations 220-246 are repeated for each additional biomarker. For each newly selected biomarker, a new MLCA is selected based on the specific training population database data for the new biomarker in the volume-coded population database. In embodiments where multiple biomarkers are identified in a single voxel, the separate biomarkers may be entered as separate values in the specific designated region, such as a column, for a given voxel (collated voxel data contained in a given row) in the MIBA spreadsheet file.
  • FIG. 18C shows that a moving window (“MW”) may equal a single segmentation, such as a segmentation of the liver (LIV_SEG). All voxels within the LIV_SEG are labelled as “liver.” This single segmentation of the liver can be created by a human user or by automated techniques, such as data-driven neural networks.
  • FIG. 19 depicts the mapping of convoluted graph data back to the MIBA file output voxel grid. Data cells in the post-MLCA 2D matrix are mapped to the MIBA file output voxel grid such that any voxel fully or almost fully (for a defined percentage; for example, greater than 90%) within the borders of the original MW is mapped as a MW read for the corresponding pt3Dvol voxel in the MIBA file output voxel grid. In this example, the top edge voxels for each convoluted graph have one MW read each, while the center top row voxels have four MW reads each. When two mapping grids are combined, the resulting grid has two MW reads at the top edges and eight MW reads at the central top row; a sketch of this read-counting follows. FIG. 19 also shows entry of the final mapped grid MW data into the MIBA database in correspondingly labelled rows.
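The read-counting behavior can be sketched as follows for an assumed square window stepping one voxel at a time; the exact counts per voxel depend on the window size, step, and grid geometry described above.

```python
# Hypothetical sketch: counting how many moving-window reads land on each
# output voxel as a window steps across the grid. Counts grow toward the
# grid interior; edge voxels receive fewer reads.
import numpy as np

def read_counts(grid_shape, window=3, step=1):
    counts = np.zeros(grid_shape, dtype=int)
    rows, cols = grid_shape
    for r in range(0, rows - window + 1, step):
        for c in range(0, cols - window + 1, step):
            counts[r:r + window, c:c + window] += 1  # one read per window
    return counts

counts = read_counts((8, 8))  # corner voxels: 1 read; interior voxels: 9
```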
  • FIG. 20 describes entry of FINAL super-resolution voxel solutions from the collated multiple MW reads for each designated voxel within the MIBA file output grid. A set of MW reads is selected; for example, the eight MW reads in row 4,1,1 are selected. A FINAL super-resolution voxel solution algorithm is selected and applied to obtain FINAL output MIBA file voxel values. In general, the FINAL super-resolution voxel solution takes the multiple input MW reads, which may be discrete values, probabilities, or binary solutions (yes or no), and outputs a solution aimed at finding the “true” solution. In the simplest embodiment, the FINAL voxel super-resolution solution algorithm could be a simple calculation, such as the simple average of all MW reads. If the input MW reads are binary answers (such as yes and no), the super-resolution algorithm could return the most common solution (e.g., “yes” MW reads > “no” MW reads); a minimal sketch of both cases follows. The specific super-resolution voxel solution algorithm may alternately be chosen from various types, which could include the general families of frequency-domain algorithms (wavelet and Fourier) and probabilistic algorithms (maximum likelihood and maximum a posteriori (MAP) algorithms, which include Markov random fields, total variation, and bimodality priors), as well as single-image techniques such as neural network techniques, principal component analysis, and tensor techniques, among others.
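A minimal sketch of the two simplest FINAL voxel solution algorithms named above, the average of all MW reads and the most common binary answer; the read values are illustrative.

```python
# Minimal sketch of the simplest FINAL super-resolution voxel solutions:
# simple average of scalar MW reads, and majority vote for binary reads.
import numpy as np

def final_value(mw_reads):
    return float(np.mean(mw_reads))      # e.g., mean of the 8 reads in row 4,1,1

def final_binary(mw_reads):
    yes = sum(mw_reads)                  # reads coded as 1 ("yes") / 0 ("no")
    return 1 if yes > len(mw_reads) - yes else 0

print(final_value([0.8, 0.7, 0.9, 0.75]), final_binary([1, 1, 0, 1]))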
  • After the FINAL voxel solution algorithm is chosen and applied, as shown in FIG. 20, the output final super-resolution MIBA file voxel values are entered at the corresponding locations within the MIBA file spreadsheet format.
  • FIG. 21 describes the last type of data for entry into the MIBA database, namely annotation data. Annotation data can take many forms; it is simply any data that is not primary medical imaging data. For example, it can include physician notations, genetics data corresponding to a biopsy segmentation, or anatomically mapped data from a digital health record. Annotation data input by a user, such as a physician, is collected from all images in the original dataset or the image sets 211 (see FIG. 7) generated from the image processor and output display unit. Annotation data from annotations added directly to images by people, such as radiologists and other physicians, is entered into the MIBA database as metadata within a single cell, or entered at each corresponding voxel location.
  • FIG. 22 shows that the annotations from FIG. 21 are entered into the MIBA file database. Annotations can be hand-drawn regions-of-interest (hd-ROI) or computer-generated segmentations on any image type or parameter map, and notations are made in the MIBA database to indicate whether a given voxel is contained within the ROI. Alternately, metadata such as the DICOM header for an original image may be embedded in a single corresponding cell in the MIBA database. Metadata entry could also include lines in reports for specific ROIs of lesions, as well as follow-up recommendations or differential diagnoses by the radiologist. Annotations can also mark the data for potential upload to the volume-coded population database. Additionally, annotations may include biopsy data obtained from the selected image datasets 210 and may be labelled as biopsy for all voxels contained in the segmentation of the biopsy sample. In some embodiments, any pathology or genetics related information gleaned from the biopsy data may also be added to the MIBA master file as an annotation. In other embodiments, other relevant notes may be added to the MIBA master file as annotations. A minimal sketch of such an entry follows.
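A hedged sketch of such annotation entry in a spreadsheet-style MIBA table follows; the column names, ROI label, and use of a table-level attribute standing in for a single embedded metadata cell are assumptions for illustration.

```python
# Hypothetical sketch: flagging, in a spreadsheet-style MIBA table, the
# voxels that fall inside a hand-drawn ROI, and attaching report metadata.
import pandas as pd

miba = pd.DataFrame({'voxel_id': [1, 2, 3, 4], 'T1': [510, 620, 480, 700]})
roi_voxels = {2, 3}  # voxels inside the hd-ROI

miba['hd_ROI_liver_lesion'] = miba['voxel_id'].isin(roi_voxels)
# Table-level metadata standing in for a single embedded DICOM-header cell.
miba.attrs['dicom_header'] = '<embedded DICOM metadata>'
```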
  • FIG. 23 describes an overview of how successive imaging data collected at later time points would be incorporated into a prior MIBA file 150 and updated into a new MIBA file 155 using the process outlined in FIGS. 4A-B. A matching process would be followed as previously described, but specifically using the prior MIBA file 150 instead of the reference 3D volume (ref3D), using rigid or affine registration. Any image data contained in both the prior MIBA file and the new dataset 211 could be used for registration, including data showing “yes” voxel data for normal anatomy, allowing great potential power in registering a new MIBA file to a prior MIBA file. Addition of multiple time points would also allow for assessing changes in MW reads across time points. As a last step, data would be compressed to delete or otherwise compress unneeded data, such as redundant normal tissue data. Further, registration data may be saved such that the original source DICOM images may be recovered from post-registration data.
  • FIG. 23 thus describes how successive MIBA files are created across time points. Data compression decreases memory demands, for example through deletion of redundant normal anatomy imaging data. FIG. 24 shows a schematic of a final MIBA spreadsheet file at time point 5. Similar MIBA database files may exist for the other time points shown in FIG. 23.
  • FIG. 25 shows an example block diagram of a portion of a MIBA system 805 that may be used to create a MIBA master file, as discussed above. The MIBA system 805 includes a MIBA creation unit 810 having a precision database 815, a volume-coded precision database 820, a 3D matrix computing unit 825, an MLCA computing unit 830, and a MIBA voxel grid unit 835. In alternative embodiments, the specific sub-units and databases of the MIBA creation unit 810 may be separate devices or components that are communicatively coupled. The precision database 815 and the volume-coded precision database 820 are configured to store image data, as discussed above. To that end, the MIBA creation unit 810 may be connected to one or more imaging modalities 840 to receive image data corresponding to those modalities. The imaging modalities 840 may also provide image data for the sample that is to be analyzed and for which the MIBA master file is to be generated. In some embodiments, instead of receiving image data directly from the imaging modalities 840, the MIBA creation unit 810 may be connected to another computing unit, which receives the image data from the imaging modalities and provides that data to the MIBA creation unit 810.
  • As also discussed above, the precision database 815 and the volume-coded precision database 820 store clinical data 845 as well. The clinical data 845 may be input into the MIBA creation unit 810 by a user. In addition, various attributes 850 (e.g., parameters and parameter maps of interest, moving window parameters, various thresholds, and any other user-defined settings) are also input into the MIBA creation unit 810. The MIBA creation unit 810 also includes the 3D matrix computing unit 825, which is configured to compute 3D matrices; the MLCA computing unit 830, which transforms the 3D matrices into 2D matrices; and the MIBA voxel grid unit 835, which converts the 2D matrices into the MIBA master file, as discussed above and as sketched below. The MIBA creation unit 810 may output a MIBA master file 855 upon creation. The MIBA master file 855 may be stored within a database associated with the MIBA system 805 and may be used by a query system (described in FIG. 26) to provide a variety of relevant information.
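The data flow through these three units can be sketched as a simple pipeline; the function names and signatures below are assumptions for illustration and not the disclosed implementation.

```python
# Hypothetical sketch of the FIG. 25 data flow: image data and attributes
# enter the creation unit, which chains the 3D-matrix, MLCA, and voxel-grid
# steps to produce a MIBA master file. All names are illustrative.
def create_miba_master_file(image_data, attributes,
                            build_3d_matrices, apply_mlca, to_voxel_grid):
    matrices = build_3d_matrices(image_data, attributes)  # 3D matrix computing unit 825
    flat_2d = [apply_mlca(m) for m in matrices]           # MLCA computing unit 830
    return to_voxel_grid(flat_2d)                         # MIBA voxel grid unit 835
```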
  • The MIBA creation unit 810 and the units therein may include one or more processing units configured to execute instructions. The instructions may be carried out by a special purpose computer, logic circuits, or hardware circuits. The processing units may be implemented in hardware, firmware, software, or any combination thereof. The term “execution” refers, for example, to the process of running an application or carrying out the operation called for by an instruction. The instructions may be written using one or more programming languages, scripting languages, assembly languages, etc. The MIBA creation unit 810 and the units therein thus execute an instruction, meaning that they perform the operations called for by that instruction.
  • The processing units may be operably coupled to the precision database 815 and the volume-coded precision database 820 to receive, send, and process information for generating the MIBA master file 855. The MIBA creation unit 810 and the units therein may retrieve a set of instructions from a memory unit and may include a permanent memory device like a read only memory (ROM) device. The MIBA creation unit 810 and the units therein copy the instructions in an executable form to a temporary memory device that is generally some form of random access memory (RAM). Further, the MIBA creation unit 810 and the units therein may include a single stand-alone processing unit, or a plurality of processing units that use the same or different processing technology.
  • With respect to the precision database 815 and the volume-coded precision database 820, those databases may be configured as one or more storage units having a variety of types of memory devices. For example, in some embodiments, one or both of the precision database 815 and the volume-coded precision database 820 may include, but are not limited to, any type of RAM, ROM, flash memory, magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips, etc.), optical disks (e.g., compact disk (CD), digital versatile disk (DVD), etc.), smart cards, solid state devices, etc. The MIBA master file 855 may be provided on an output unit, which may be any of a variety of output interfaces, such as a printer, color display, cathode-ray tube (CRT), liquid crystal display (LCD), plasma display, organic light-emitting diode (OLED) display, etc. Likewise, information may be entered into the MIBA creation unit 810 using any of a variety of input mechanisms including, for example, a keyboard, joystick, mouse, voice, etc.
  • Furthermore, only certain aspects and components of the MIBA system 805 are shown herein. In other embodiments, additional, fewer, or different components may be provided within the MIBA system 805. Thus, the present disclosure provides a system and method that includes identifying aggregates of features using classifiers to identify biomarkers within tissues, including cancer tissues, using a precision database having volume-coded imaging-to-tissue data. The method involves the application of a super-resolution algorithm specially adapted for use with medical images, and specifically magnetic resonance imaging (MRI), which minimizes the impact of partial volume errors. The method determines probability values for each relevant super-resolution voxel for each desired biomarker, as well as each desired parameter measure or original signal. In this way, a very large number of points of output metadata (e.g., 10, 1,000, or 10,000 data points) can be collated for each individual voxel within the MIBA master file.
  • FIG. 26 is another block diagram of a portion of the MIBA system, depicting use of the MIBA master file upon creation. The MIBA master file from the MIBA database 900 is entered into an input interface 905 of a MIBA query system 910. The MIBA query system 910 collects inputs from a user, which are processed by an image processor that outputs the results as an image or image/data display on an output interface 915. For example, a sample query to the query system 910 may ask to return all rows from the MIBA master file where the MIBA voxels show a high probability of vessel; a minimal sketch of such a query follows. The database management and query system includes an interface for a user or computer software program, query request memory, memory for holding the results of a query, a query kernel, an instruction set generator, an execution engine, a processor, and 3D voxel mapping rules. A software application, with its own user interface, can be used to act on these various components. It is to be understood that the MIBA system of FIG. 26 is intended to include various components similar to those of the MIBA system 805, including, for example, processors, memory systems, interfaces, etc. The output interface 915 is used to display the MIBA file in 3D, which can be (1) via mapping of query data to specific anatomical locations in a virtual display of the patient body (akin to a “google maps” of the human body), (2) via a 3D dissection view where the user can define a view of the actual MIBA output voxel grids and the metadata contained within the MIBA voxels, such as viewing all vessel data, all T1 images, or all voxels showing a specific biomarker, or (3) via standard images output to match standard DICOM images in axial, coronal, and sagittal planes.
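The sample query can be sketched against a spreadsheet-style MIBA master file as below; the column names and the probability cutoff are assumptions for illustration.

```python
# Hypothetical sketch of the sample query: return all rows of the MIBA
# master file where the voxel shows a high probability of vessel.
import pandas as pd

miba = pd.DataFrame({'voxel_id': [1, 2, 3],
                     'p_vessel': [0.95, 0.20, 0.91],
                     'T1':       [500, 640, 525]})

vessel_rows = miba[miba['p_vessel'] > 0.9]  # rows mapped back to 3D locations
```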
  • It is also to be understood that only some components of the MIBA system 805 have been shown and described in FIGS. 25 and 26 . Nevertheless, other components that are desired or considered necessary to perform the functions described herein are contemplated and considered within the scope of the present disclosure.
  • FIG. 27 describes an example of an application for using the MIBA master file system, in which the MIBA file is queried to identify datasets, such as all voxels labelled as “liver tumor,” and annotated by the user. An image processor allows a user to select a display of a patient's liver lesion, to which a doctor or other person can add an annotation that is entered back into the MIBA file for the specific region-of-interest. The MIBA file can be stored in, and executed from, cloud or local storage. Data can also be uploaded to a population database. The image display unit could specify colors for image voxel display characteristics. Images could be displayed in apps for smartphones and tablets, such as iPhones and iPads. The MIBA file could also be used as input data during scanning or during an intervention.
  • FIG. 28 describes how multiple MIBA files could be stored in a single system, such as the cloud or a blockchain, so that users can query for data across multiple patients, such as all biopsy imaging data for all breast cancer patients that showed BRCA+ genetics. As described, multiple MIBA files are held in the database management system, and a user can enter a query to allow selection of specific data, for example, all imaging and related data contained within the region-of-interest for BRCA+ breast cancer lesions. The collated data could be output for future use.
  • Provided in the above description is a means to label anatomy within the MIBA file. As such, this process can be repeated in any fashion to fit a given desired anatomical topological mapping of the human body. For example, in FIG. 29A, the skin surfaces of the human body are mapped and correspond to topological human body segments that can be matched across a population of human bodies. In FIGS. 29B-C, this topological mapping is applied to human heads, and various configurations can be used to describe tissue around the eyes. In FIG. 29D, such mapping can also align with defined anatomy, such as the various Couinaud segments of a liver. In FIG. 29E, the smoothness of the anatomical segment edges is shown to be a function of the underlying resolution of the MIBA file output voxel grid; a finer MIBA file voxel grid will create a smoother output anatomical segment edge. As depicted in FIGS. 29A-D, topological maps require that the edges between segments are fully aligned with no spaces between anatomical segments.
  • It is to be understood that although the present disclosure has been discussed with respect to cancer imaging, the present disclosure may be applied for obtaining imaging for other diseases as well. Likewise, the present disclosure may be applicable to non-medical applications, particularly where detailed super-resolution imagery is needed or desired to be obtained.
  • With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.
  • It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (for example, bodies of the appended claims) are generally intended as “open” terms (for example, the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to inventions containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (for example, “a” and/or “an” should typically be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (for example, the bare recitation of “two recitations,” without other modifiers, typically means at least two recitations, or two or more recitations). Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (for example, “a system having at least one of A, B, and C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to “at least one of A, B, or C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (for example, “a system having at least one of A, B, or C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase “A or B” will be understood to include the possibilities of “A” or “B” or “A and B.”
  • The foregoing description of illustrative embodiments has been presented for purposes of illustration and of description. It is not intended to be exhaustive or limiting with respect to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of the disclosed embodiments. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents.

Claims (20)

What is claimed is:
1. A method comprising:
receiving, by a medical imaging bioinformatics annotated (“MIBA”) system, image data from a sample;
registering, by the MIBA system, the image data to a three-dimensional (3D) model selected from a population database for obtaining source data;
receiving selection, by the MIBA system, of a volume of interest;
extracting, by the MIBA system, a portion of the source data corresponding to the volume of interest;
defining, by the MIBA system, a moving window;
applying, by the MIBA system, the moving window to the portion of the source data for obtaining a dataset;
applying, by the MIBA system, a convolution algorithm to the dataset for obtaining convoluted data;
creating, by the MIBA system, a MIBA master file from the convoluted data; and
determining, by the MIBA system, a probability of a biomarker from the MIBA master file.
2. The method of claim 1, wherein the sample is a body tissue of a patient, and wherein the 3D model is of another patient sharing at least one attribute with the patient.
3. The method of claim 1, wherein the image data comprises images from a plurality of image modalities.
4. The method of claim 3, further comprising:
combining, by the MIBA system, the images from the plurality of image modalities based on an in-slice registration for obtaining in-slice registered data.
5. The method of claim 4, further comprising:
mapping, by the MIBA system, the in-slice registered data to the 3D model for obtaining the source data;
extracting, by the MIBA system, voxel data from the source data; and
entering, by the MIBA system, the voxel data into a MIBA database.
6. The method of claim 1, further comprising receiving parameters for defining the moving window, wherein the parameters include at least one of a size, a shape, a type of output value, a step size, and a direction of movement for the moving window.
7. The method of claim 1, wherein applying the moving window further comprises:
creating, by the MIBA system, a 3D matrix from the portion of the source data;
refining, by the MIBA system, the 3D matrix;
applying, by the MIBA system, one or more matrix operations to the refined 3D matrix; and
receiving, by the MIBA system, selection of a matrix column from the 3D matrix for forming the dataset.
8. The method of claim 7, wherein refining the 3D matrix comprises at least one of dimensionality reduction, aggregation, and subset selection processes.
9. The method of claim 7, wherein the one or more operations comprises at least one of matrix addition, matrix subtraction, matrix multiplication, matrix division, matrix exponentiation, and matrix transposition.
10. The method of claim 1, wherein the convolution algorithm comprises a Bayesian belief network algorithm.
11. The method of claim 1, further comprising:
mapping, by the MIBA system, the convoluted data to the 3D model;
extracting, by the MIBA system, MIBA voxels from the mapping; and
creating, by the MIBA system, the MIBA master file with the MIBA voxels.
12. The method of claim 1, further comprising:
receiving, by the MIBA system, annotation data; and
updating, by the MIBA system, the MIBA master file to include the annotation data.
13. A medical imaging bioinformatics annotated (“MIBA”) system, comprising:
a database configured to store a MIBA master file; and
a MIBA creation unit configured to:
receive image data from a sample;
register the image data to a three-dimensional (3D) model selected from a population database for obtaining source data;
extract voxel data from the source data and enter the voxel data into the database;
receive selection of a volume of interest;
extract a portion of the voxel data from the database corresponding to the volume of interest;
create the MIBA master file from the portion of the voxel data; and
store the MIBA master file in the database; and
a MIBA query system configured to:
receive the MIBA master file from the database;
receive a query;
extract data from the MIBA master file in response to the query; and
present the extracted data on an output interface.
14. The MIBA system of claim 13, wherein the MIBA creation unit is further configured to:
receive annotation data; and
update the MIBA master file to incorporate the annotation data.
15. The MIBA system of claim 13, wherein the MIBA creation unit is further configured to:
create a 3D matrix from the portion of the voxel data;
refine the 3D matrix;
apply one or more matrix operations to the refined 3D matrix;
receive selection of a matrix column from the 3D matrix for forming a dataset;
apply a convolution algorithm to the selected matrix column to obtain convoluted data;
map the convoluted data to the 3D model;
extract MIBA voxel data from the mapped convoluted data; and
create the MIBA master file with the MIBA voxel data.
16. The MIBA system of claim 15, wherein the MIBA creation unit is further configured to:
receive parameters to define a moving window; and
apply the moving window to the portion of the voxel data for creating the 3D matrix.
17. The MIBA system of claim 13, wherein the image data comprises images obtained from a plurality of imaging modalities.
18. A method comprising:
creating, by a medical imaging bioinformatics annotated (“MIBA”) system, a MIBA master file, wherein creating the MIBA master file comprises:
receiving, by the MIBA system, image data from a sample;
performing, by the MIBA system, a first registration on the image data for obtaining in-slice registered data;
performing, by the MIBA system, a second registration comprising registering the in-slice registered data to a three-dimensional (3D) model selected from a population database for obtaining source data;
extracting, by the MIBA system, voxel data from the source data and storing the voxel data in a MIBA database;
receiving, by the MIBA system, selection of a volume of interest;
extracting, by the MIBA system, a portion of the voxel data corresponding to the volume of interest;
creating, by the MIBA system, the MIBA master file from the portion of the voxel data; and
storing, by the MIBA system, the MIBA master file in the MIBA database; and
receiving, by the MIBA system, a query;
extracting, by the MIBA system, data from the MIBA master file in response to the query; and
presenting, by the MIBA system, the extracted data on an output interface.
19. The method of claim 18, further comprising:
receiving, by the MIBA system, annotated data; and
updating, by the MIBA system, the MIBA master file with the annotated data.
20. The method of claim 18, further comprising:
applying, by the MIBA system, a moving window to the portion of the voxel data;
applying, by the MIBA system, a convolution algorithm to output of the moving window for obtaining convoluted data; and
mapping, by the MIBA system, the convoluted data to the 3D model for creating the MIBA master file.

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
US17/582,601 | 2017-04-21 | 2022-01-24 | System and method for creating, querying, and displaying a miba master file

Applications Claiming Priority (4)

Application Number | Priority Date | Filing Date | Title
US201762488581P | 2017-04-21 | 2017-04-21 |
US201762580543P | 2017-11-02 | 2017-11-02 |
US15/959,142 (US11232853B2) | 2017-04-21 | 2018-04-20 | System and method for creating, querying, and displaying a MIBA master file
US17/582,601 (US20220406410A1) | 2017-04-21 | 2022-01-24 | System and method for creating, querying, and displaying a miba master file

Related Parent Applications (1)

Application Number | Title | Priority Date | Filing Date
US15/959,142 (continuation; US11232853B2) | System and method for creating, querying, and displaying a MIBA master file | 2017-04-21 | 2018-04-20

Publications (1)

Publication Number | Publication Date
US20220406410A1 | 2022-12-22

Family

ID=63856935

Family Applications (2)

Application Number | Title | Priority Date | Filing Date
US15/959,142 (US11232853B2, Active 2040-04-02) | System and method for creating, querying, and displaying a MIBA master file | 2017-04-21 | 2018-04-20
US17/582,601 (US20220406410A1) | System and method for creating, querying, and displaying a miba master file | 2017-04-21 | 2022-01-24

Family Applications Before (1)

Application Number | Title | Priority Date | Filing Date
US15/959,142 (US11232853B2, Active 2040-04-02) | System and method for creating, querying, and displaying a MIBA master file | 2017-04-21 | 2018-04-20

Country Status (3)

Country | Family Publications
US (2) | US11232853B2
EP (1) | EP3613050A4
WO (1) | WO2018195501A2

Also Published As

Publication Number | Publication Date
US11232853B2 | 2022-01-25
EP3613050A4 | 2021-01-27
WO2018195501A2 | 2018-10-25
US20190102516A1 | 2019-04-04
WO2018195501A3 | 2019-03-07
EP3613050A2 | 2020-02-26
