CN116134479A - Artificial intelligence-based image analysis system and method for detecting and characterizing lesions


Publication number
CN116134479A
CN116134479A (application CN202180050119.8A)
Authority
CN
China
Prior art keywords: hotspot, processor, volume, image, hotspots
Prior art date
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Application number
CN202180050119.8A
Other languages
Chinese (zh)
Inventor
J. M. Brynolfsson
K. E. M. Johnsson
H. M. E. Sahlstedt
J. F. A. Richter
Current Assignee
Sinai Diagnostics
Original Assignee
Sinai Diagnostics
Priority claimed from U.S. application No. 17/008,411 (granted as US11721428B2)
Application filed by Sinai Diagnostics
Publication of CN116134479A


Classifications

    • G06T7/0012: Image analysis; biomedical image inspection
    • G06T7/10: Image analysis; segmentation; edge detection
    • G06V20/653: Three-dimensional objects, by matching three-dimensional models (e.g. conformal mapping of Riemann surfaces)
    • G06V20/69: Microscopic objects (e.g. biological cells or cellular parts)
    • G06V20/698: Microscopic objects; matching; classification
    • G06F18/251: Pattern recognition; fusion techniques of input or preprocessed data
    • G06F18/253: Pattern recognition; fusion techniques of extracted features
    • G06N20/00: Machine learning
    • G06V10/803: Fusion of input or preprocessed data at the sensor, preprocessing, feature extraction or classification level
    • G06V10/806: Fusion of extracted features at the sensor, preprocessing, feature extraction or classification level
    • G06V10/82: Image or video recognition or understanding using neural networks
    • G06T2207/10104: Tomographic images; positron emission tomography [PET]
    • G06T2207/10108: Tomographic images; single photon emission computed tomography [SPECT]
    • G06T2207/20081: Special algorithmic details; training; learning
    • G06T2207/20084: Special algorithmic details; artificial neural networks [ANN]
    • G06V2201/03: Recognition of patterns in medical or anatomical images
    • G06V2201/032: Recognition of patterns of protuberances, polyps, nodules, etc. in medical or anatomical images

Abstract

Presented herein are systems and methods for providing improved detection and characterization of lesions in a subject via automated analysis of nuclear medicine images, such as Positron Emission Tomography (PET) and Single Photon Emission Computed Tomography (SPECT) images. In particular, in certain embodiments, the methods described herein utilize Artificial Intelligence (AI) to detect 3D nuclear medicine image regions corresponding to hotspots representing potential cancerous lesions in the subject. Machine learning modules may be used not only to detect the presence and location of such regions within an image, but also to segment the regions corresponding to the lesions and/or to classify hotspots based on their likelihood of indicating a true, potentially cancerous lesion. This AI-based lesion detection, segmentation, and classification may provide a basis for further characterization of lesions, quantification of overall tumor burden, and estimation of disease severity and risk.

Description

Artificial intelligence-based image analysis system and method for detecting and characterizing lesions
Cross reference to related applications
The present application claims priority to and the benefit of U.S. provisional patent application No. 63/048,436, filed July 6, 2020, U.S. non-provisional patent application No. 17/008,411, filed August 31, 2020, U.S. provisional patent application No. 63/127,666, filed December 18, 2020, and U.S. provisional patent application No. 63/209,317, filed June 10, 2021, the contents of each of which are hereby incorporated by reference in their entirety.
Technical Field
The present invention relates generally to systems and methods for creating, analyzing and/or presenting medical image data. More particularly, in certain embodiments, the present invention relates to systems and methods for automated analysis of medical images to identify and/or characterize cancerous lesions.
Background
Nuclear medicine imaging involves the use of radiolabeled compounds known as radiopharmaceuticals. Radiopharmaceuticals are administered to the patient and accumulate in various regions within the body in a manner that depends on, and is therefore indicative of, the biophysical and/or biochemical properties of tissue in those regions, such as properties affected by the presence and/or state of a disease, such as cancer. For example, after administration to a patient, certain radiopharmaceuticals accumulate in regions of abnormal osteogenesis associated with malignant bone lesions, which are indicative of metastasis. Other radiopharmaceuticals may bind to specific receptors, enzymes, and proteins in the body that change during disease progression. After administration to a patient, these molecules circulate in the blood until they find their intended targets. The bound radiopharmaceutical remains at the disease site, while the remainder of the agent is cleared from the body.
Nuclear medicine imaging techniques capture images by detecting radiation emitted from the radioactive portion of the radiopharmaceutical. The use of accumulated radiopharmaceuticals as beacons allows images depicting disease location and concentration to be obtained using common nuclear medicine modalities. Examples of nuclear medicine imaging modalities include bone scan imaging (also known as scintigraphy), single photon emission computed tomography (SPECT), and positron emission tomography (PET). Bone scan, SPECT, and PET imaging systems are found in most hospitals worldwide. The choice of a particular imaging modality depends on and/or dictates the particular radiopharmaceutical used. For example, technetium-99m (99mTc) labeled compounds are compatible with bone scan imaging and SPECT imaging, while PET imaging typically uses fluorinated compounds labeled with 18F. The compound 99mTc methylenediphosphonate (99mTc MDP) is a popular radiopharmaceutical used in bone scan imaging to detect metastatic cancer. Radiolabeled prostate-specific membrane antigen (PSMA) targeting compounds (e.g., 99mTc-labeled 1404 and PyL™ (also known as [18F]DCFPyL)) can be used with SPECT and PET imaging, respectively, and offer the potential for highly specific prostate cancer detection.
Nuclear medicine imaging is thus a valuable technique for providing physicians with information that can be used to determine the presence and extent of disease in a patient. A physician can use this information to provide a recommended course of treatment to the patient and to track the progression of disease.
For example, an oncologist may use nuclear medicine images from a study of a patient as input in her assessment of whether the patient has a particular disease (e.g., prostate cancer), what stage the disease has reached, what the recommended course of treatment (if any) should be, whether surgical intervention is indicated, and the likely prognosis. The oncologist may use a radiologist report in this assessment. A radiologist report is a technical evaluation of the nuclear medicine images prepared by a radiologist for the physician who ordered the imaging study, and includes, for example, the type of study performed, the clinical history, a comparison between images, the technique used to perform the study, the radiologist's observations and findings, and overall impressions and recommendations the radiologist may have based on the imaging study results. The signed radiologist report is sent to the physician who ordered the study for the physician's review, followed by a discussion between the physician and the patient about the results and any recommendations for treatment.
Thus, the process involves having a radiologist perform an imaging study on the patient, analyze the acquired images, create a radiologist report, and forward the report to the requesting physician, and then having the physician formulate an assessment and treatment recommendation and communicate the results, recommendations, and risks to the patient. The process may also involve repeating the imaging study due to inconclusive results, or scheduling further tests based on the initial results. If an imaging study shows that the patient has a particular disease or condition (e.g., cancer), the physician discusses various treatment options, including surgery, as well as the risks of adopting a watchful waiting or active surveillance approach rather than performing surgery.
Thus, the process of viewing and analyzing multiple patient images over time plays a key role in the diagnosis and treatment of cancer. There is, accordingly, a significant need for improved tools that facilitate and improve the accuracy of image review and analysis for cancer diagnosis and treatment. Improving the toolkits utilized by physicians, radiologists, and other healthcare professionals in this manner provides for significant improvements in the standard of care and in patient experience.
Disclosure of Invention
Presented herein are systems and methods that provide improved detection and characterization of lesions within a subject via automated analysis of nuclear medicine images, such as positron emission tomography (PET) and single photon emission computed tomography (SPECT) images. In particular, in certain embodiments, the methods described herein utilize artificial intelligence (AI) techniques to detect regions of a 3D nuclear medicine image that represent potential cancerous lesions in the subject. In certain embodiments, these regions correspond to hotspots: localized regions of increased intensity relative to their surroundings, resulting from increased uptake of the radiopharmaceutical within a lesion. The systems and methods described herein may use one or more machine learning modules not only to detect the presence and location of such hotspots within an image, but also to segment the regions corresponding to hotspots and/or to classify hotspots according to the likelihood that they indeed correspond to real, potentially cancerous lesions. These AI-based lesion detection, segmentation, and classification methods may provide the basis for further characterization of lesions, quantification of overall tumor burden, and estimation of disease severity and risk.
For example, once image hotspots representing lesions are detected, segmented, and classified, lesion index values may be calculated to provide a measure of radiopharmaceutical uptake within a potential lesion and/or the size (e.g., volume) of the potential lesion. The calculated lesion index values may then be aggregated to provide an overall estimate of the subject's tumor burden, disease severity, risk of metastasis, and the like. In some embodiments, a lesion index value is calculated by comparing intensities within the segmented hotspot volume to measures of intensity of particular reference organs (e.g., the liver and a portion of the aorta). Using reference organs in this way allows lesion index values to be measured on a normalized scale that can be compared between images of different subjects. In certain embodiments, the methods described herein include techniques for suppressing intensity bleed from image regions corresponding to organs and tissue regions in which radiopharmaceutical normally accumulates at high levels, such as the kidneys, liver, and bladder (e.g., urinary bladder). Intensities in the regions of nuclear medicine images corresponding to these organs are typically very high (even for normal, healthy subjects) and are not necessarily indicative of cancer. Moreover, high radiopharmaceutical accumulation in these organs produces high levels of emitted radiation. The emitted radiation may scatter, resulting in high intensities not only in the image region corresponding to the organ itself, but also in nearby voxels outside it. Such intensity bleed into image regions outside of and surrounding the regions corresponding to high-uptake organs can hinder detection of nearby lesions and cause inaccuracies in measuring uptake within them. Correcting for such intensity bleed effects therefore improves the accuracy of lesion detection and quantification.
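To make the normalization concrete, the following is a minimal sketch (in Python, assuming NumPy) of computing a lesion index from a segmented hotspot and reference-organ values. The function name, the piecewise-linear scale, and the use of mean SUV are illustrative assumptions, not the specific formula used by the claimed systems.

    import numpy as np

    def lesion_index(suv: np.ndarray, hotspot_mask: np.ndarray,
                     aorta_ref: float, liver_ref: float) -> float:
        """Express a hotspot's uptake on a normalized scale anchored at
        blood-pool (aorta) and liver reference intensities (an assumed scale)."""
        hotspot_mean = float(suv[hotspot_mask].mean())  # mean SUV over the segmented hotspot volume
        # 0.0 at the aorta reference, 1.0 at the liver reference, linear in
        # between and beyond; comparable across subjects because the scale is
        # defined by each subject's own reference organs.
        return (hotspot_mean - aorta_ref) / (liver_ref - aorta_ref)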
In certain embodiments, the AI-based lesion detection techniques described herein augment the functional information obtained from a nuclear medicine image with anatomical information obtained from an anatomical image, such as an x-ray computed tomography (CT) image. For example, a machine learning module utilized in the methods described herein may receive a plurality of input channels, including a first channel corresponding to a portion of a functional, nuclear medicine image (e.g., a PET image; e.g., a SPECT image) and an additional channel corresponding to a portion of a co-aligned anatomical (e.g., CT) image and/or anatomical information derived therefrom. Adding anatomical context in this manner may improve the accuracy of the lesion detection approach. Anatomical information may also be incorporated into lesion classification steps applied following detection. For example, in addition to calculating lesion index values based on the intensities of detected hotspots, anatomical labels may be assigned to hotspots based on their locations. For example, a detected hotspot may be automatically assigned a label (e.g., an alphanumeric label) according to whether its location corresponds to a location within the prostate, a pelvic lymph node, a non-pelvic lymph node, bone, or a soft tissue region outside the prostate and lymph nodes.
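As an illustration of this multi-channel input scheme, here is a minimal PyTorch sketch (the framework choice is an assumption; the document does not name one) in which co-registered PET and CT volumes are stacked as two input channels, much like the color channels of a photograph, and fed to a small 3D convolutional network. The architecture shown is a placeholder, not the patented model.

    import torch
    import torch.nn as nn

    # Co-registered volumes on the same voxel grid: (batch, channel, z, y, x)
    pet = torch.randn(1, 1, 64, 64, 64)  # functional (PET) intensities, e.g., SUVs
    ct  = torch.randn(1, 1, 64, 64, 64)  # anatomical (CT) intensities

    x = torch.cat([pet, ct], dim=1)      # two input channels, shape (1, 2, 64, 64, 64)

    # Placeholder network producing per-voxel hotspot likelihoods from both channels.
    model = nn.Sequential(
        nn.Conv3d(in_channels=2, out_channels=16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Conv3d(16, 1, kernel_size=1),
    )
    likelihood = torch.sigmoid(model(x))  # (1, 1, 64, 64, 64), values in (0, 1)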
In some embodiments, detected hotspots and associated information (e.g., calculated lesion index values and anatomical labels) are displayed via an interactive graphical user interface (GUI) for review by a medical professional (e.g., a physician, radiologist, technician, etc.). The medical professional may thus use the GUI to review and confirm the accuracy of the detected hotspots and their corresponding index values and/or anatomical labels. In some embodiments, the GUI may also allow the user to identify and segment (e.g., manually) additional hotspots within a medical image, allowing the medical professional to identify additional potential lesions that he/she believes the automated detection process may have missed. Once identified, lesion index values and/or anatomical labels may also be determined for these manually identified and segmented lesions. Once the user is satisfied with the set of detected hotspots and the information computed from them, he/she can confirm them and generate a final, signed report that can be reviewed and used, for example, to discuss results and diagnosis with the patient and to evaluate prognosis and treatment options.
In this way, the methods described herein provide AI-based tools for lesion detection and analysis that can improve the accuracy of, and simplify, the evaluation of disease (e.g., cancer) status and progression in a subject. This facilitates diagnosis, prognosis, and assessment of response to treatment, thereby improving patient outcomes.
In one aspect, the invention is directed to a method for automatically processing a 3D image of a subject to identify and/or characterize (e.g., grade) cancerous lesions within the subject, the method comprising: (a) receiving (e.g., and/or accessing), by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality [e.g., positron emission tomography (PET); e.g., single photon emission computed tomography (SPECT)] [e.g., wherein the 3D functional image comprises a plurality of voxels, each representing a particular physical volume within the subject and having an intensity value (e.g., a standard uptake value (SUV)) representing detected radiation emitted from the particular physical volume], wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within a target tissue region; (b) automatically detecting, by the processor, using a machine learning module [e.g., a pre-trained machine learning module (e.g., having predetermined (e.g., and fixed) parameters determined via a training procedure)], one or more hotspots within the 3D functional image, each hotspot corresponding to a localized region of increased intensity relative to its surroundings and representing (e.g., indicative of) a potential cancerous lesion within the subject, thereby establishing one or both of (i) and (ii) as follows: (i) a hotspot list identifying, for each hotspot, a location of the hotspot [e.g., a list of coordinates (e.g., image coordinates; e.g., physical space coordinates); e.g., a mask identifying voxels of the 3D functional image corresponding to locations (e.g., centroids) of the detected hotspots], and (ii) a 3D hotspot map identifying, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image {e.g., wherein the 3D hotspot map is a segmentation map (e.g., comprising one or more segmentation masks) identifying, for each hotspot, the voxels within the 3D functional image corresponding to the 3D hotspot volume of the hotspot [e.g., wherein the 3D hotspot map is obtained via artificial-intelligence-based segmentation of the functional image (e.g., using a machine learning module that receives at least the 3D functional image as input and generates the 3D hotspot map as output, thereby segmenting hotspots)]; e.g., wherein the 3D hotspot map delineates, for each hotspot, a 3D boundary (e.g., an irregular boundary) of the hotspot (e.g., the 3D boundary enclosing the 3D hotspot volume, e.g., and distinguishing the voxels of the 3D functional image that make up the 3D hotspot volume from other voxels of the 3D functional image)}; and (c) storing and/or providing the hotspot list and/or the 3D hotspot map for display and/or further processing.
In certain embodiments, the machine learning module receives at least a portion of the 3D functional image as input and automatically detects the one or more hotspots based at least in part on intensities of voxels of the received portion of the 3D functional image. In certain embodiments, the machine learning module receives as input a 3D segmentation map identifying one or more volumes of interest (VOIs) within the 3D functional image, each VOI corresponding to a particular target tissue region and/or a particular anatomical region within the subject [e.g., a soft tissue region (e.g., prostate, lymph node, lung, breast); e.g., one or more particular bones; e.g., an overall skeletal region].
In certain embodiments, the method comprises receiving (e.g., and/or accessing), by the processor, a 3D anatomical image of the subject obtained using an anatomical imaging modality [e.g., x-ray computed tomography (CT); e.g., magnetic resonance imaging (MRI); e.g., ultrasound], wherein the 3D anatomical image comprises a graphical representation of tissue (e.g., soft tissue and/or bone) within the subject, and the machine learning module receives at least two input channels, including a first input channel corresponding to at least a portion of the 3D anatomical image and a second input channel corresponding to at least a portion of the 3D functional image [e.g., wherein the machine learning module receives the PET image and the CT image as separate channels (e.g., separate channels representing the same volume), analogous to the manner in which a machine learning module receives the color channels (e.g., RGB) of a photographic color image].
In certain embodiments, the machine learning module receives as input a 3D segmentation map that identifies one or more volumes of interest (VOIs) within the 3D functional image and/or the 3D anatomical image, each VOI corresponding to a particular target tissue region and/or a particular anatomical region. In certain embodiments, the method includes automatically segmenting, by the processor, the 3D anatomical image, thereby creating the 3D segmentation map.
In certain embodiments, the machine learning module is a region-specific machine learning module that receives as input specific portions of the 3D functional image corresponding to one or more specific tissue regions and/or anatomical regions of the subject.
In certain embodiments, the machine learning module generates the list of hotspots as an output [ e.g., wherein the machine learning module implements a machine learning algorithm (e.g., an Artificial Neural Network (ANN)) trained to determine one or more locations (e.g., 3D coordinates) based on intensities of voxels of at least a portion of the 3D functional image, each location corresponding to a location of one of the one or more hotspots ].
In certain embodiments, the machine learning module generates the 3D hotspot map as an output [e.g., wherein the machine learning module implements a machine learning algorithm (e.g., an artificial neural network (ANN)) trained to segment the 3D functional image (e.g., based at least in part on intensities of voxels of the 3D functional image) to identify the 3D hotspot volumes of the 3D hotspot map (e.g., the 3D hotspot map delineating, for each hotspot, a 3D boundary (e.g., an irregular boundary) of the hotspot, thereby identifying the 3D hotspot volume (e.g., enclosed by the 3D boundary)); e.g., wherein the machine learning module implements a machine learning algorithm trained to determine, for each voxel of at least a portion of the 3D functional image, a hotspot likelihood value representing a likelihood that the voxel corresponds to a hotspot (e.g., and step (b) comprises performing one or more subsequent processing steps, such as thresholding, that use the hotspot likelihood values to identify the 3D hotspot volumes of the 3D hotspot map)].
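A hedged sketch of the thresholding-style post-processing mentioned above, assuming NumPy and SciPy: a per-voxel likelihood volume is binarized and its connected components are labeled, yielding both a 3D hotspot map (one integer label per hotspot volume) and a hotspot list of centroid locations. The threshold value is an arbitrary assumption.

    import numpy as np
    from scipy import ndimage

    def hotspots_from_likelihood(likelihood: np.ndarray, thresh: float = 0.5):
        """Turn a per-voxel hotspot-likelihood volume into a labeled 3D
        hotspot map and a list of hotspot centroid coordinates."""
        mask = likelihood > thresh                      # binary hotspot voxels
        labels, n = ndimage.label(mask)                 # connected components: one label per hotspot
        centroids = ndimage.center_of_mass(mask, labels, range(1, n + 1))
        return labels, centroids                        # (3D hotspot map, hotspot list)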
In certain embodiments, the method comprises: (d) Determining, by the processor, for each hotspot of at least a portion of the hotspots, a lesion likelihood classification corresponding to a likelihood that the hotspot represents a lesion within the subject [ e.g., a binary classification indicating whether the hotspot is a real lesion; for example, a likelihood value on a scale (e.g., a floating point value in the range of zero to one) that represents the likelihood that the hotspot represents a real lesion ].
In certain embodiments, step (d) comprises using a second machine learning module to determine the lesion likelihood classifications of the hotspots of the portion [e.g., wherein the machine learning module implements a machine learning algorithm trained to detect hotspots (e.g., generating the hotspot list and/or the 3D hotspot map as output) and to determine, for each hotspot, the lesion likelihood classification of the hotspot]. In certain embodiments, step (d) comprises using a second machine learning module (e.g., a hotspot classification module) to determine the lesion likelihood classification of each hotspot [e.g., based at least in part on one or more members selected from the group consisting of: intensities of the 3D functional image, the hotspot list, the 3D hotspot map, intensities of a 3D anatomical image, and a 3D segmentation map; e.g., wherein the second machine learning module receives one or more input channels corresponding to one or more members selected from the group consisting of intensities of the 3D functional image, the hotspot list, the 3D hotspot map, intensities of a 3D anatomical image, and a 3D segmentation map].
In certain embodiments, the method includes determining, by the processor, for each hotspot, a set of one or more hotspot features and using the set of one or more hotspot features as input to the second machine learning module.
In certain embodiments, the method comprises: (e) selecting, by the processor, based at least in part on the lesion likelihood classifications of the hotspots, a subset of the one or more hotspots corresponding to hotspots having a high likelihood of corresponding to cancerous lesions (e.g., for inclusion in a report; e.g., for use in computing one or more risk index values for the subject).
In certain embodiments, the method comprises: (f) adjusting, by the processor, intensities of voxels of the 3D functional image to correct for intensity bleed (e.g., crosstalk) from one or more high-intensity volumes of the 3D functional image, each of the one or more high-intensity volumes corresponding to a high-uptake tissue region within the subject associated with normally high radiopharmaceutical uptake (e.g., not necessarily indicative of cancer). In certain embodiments, step (f) comprises correcting for intensity bleed from multiple high-intensity volumes one at a time, in a sequential manner [e.g., first adjusting intensities of voxels of the 3D functional image to correct for intensity bleed from a first high-intensity volume, thereby producing a first corrected image, then adjusting intensities of voxels of the first corrected image to correct for intensity bleed from a second high-intensity volume, and so on]. In certain embodiments, the one or more high-intensity volumes correspond to one or more high-uptake tissue regions selected from the group consisting of kidney, liver, and bladder (e.g., urinary bladder).
In certain embodiments, the method comprises: (g) determining, by the processor, for each hotspot of at least a portion of the one or more hotspots, a corresponding lesion index indicative of a level of radiopharmaceutical uptake within the potential lesion corresponding to the hotspot and/or a size (e.g., volume) of the potential lesion. In certain embodiments, step (g) comprises comparing the intensity(ies) (e.g., corresponding to standard uptake values (SUVs)) of one or more voxels associated with the hotspot (e.g., at and/or about the location of the hotspot; e.g., within the hotspot volume) with one or more reference values, each reference value associated with a particular reference tissue region (e.g., liver; e.g., aorta portion) within the subject and determined based on intensities (e.g., SUV values) of a reference volume corresponding to the reference tissue region [e.g., as an average (e.g., a robust average, such as a mean of values within an interquartile range)]. In certain embodiments, the one or more reference values comprise one or more members selected from the group consisting of an aorta reference value associated with an aorta portion of the subject and a liver reference value associated with a liver of the subject.
In certain embodiments, for at least one particular reference value associated with a particular reference tissue region, determining the particular reference value comprises fitting a multi-component mixture model (e.g., a two-component Gaussian mixture model) to intensities of voxels within a particular reference volume corresponding to the particular reference tissue region [e.g., fitting the model to a distribution of voxel intensities (e.g., a histogram of voxel intensities)] [e.g., and identifying one or more minor peaks in the voxel intensity distribution that correspond to voxels associated with abnormal uptake, and excluding those voxels from the reference value determination (e.g., to account for the effects of abnormally low radiopharmaceutical uptake in certain portions of the reference tissue region, e.g., portions of the liver)].
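The following is a minimal sketch of such a two-component mixture fit, assuming scikit-learn. Taking the mean of the higher-weight component as the reference value is one plausible reading of using the dominant mode; the helper name is hypothetical.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def reference_value(ref_voxel_suvs: np.ndarray) -> float:
        """Fit a two-component Gaussian mixture to reference-volume
        intensities and return the mean of the dominant component, so that a
        minor low-uptake mode (e.g., tumor-involved portions of the liver)
        does not skew the reference value."""
        gmm = GaussianMixture(n_components=2, random_state=0)
        gmm.fit(ref_voxel_suvs.reshape(-1, 1))
        dominant = int(np.argmax(gmm.weights_))   # the major mode of the mixture
        return float(gmm.means_[dominant, 0])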
In certain embodiments, the method comprises calculating (e.g., automatically, by the processor), using the determined lesion index values, an overall risk index for the subject indicative of a cancer status and/or risk of the subject.
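As a toy illustration of such aggregation (the actual formula is not specified in this summary), one might sum index-weighted lesion volumes across all selected hotspots; both the weighting and the function name below are assumptions.

    def overall_risk_index(lesion_indices, lesion_volumes_ml):
        """Aggregate per-lesion index values into a single subject-level
        score (an assumed index-weighted total lesion volume)."""
        return sum(index * volume for index, volume in zip(lesion_indices, lesion_volumes_ml))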
In certain embodiments, the method comprises determining (e.g., automatically), by the processor, for each hotspot, an anatomical classification corresponding to a particular anatomical region and/or group of anatomical regions within the subject in which the potential cancerous lesion represented by the hotspot is determined [e.g., by the processor (e.g., based on a received and/or determined 3D segmentation map)] to be located [e.g., within the prostate, pelvic lymph nodes, non-pelvic lymph nodes, bone (e.g., a bone metastatic region), or a soft tissue region not located in the prostate or lymph nodes].
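A hedged sketch of one simple way such a classification could be made: look up a hotspot's centroid in a co-registered 3D segmentation map of labeled anatomical regions. The integer label coding and the function name are assumptions for illustration.

    import numpy as np

    REGION_LABELS = {1: "prostate", 2: "pelvic lymph node", 3: "non-pelvic lymph node",
                     4: "bone", 5: "other soft tissue"}

    def classify_hotspot(centroid_zyx, segmentation: np.ndarray) -> str:
        """Assign an anatomical label to a hotspot from the segmentation-map
        region containing its centroid."""
        idx = tuple(int(round(c)) for c in centroid_zyx)
        return REGION_LABELS.get(int(segmentation[idx]), "unclassified")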
In certain embodiments, the method comprises: (h) causing, by the processor, a graphical representation of at least a portion of the one or more hotspots to be displayed within a graphical user interface (GUI) for review by a user. In certain embodiments, the method comprises: (i) receiving, by the processor, via the GUI, a user selection of a subset of the one or more hotspots identified, via the user's review, as likely to represent potential cancerous lesions within the subject.
In certain embodiments, the 3D functional image comprises a PET or SPECT image obtained following administration of an agent (e.g., a radiopharmaceutical; e.g., an imaging agent) to the subject. In certain embodiments, the agent comprises a PSMA binding agent. In certain embodiments, the agent comprises 18F. In certain embodiments, the agent comprises [18F]DCFPyL. In certain embodiments, the agent comprises PSMA-11 (e.g., 68Ga-PSMA-11). In certain embodiments, the agent comprises one or more members selected from the group consisting of 99mTc, 68Ga, 177Lu, 225Ac, 111In, 123I, 124I, and 131I.
In certain embodiments, the machine learning module implements a neural network [ e.g., an Artificial Neural Network (ANN); for example, convolutional Neural Network (CNN) ].
In certain embodiments, the processor is a processor of a cloud-based system.
In another aspect, the invention is directed to a method for automatically processing a 3D image of a subject to identify and/or characterize (e.g., grade) cancerous lesions within the subject, the method comprising: (a) receiving (e.g., and/or accessing), by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality [e.g., positron emission tomography (PET); e.g., single photon emission computed tomography (SPECT)] [e.g., wherein the 3D functional image comprises a plurality of voxels, each representing a particular physical volume within the subject and having an intensity value representing detected radiation emitted from the particular physical volume], wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within a target tissue region; (b) receiving (e.g., and/or accessing), by the processor, a 3D anatomical image of the subject obtained using an anatomical imaging modality [e.g., x-ray computed tomography (CT); e.g., magnetic resonance imaging (MRI); e.g., ultrasound], wherein the 3D anatomical image comprises a graphical representation of tissue (e.g., soft tissue and/or bone) within the subject; (c) automatically detecting, by the processor, using a machine learning module, one or more hotspots within the 3D functional image, each hotspot corresponding to a localized region of increased intensity relative to its surroundings and representing (e.g., indicative of) a potential cancerous lesion within the subject, thereby establishing one or both of (i) and (ii) as follows: (i) a hotspot list identifying, for each hotspot, a location of the hotspot, and (ii) a 3D hotspot map identifying, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image {e.g., wherein the 3D hotspot map is a segmentation map (e.g., comprising one or more segmentation masks) identifying, for each hotspot, the voxels within the 3D functional image corresponding to the 3D hotspot volume of the hotspot [e.g., wherein the 3D hotspot map is obtained via artificial-intelligence-based segmentation of the functional image (e.g., using a machine learning module that receives at least the 3D functional image as input and generates the 3D hotspot map as output, thereby segmenting hotspots)]; e.g., wherein the 3D hotspot map delineates, for each hotspot, a 3D boundary (e.g., an irregular boundary) of the hotspot (e.g., the 3D boundary enclosing the 3D hotspot volume, e.g., and distinguishing the voxels of the 3D functional image that make up the 3D hotspot volume from other voxels of the 3D functional image)}, wherein the machine learning module receives at least two input channels, including a first input channel corresponding to at least a portion of the 3D anatomical image and a second input channel corresponding to at least a portion of the 3D functional image [e.g., wherein the machine learning module receives the PET image and the CT image as separate channels (e.g., separate channels representing the same volume), analogous to the manner in which a machine learning module receives the color channels (e.g., RGB) of a photographic color image] and/or anatomical information derived therefrom [e.g., a 3D segmentation map identifying one or more volumes of interest (VOIs) within the 3D functional image, each VOI corresponding to a particular target tissue region and/or particular anatomical region]; and (d) storing and/or providing the hotspot list and/or the 3D hotspot map for display and/or further processing.
In another aspect, the invention is directed to a method for automatically processing a 3D image of a subject to identify and/or characterize (e.g., grade) cancerous lesions within the subject, the method comprising: (a) receiving (e.g., and/or accessing), by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality [e.g., positron emission tomography (PET); e.g., single photon emission computed tomography (SPECT)] [e.g., wherein the 3D functional image comprises a plurality of voxels, each representing a particular physical volume within the subject and having an intensity value representing detected radiation emitted from the particular physical volume], wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within a target tissue region; (b) automatically detecting, by the processor, using a first machine learning module, one or more hotspots within the 3D functional image, each hotspot corresponding to a localized region of increased intensity relative to its surroundings and representing (e.g., indicative of) a potential cancerous lesion within the subject, thereby establishing a hotspot list identifying, for each hotspot, a location of the hotspot [e.g., wherein the first machine learning module implements a machine learning algorithm (e.g., an artificial neural network (ANN)) trained to determine, based on intensities of voxels of at least a portion of the 3D functional image, one or more locations (e.g., 3D coordinates), each corresponding to a location of one of the one or more hotspots]; (c) automatically determining, by the processor, using a second machine learning module and the hotspot list, for each of the one or more hotspots, a corresponding 3D hotspot volume within the 3D functional image, thereby establishing a 3D hotspot map [e.g., wherein the second machine learning module implements a machine learning algorithm (e.g., an artificial neural network (ANN)) trained to segment the 3D functional image, based at least in part on the hotspot list along with intensities of voxels of the 3D functional image, to identify the 3D hotspot volumes of the 3D hotspot map; e.g., wherein the second machine learning module implements a machine learning algorithm trained to determine, for each voxel of at least a portion of the 3D functional image, a hotspot likelihood value representing a likelihood that the voxel corresponds to a hotspot (e.g., and step (c) comprises performing one or more subsequent processing steps, such as thresholding, that use the hotspot likelihood values to identify the 3D hotspot volumes of the 3D hotspot map)] {e.g., wherein the 3D hotspot map is a segmentation map (e.g., comprising one or more segmentation masks) generated using (e.g., based on and/or corresponding to an output from) the second machine learning module, the segmentation map identifying, for each hotspot, the voxels within the 3D functional image corresponding to the 3D hotspot volume of the hotspot; e.g., wherein the 3D hotspot map delineates, for each hotspot, a 3D boundary (e.g., an irregular boundary) of the hotspot (e.g., the 3D boundary enclosing the 3D hotspot volume, e.g., and distinguishing the voxels of the 3D functional image that make up the 3D hotspot volume from other voxels of the 3D functional image)}; and (d) storing and/or providing the hotspot list and/or the 3D hotspot map for display and/or further processing.
In certain embodiments, the method comprises: (e) determining, by the processor, for each hotspot of at least a portion of the hotspots, a lesion likelihood classification corresponding to a likelihood that the hotspot represents a lesion within the subject. In certain embodiments, step (e) comprises using a third machine learning module (e.g., a hotspot classification module) to determine the lesion likelihood classification of each hotspot [e.g., based at least in part on one or more members selected from the group consisting of intensities of the 3D functional image, the hotspot list, the 3D hotspot map, intensities of a 3D anatomical image, and a 3D segmentation map; e.g., wherein the third machine learning module receives one or more input channels corresponding to one or more members selected from the group consisting of intensities of the 3D functional image, the hotspot list, the 3D hotspot map, intensities of a 3D anatomical image, and a 3D segmentation map].
In certain embodiments, the method comprises: (f) selecting, by the processor, based at least in part on the lesion likelihood classifications of the hotspots, a subset of the one or more hotspots corresponding to hotspots having a high likelihood of corresponding to cancerous lesions (e.g., for inclusion in a report; e.g., for use in computing one or more risk index values for the subject).
In another aspect, the invention is directed to a method of measuring intensity values within a reference volume (e.g., a liver volume associated with a liver of a subject) corresponding to a reference tissue region, so as to avoid effects from tissue regions associated with low (e.g., abnormally low) radiopharmaceutical uptake (e.g., due to tumors without tracer uptake), the method comprising: (a) receiving (e.g., and/or accessing), by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality [e.g., positron emission tomography (PET); e.g., single photon emission computed tomography (SPECT)] [e.g., wherein the 3D functional image comprises a plurality of voxels, each representing a particular physical volume within the subject and having an intensity value representing detected radiation emitted from the particular physical volume, wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within a target tissue region]; (b) identifying, by the processor, the reference volume within the 3D functional image; (c) fitting, by the processor, a multi-component mixture model (e.g., a two-component Gaussian mixture model) to intensities of voxels within the reference volume [e.g., fitting the multi-component mixture model to a distribution (e.g., a histogram) of intensities of voxels within the reference volume]; (d) identifying, by the processor, a major mode of the multi-component mixture model; (e) determining, by the processor, a measure of intensity (e.g., mean, maximum, mode, median, etc.) corresponding to the major mode, thereby determining a reference intensity value corresponding to a measure of intensities of voxels that are (i) within the reference tissue volume and (ii) associated with the major mode (e.g., and excluding, from the reference value calculation, voxels having intensities associated with minor modes) (e.g., thereby avoiding effects from tissue regions associated with low radiopharmaceutical uptake); (f) detecting, by the processor, one or more hotspots within the functional image corresponding to potential cancerous lesions; and (g) determining, by the processor, for each hotspot of at least a portion of the detected hotspots, a lesion index value using at least the reference intensity value [e.g., the lesion index value being based on (i) a measure of intensities of voxels corresponding to the detected hotspot and (ii) the reference intensity value].
In another aspect, the invention is directed to a method of correcting for intensity bleed (e.g., crosstalk) from high-intensity image volumes corresponding to high-uptake tissue regions within a subject that are normally associated with high radiopharmaceutical uptake (e.g., and not necessarily indicative of cancer), the method comprising: (a) receiving (e.g., and/or accessing), by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality [e.g., positron emission tomography (PET); e.g., single photon emission computed tomography (SPECT)] [e.g., wherein the 3D functional image comprises a plurality of voxels, each representing a particular physical volume within the subject and having an intensity value representing detected radiation emitted from the particular physical volume, wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within a target tissue region]; (b) identifying, by the processor, a high-intensity volume within the 3D functional image, the high-intensity volume corresponding to a particular high-uptake tissue region (e.g., a kidney; e.g., a liver; e.g., a bladder) in which high radiopharmaceutical uptake normally occurs; (c) identifying, by the processor, based on the identified high-intensity volume, a suppression volume within the 3D functional image, the suppression volume corresponding to a volume lying outside a boundary of the identified high-intensity volume and within a predetermined decay distance from the boundary of the identified high-intensity volume; (d) determining, by the processor, a background image corresponding to a version of the 3D functional image in which intensities of voxels within the high-intensity volume are replaced with interpolated values determined based on intensities of voxels of the 3D functional image within the suppression volume; (e) determining, by the processor, an estimation image by subtracting intensities of voxels of the background image from intensities of voxels of the 3D functional image (e.g., performing a voxel-wise subtraction); (f) determining, by the processor, a suppression map by: extrapolating intensities of voxels of the estimation image corresponding to the high-intensity volume to locations of voxels within the suppression volume, thereby determining intensities of voxels of the suppression map corresponding to the suppression volume, and setting intensities of voxels of the suppression map corresponding to locations outside the suppression volume to zero; and (g) adjusting, by the processor, intensities of voxels of the 3D functional image based on the suppression map (e.g., by subtracting intensities of voxels of the suppression map from intensities of voxels of the 3D functional image), thereby correcting for intensity bleed from the high-intensity volume.
In certain embodiments, the method comprises performing steps (b) through (g) in a sequential manner for each of a plurality of high intensity volumes, thereby correcting intensity bleed from each of the plurality of high intensity volumes.
In certain embodiments, the plurality of high intensity volumes comprises one or more members selected from the group consisting of kidneys, liver, and bladder (e.g., urinary bladder).
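A simplified sketch of the correction loop in steps (b) through (g), assuming NumPy and SciPy: interpolation and extrapolation are reduced to crude neighborhood statistics and an assumed exponential decay model purely to show the data flow, so this is illustrative rather than the claimed implementation.

    import numpy as np
    from scipy import ndimage

    def correct_bleed(img: np.ndarray, organ_mask: np.ndarray,
                      decay_mm: float, voxel_mm: float) -> np.ndarray:
        # (c) Suppression volume: shell outside the organ, within the decay distance.
        dist_mm = ndimage.distance_transform_edt(~organ_mask) * voxel_mm
        shell = (~organ_mask) & (dist_mm <= decay_mm)

        # (d) Background image: organ intensities replaced by a value interpolated
        # from the surrounding shell (here, the shell mean, as a crude stand-in).
        background = img.copy()
        background[organ_mask] = img[shell].mean()

        # (e) Estimation image: the organ's signal with the background removed.
        estimate = img - background

        # (f) Suppression map: extrapolate organ signal into the shell with an
        # assumed exponential fall-off; zero everywhere outside the shell.
        suppression = np.zeros_like(img)
        suppression[shell] = estimate[organ_mask].mean() * np.exp(-dist_mm[shell] / decay_mm)

        # (g) Subtract the estimated bleed from the original image.
        return img - suppression

    # Sequential application over several high-uptake organs, per the embodiment
    # above (the masks and distances here are hypothetical):
    # for mask in (kidney_mask, liver_mask, bladder_mask):
    #     img = correct_bleed(img, mask, decay_mm=20.0, voxel_mm=2.0)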
In another aspect, the invention is directed to a method for automatically processing a 3D image of a subject to identify and/or characterize (e.g., grade) cancerous lesions within the subject, the method comprising: (a) receiving (e.g., and/or accessing), by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality [e.g., positron emission tomography (PET); e.g., single photon emission computed tomography (SPECT)] [e.g., wherein the 3D functional image comprises a plurality of voxels, each representing a particular physical volume within the subject and having an intensity value representing detected radiation emitted from the particular physical volume], wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within a target tissue region; (b) automatically detecting, by the processor, one or more hotspots within the 3D functional image, each hotspot corresponding to a localized region of increased intensity relative to its surroundings and representing (e.g., indicative of) a potential cancerous lesion within the subject; (c) causing, by the processor, rendering of a graphical representation of the one or more hotspots for display within an interactive graphical user interface (GUI) (e.g., a quality control and reporting GUI); (d) receiving, by the processor, via the interactive GUI, a user selection of a final set of hotspots comprising at least a portion (e.g., up to all) of the one or more automatically detected hotspots (e.g., for inclusion in a report); and (e) storing and/or providing the final set of hotspots for display and/or further processing.
In certain embodiments, the method comprises: (f) Receiving, by the processor, user selections of one or more additional, user-identified hotspots via the GUI for inclusion in the final set of hotspots; and (g) updating, by the processor, the final set of hotspots to include the one or more additional, user-identified hotspots.
In certain embodiments, step (b) comprises using one or more machine learning modules.
In another aspect, the invention is directed to a method for automatically processing a 3D image of a subject to identify and/or characterize (e.g., grade) cancerous lesions within the subject, the method comprising: (a) receiving (e.g., and/or accessing), by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality [e.g., positron emission tomography (PET); e.g., single photon emission computed tomography (SPECT)] [e.g., wherein the 3D functional image comprises a plurality of voxels, each representing a particular physical volume within the subject and having an intensity value representing detected radiation emitted from the particular physical volume], wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within a target tissue region; (b) automatically detecting, by the processor, one or more hotspots within the 3D functional image, each hotspot corresponding to a localized region of increased intensity relative to its surroundings and representing (e.g., indicative of) a potential cancerous lesion within the subject; (c) automatically determining, by the processor, for each of at least a portion of the one or more hotspots, an anatomical classification corresponding to a particular anatomical region and/or group of anatomical regions within the subject in which the potential cancerous lesion represented by the hotspot is determined [e.g., by the processor (e.g., based on a received and/or determined 3D segmentation map)] to be located [e.g., within the prostate, pelvic lymph nodes, non-pelvic lymph nodes, bone (e.g., a bone metastatic region), or a soft tissue region not located in the prostate or lymph nodes]; and (d) storing and/or providing, for display and/or further processing, an identification of the one or more hotspots along with, for each hotspot, the corresponding anatomical classification.
In certain embodiments, step (b) comprises using one or more machine learning modules.
In another aspect, the invention is directed to a system for automatically processing a 3D image of a subject to identify and/or characterize (e.g., grade) cancerous lesions within the subject, the system comprising: a processor of a computing device; and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to: (a) receive (e.g., and/or access) a 3D functional image of the subject obtained using a functional imaging modality [e.g., positron emission tomography (PET); e.g., single photon emission computed tomography (SPECT)] [e.g., wherein the 3D functional image comprises a plurality of voxels, each representing a particular physical volume within the subject and having an intensity value (e.g., a standard uptake value (SUV)) representing detected radiation emitted from the particular physical volume], wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within a target tissue region; (b) automatically detect, using a machine learning module [e.g., a pre-trained machine learning module (e.g., having predetermined (e.g., and fixed) parameters determined via a training procedure)], one or more hotspots within the 3D functional image, each hotspot corresponding to a localized region of increased intensity relative to its surroundings and representing (e.g., indicative of) a potential cancerous lesion within the subject, thereby establishing one or both of (i) and (ii) as follows: (i) a hotspot list identifying, for each hotspot, a location of the hotspot [e.g., a list of coordinates (e.g., image coordinates; e.g., physical space coordinates); e.g., a mask identifying voxels of the 3D functional image corresponding to locations (e.g., centroids) of the detected hotspots], and (ii) a 3D hotspot map identifying, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image {e.g., wherein the 3D hotspot map is a segmentation map (e.g., comprising one or more segmentation masks) identifying, for each hotspot, the voxels within the 3D functional image corresponding to the 3D hotspot volume of the hotspot [e.g., wherein the 3D hotspot map is obtained via artificial-intelligence-based segmentation of the functional image (e.g., using a machine learning module that receives at least the 3D functional image as input and generates the 3D hotspot map as output, thereby segmenting hotspots)]; e.g., wherein the 3D hotspot map delineates, for each hotspot, a 3D boundary (e.g., an irregular boundary) of the hotspot (e.g., the 3D boundary enclosing the 3D hotspot volume, e.g., and distinguishing the voxels of the 3D functional image that make up the 3D hotspot volume from other voxels of the 3D functional image)}; and (c) store and/or provide the hotspot list and/or the 3D hotspot map for display and/or further processing.
In certain embodiments, the machine learning module receives at least a portion of the 3D functional image as input and automatically detects the one or more hotspots based at least in part on intensities of voxels of the received portion of the 3D functional image.
In certain embodiments, the machine learning module receives as input a 3D segmentation map identifying one or more volumes of interest (VOIs) within the 3D functional image, each VOI corresponding to a particular target tissue region and/or a particular anatomical region within the subject [ e.g., a soft tissue region (e.g., prostate, lymph node, lung, breast); e.g., one or more specific bones; e.g., an entire skeletal region ].
In some embodiments, the instructions cause the processor to: receive (e.g., and/or access) a 3D anatomical image of the subject obtained using an anatomical imaging modality [ e.g., x-ray Computed Tomography (CT); e.g., Magnetic Resonance Imaging (MRI); e.g., ultrasound ], wherein the 3D anatomical image comprises a graphical representation of tissue (e.g., soft tissue and/or bone) within the subject, and the machine learning module receives at least two input channels, including a first input channel corresponding to at least a portion of the 3D anatomical image and a second input channel corresponding to at least a portion of the 3D functional image [ e.g., wherein the machine learning module receives the PET image and the CT image as separate channels (e.g., separate channels representing the same volume), similar to the manner in which a machine learning module receives the color channels (e.g., RGB) of a photographic color image ].
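As a minimal sketch of the two-channel arrangement described above (assuming a PyTorch implementation, which the disclosure does not mandate, and assuming the CT volume has been resampled to the PET grid so that both channels represent the same volume; layer sizes are placeholders):

    import torch
    import torch.nn as nn

    class TwoChannelDetector(nn.Module):
        """Toy 3D network taking PET and CT as two input channels, analogous to
        the color channels of a photographic image (illustrative only)."""
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv3d(in_channels=2, out_channels=16, kernel_size=3, padding=1)
            self.head = nn.Conv3d(16, 1, kernel_size=1)  # voxelwise hotspot likelihood

        def forward(self, pet: torch.Tensor, ct: torch.Tensor) -> torch.Tensor:
            x = torch.stack([pet, ct], dim=1)  # (N, 2, D, H, W): PET + CT channels
            return torch.sigmoid(self.head(torch.relu(self.conv(x))))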
In certain embodiments, the machine learning module receives as input a 3D segmentation map that identifies one or more volumes of interest (VOIs) within the 3D functional image and/or the 3D anatomical image, each VOI corresponding to a particular target tissue region and/or a particular anatomical region.
In certain embodiments, the instructions cause the processor to automatically segment the 3D anatomical image, thereby establishing the 3D segmentation map.
In certain embodiments, the machine learning module is a region-specific machine learning module that receives as input specific portions of the 3D functional image corresponding to one or more specific tissue regions and/or anatomical regions of the subject.
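For illustration, a region-specific module of this kind might operate on only the sub-volume of the functional image surrounding a given region; the sketch below (assuming an integer-labeled 3D segmentation map and an arbitrary voxel margin) shows one way such a portion could be extracted:

    import numpy as np

    def crop_to_region(functional: np.ndarray, segmentation: np.ndarray,
                       region_label: int, margin: int = 4) -> np.ndarray:
        """Extract the sub-volume of a 3D functional image around one anatomical
        region (identified by its label in a 3D segmentation map)."""
        idx = np.argwhere(segmentation == region_label)  # voxel coordinates of the region
        lo = np.maximum(idx.min(axis=0) - margin, 0)
        hi = np.minimum(idx.max(axis=0) + margin + 1, functional.shape)
        return functional[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]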
In certain embodiments, the machine learning module generates the list of hotspots as an output [ e.g., wherein the machine learning module implements a machine learning algorithm (e.g., an Artificial Neural Network (ANN)) trained to determine one or more locations (e.g., 3D coordinates) based on intensities of voxels of at least a portion of the 3D functional image, each location corresponding to a location of one of the one or more hotspots ].
In certain embodiments, the machine learning module generates the 3D hotspot map as an output [ e.g., wherein the machine learning module implements a machine learning algorithm (e.g., an Artificial Neural Network (ANN)) trained to segment the 3D functional image (e.g., based at least in part on intensities of voxels of the 3D functional image) to identify the 3D hotspot volumes of the 3D hotspot map (e.g., the 3D hotspot map delineating, for each hotspot, a 3D boundary (e.g., an irregular boundary) of the hotspot, thereby identifying the 3D hotspot volume (e.g., enclosed by the 3D hotspot boundary)); e.g., wherein the machine learning module implements a machine learning algorithm trained to determine, for each voxel of at least a portion of the 3D functional image, a hotspot likelihood value representing the likelihood that the voxel corresponds to a hotspot (e.g., and step (b) includes performing one or more subsequent processing steps, such as thresholding, using the hotspot likelihood values to identify the 3D hotspot volumes of the 3D hotspot map) ].
In some embodiments, the instructions cause the processor to: (d) Determining, for each hotspot of at least a portion of the hotspots, a lesion likelihood classification corresponding to a likelihood that the hotspot represents a lesion within the subject [ e.g., a binary classification indicating whether the hotspot is a true lesion; for example, a likelihood value on a scale (e.g., a floating point value in the range of zero to one) that represents the likelihood that the hotspot represents a real lesion ].
In certain embodiments, at step (d), the instructions cause the processor to use the machine learning module to determine the lesion likelihood classification for each hotspot of the portion [ e.g., wherein the machine learning module implements a machine learning algorithm trained to both detect hotspots (e.g., generate the hotspot list and/or the 3D hotspot map as output) and determine, for each hotspot, the lesion likelihood classification of the hotspot ].
In certain embodiments, at step (d), the instructions cause the processor to use a second machine learning module (e.g., a hotspot classification module) to determine the lesion likelihood classification for each hotspot [ e.g., based at least in part on one or more members selected from the group consisting of the intensities of the 3D functional image, the hotspot list, the 3D hotspot map, the intensities of a 3D anatomical image, and a 3D segmentation map; e.g., wherein the second machine learning module receives one or more input channels corresponding to one or more members selected from the group consisting of the intensities of the 3D functional image, the hotspot list, the 3D hotspot map, the intensities of a 3D anatomical image, and a 3D segmentation map ].
In certain embodiments, the instructions cause the processor to determine a set of one or more hotspot features for each hotspot and use the set of one or more hotspot features as input to the second machine learning module.
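As one hypothetical realization of such a feature set (the disclosure does not fix the particular features), per-hotspot statistics can be computed from an SUV volume and a labeled 3D hotspot map and passed to a second, classification module:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def hotspot_features(suv: np.ndarray, hotspot_map: np.ndarray, label: int) -> list:
        """Illustrative hand-crafted features for one segmented hotspot."""
        voxels = suv[hotspot_map == label]
        return [voxels.max(), voxels.mean(), voxels.std(), float(voxels.size)]

    # A second (hotspot classification) machine learning module could then be
    # trained on such features with labels indicating true lesions, e.g.:
    #   clf = RandomForestClassifier().fit(X_train, y_train)
    #   lesion_likelihood = clf.predict_proba(X_new)[:, 1]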
In certain embodiments, the instructions cause the processor to: (e) Select, based at least in part on the lesion likelihood classifications of the hotspots, a subset of the one or more hotspots having a high likelihood of corresponding to cancerous lesions (e.g., for inclusion in a report; e.g., for calculating one or more risk index values for the subject).
In some embodiments, the instructions cause the processor to: (f) Adjust intensities of voxels of the 3D functional image to correct for intensity bleed (e.g., crosstalk) from one or more high-intensity volumes of the 3D functional image, each of the one or more high-intensity volumes corresponding to a high-uptake tissue region within the subject associated with normally high radiopharmaceutical uptake (e.g., not necessarily indicative of cancer).
In certain embodiments, at step (f), the instructions cause the processor to correct intensity bleed from multiple high-intensity volumes one at a time, in a sequential manner [ e.g., first adjusting intensities of voxels of the 3D functional image to correct intensity bleed from a first high-intensity volume to produce a first corrected image, then adjusting intensities of voxels of the first corrected image to correct intensity bleed from a second high-intensity volume, and so on ].
In certain embodiments, the one or more high intensity volumes correspond to one or more high uptake tissue regions selected from the group consisting of kidney, liver, and bladder (e.g., urinary bladder).
In some embodiments, the instructions cause the processor to: (g) Determine, for each of at least a portion of the one or more hotspots, a corresponding lesion index indicative of a level of radiopharmaceutical uptake within, and/or a size (e.g., volume) of, the potential lesion corresponding to the hotspot.
In certain embodiments, at step (g), the instructions cause the processor to compare the intensity (or intensities) (e.g., corresponding to a Standardized Uptake Value (SUV)) of one or more voxels associated with the hotspot (e.g., at and/or near the location of the hotspot; e.g., within the volume of the hotspot) with one or more reference values, each reference value being associated with a particular reference tissue region (e.g., the liver; e.g., an aortic portion) within the subject and determined [ e.g., as an average (e.g., a robust average, such as a mean of values within an interquartile range) ] based on the intensities (e.g., SUV values) of a reference volume corresponding to the reference tissue region.
In certain embodiments, the one or more reference values comprise one or more members selected from the group consisting of an aortic reference value associated with an aortic portion of the subject and a liver reference value associated with a liver of the subject.
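A hypothetical lesion-index mapping built on such aorta (blood-pool) and liver reference values is sketched below; the specific cut-offs are illustrative assumptions and are not taken from the disclosure:

    def lesion_index(hotspot_suv: float, blood_suv: float, liver_suv: float) -> int:
        """Map a hotspot intensity onto a 0-3 scale using two reference values
        (hypothetical cut-offs for illustration only)."""
        if hotspot_suv < blood_suv:
            return 0  # uptake below the blood-pool reference
        elif hotspot_suv < liver_suv:
            return 1  # between blood pool and liver
        elif hotspot_suv < 2.0 * liver_suv:
            return 2  # up to twice the liver reference (assumed cut-off)
        return 3      # markedly above liver uptake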
In some embodiments, for at least one particular reference value associated with a particular reference tissue region, the instructions cause the processor to determine the particular reference value by: fitting the intensities of voxels within a particular reference volume corresponding to the particular reference tissue region [ e.g., by fitting a distribution of voxel intensities (e.g., a histogram of voxel intensities) ] to a multi-component mixture model (e.g., a two-component Gaussian model) [ e.g., and identifying one or more minor peaks in the voxel intensity distribution that correspond to voxels associated with abnormal uptake, and excluding those voxels from the reference value determination (e.g., thereby accounting for the effects of abnormally low radiopharmaceutical uptake in certain portions of the reference tissue region, such as portions of the liver) ].
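A sketch of such a reference-value computation follows, fitting a two-component Gaussian mixture (here via scikit-learn, an implementation choice not specified in the disclosure) and excluding voxels assigned to the minor component:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def liver_reference(liver_voxels: np.ndarray) -> float:
        """Fit a two-component Gaussian mixture to liver voxel intensities and
        compute a reference value from the dominant (highest-weight) component,
        excluding voxels assigned to the minor component (e.g., regions of
        abnormally low uptake)."""
        x = liver_voxels.reshape(-1, 1)
        gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
        dominant = int(np.argmax(gmm.weights_))
        keep = gmm.predict(x) == dominant
        return float(np.mean(x[keep]))  # a robust average could be substituted here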
In certain embodiments, the instructions cause the processor to calculate (e.g., automatically), using the determined lesion index values, an overall risk index indicative of the subject's cancer status and/or risk.
In certain embodiments, the instructions cause the processor to determine, for each hotspot (e.g., automatically), an anatomical classification corresponding to a particular anatomical region and/or group of anatomical regions within the subject in which the potential cancerous lesion represented by the hotspot is determined [ e.g., by the processor (e.g., based on the received and/or determined 3D segmentation map) ] to be located [ e.g., in the prostate, pelvic lymph nodes, non-pelvic lymph nodes, bones (e.g., bone metastatic regions), and soft tissue regions not located in the prostate or lymph nodes ].
In some embodiments, the instructions cause the processor to: (h) Causing a graphical representation of at least a portion of the one or more hotspots to be rendered for display within a Graphical User Interface (GUI) for viewing by a user.
In some embodiments, the instructions cause the processor to: (i) Receive, via the GUI, a user selection of a subset of the one or more hotspots identified, via user review, as likely to represent potential cancerous lesions within the subject.
In certain embodiments, the 3D functional image comprises a PET or SPECT image obtained after administration of an agent (e.g., a radiopharmaceutical; e.g., an imaging agent) to the subject. In certain embodiments, the agent comprises a PSMA binding agent. In certain embodiments, the agent comprises 18F. In certain embodiments, the agent comprises [18F]DCFPyL. In certain embodiments, the agent comprises PSMA-11 (e.g., 68Ga-PSMA-11). In certain embodiments, the agent comprises one or more members selected from the group consisting of 99mTc, 68Ga, 177Lu, 225Ac, 111In, 123I, 124I, and 131I.
In certain embodiments, the machine learning module implements a neural network [ e.g., an Artificial Neural Network (ANN); e.g., a Convolutional Neural Network (CNN) ].
In certain embodiments, the processor is a processor of a cloud-based system.
In another aspect, the invention relates to a system for automatically processing a 3D image of a subject to identify and/or characterize (e.g., rank) cancerous lesions within the subject, the system comprising: a processor of a computing device; and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to: (a) Receive (e.g., and/or access) a 3D functional image of the subject obtained using a functional imaging modality [ e.g., Positron Emission Tomography (PET); e.g., Single Photon Emission Computed Tomography (SPECT) ] [ e.g., wherein the 3D functional image comprises a plurality of voxels, each voxel representing a particular physical volume within the subject and having an intensity value representing detected radiation emitted from that physical volume ], wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within a target tissue region; (b) Receive (e.g., and/or access) a 3D anatomical image of the subject obtained using an anatomical imaging modality [ e.g., x-ray Computed Tomography (CT); e.g., Magnetic Resonance Imaging (MRI); e.g., ultrasound ], wherein the 3D anatomical image comprises a graphical representation of tissue (e.g., soft tissue and/or bone) within the subject; (c) Automatically detect, using a machine learning module, one or more hotspots within the 3D functional image, each hotspot corresponding to a localized region of increased intensity relative to its surroundings and representing (e.g., indicative of) a potential cancerous lesion within the subject, thereby establishing one or both of (i) and (ii) as follows: (i) a hotspot list identifying, for each hotspot, a location of the hotspot, and (ii) a 3D hotspot map identifying, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image { e.g., wherein the 3D hotspot map is a segmentation map (e.g., comprising one or more segmentation masks) identifying, for each hotspot, the voxels within the 3D functional image that correspond to the 3D hotspot volume of that hotspot [ e.g., wherein the 3D hotspot map is obtained via artificial-intelligence-based segmentation of the functional image (e.g., using a machine learning module that receives at least the 3D functional image as input and generates the 3D hotspot map as output, thereby segmenting the hotspots) ]; e.g., wherein the 3D hotspot map delineates, for each hotspot, a 3D boundary (e.g., an irregular boundary) of the hotspot (e.g., the 3D boundary enclosing the 3D hotspot volume, e.g., and distinguishing the voxels of the 3D functional image that make up the 3D hotspot volume from other voxels of the 3D functional image) }, wherein the machine learning module receives at least two input channels, including a first input channel corresponding to at least a portion of the 3D anatomical image and a second input channel corresponding to at least a portion of the 3D functional image [ e.g., wherein the machine learning module receives the PET image and the CT image as separate channels (e.g., separate channels representing the same volume), similar to the manner in which a machine learning module receives the color channels (e.g., RGB) of a photographic color image ], and/or anatomical information derived therefrom [ e.g., a 3D segmentation map identifying one or more volumes of interest (VOIs) within the 3D functional image, each VOI corresponding to a particular target tissue region and/or particular anatomical region ]; and (d) Store and/or provide the hotspot list and/or the 3D hotspot map for display and/or further processing.
In another aspect, the invention relates to a system for automatically processing a 3D image of a subject to identify and/or characterize (e.g., rank) cancerous lesions within the subject, the system comprising: a processor of a computing device; and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to: (a) Receive (e.g., and/or access) a 3D functional image of the subject obtained using a functional imaging modality [ e.g., Positron Emission Tomography (PET); e.g., Single Photon Emission Computed Tomography (SPECT) ] [ e.g., wherein the 3D functional image comprises a plurality of voxels, each voxel representing a particular physical volume within the subject and having an intensity value representing detected radiation emitted from that physical volume ], wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within a target tissue region; (b) Automatically detect one or more hotspots within the 3D functional image using a first machine learning module, each hotspot corresponding to a localized region of increased intensity relative to its surroundings and representing (e.g., indicative of) a potential cancerous lesion within the subject, thereby establishing a hotspot list identifying, for each hotspot, a location of the hotspot [ e.g., wherein the first machine learning module implements a machine learning algorithm (e.g., an Artificial Neural Network (ANN)) trained to determine, based on intensities of voxels of at least a portion of the 3D functional image, one or more locations (e.g., 3D coordinates), each corresponding to a location of one of the one or more hotspots ]; (c) Automatically determine, using a second machine learning module and the hotspot list, for each of the one or more hotspots, a corresponding 3D hotspot volume within the 3D functional image, thereby creating a 3D hotspot map [ e.g., wherein the second machine learning module implements a machine learning algorithm (e.g., an Artificial Neural Network (ANN)) trained to segment the 3D functional image, based at least in part on the hotspot list together with intensities of voxels of the 3D functional image, to identify the 3D hotspot volumes of the 3D hotspot map; e.g., wherein the second machine learning module implements a machine learning algorithm trained to determine, for each voxel of at least a portion of the 3D functional image, a hotspot likelihood value representing the likelihood that the voxel corresponds to a hotspot (e.g., and step (c) includes performing one or more subsequent processing steps, such as thresholding, using the hotspot likelihood values to identify the 3D hotspot volumes) ] [ e.g., wherein the 3D hotspot map is a segmentation map (e.g., comprising one or more segmentation masks) generated using (e.g., based on and/or corresponding to output from) the second machine learning module, identifying, for each hotspot, the voxels within the 3D functional image that correspond to the 3D hotspot volume of that hotspot; e.g., wherein the 3D hotspot map delineates, for each hotspot, a 3D boundary (e.g., an irregular boundary) of the hotspot (e.g., the 3D boundary enclosing the 3D hotspot volume, e.g., and distinguishing the voxels of the 3D functional image that make up the 3D hotspot volume from other voxels of the 3D functional image) ]; and (d) Store and/or provide the hotspot list and/or the 3D hotspot map for display and/or further processing.
In some embodiments, the instructions cause the processor to: (e) Determine, for each hotspot of at least a portion of the hotspots, a lesion likelihood classification corresponding to a likelihood that the hotspot represents a lesion within the subject.
In certain embodiments, at step (e), the instructions cause the processor to use a third machine learning module (e.g., a hotspot classification module) to determine the lesion likelihood classification for each hotspot [ e.g., based at least in part on one or more members selected from the group consisting of the intensities of the 3D functional image, the hotspot list, the 3D hotspot map, the intensities of a 3D anatomical image, and a 3D segmentation map; e.g., wherein the third machine learning module receives one or more input channels corresponding to one or more members selected from the group consisting of the intensities of the 3D functional image, the hotspot list, the 3D hotspot map, the intensities of a 3D anatomical image, and a 3D segmentation map ].
In some embodiments, the instructions cause the processor to: (f) Select, based at least in part on the lesion likelihood classifications of the hotspots, a subset of the one or more hotspots having a high likelihood of corresponding to cancerous lesions (e.g., for inclusion in a report; e.g., for calculating one or more risk index values for the subject).
In another aspect, the invention relates to a system for measuring intensity values within a reference volume corresponding to a reference tissue region (e.g., a liver volume corresponding to a liver of a subject) in a manner that avoids effects from portions of the tissue region associated with low (e.g., abnormally low) radiopharmaceutical uptake (e.g., due to a tumor lacking tracer uptake), the system comprising: a processor of a computing device; and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to: (a) Receive (e.g., and/or access) a 3D functional image of a subject obtained using a functional imaging modality [ e.g., Positron Emission Tomography (PET); e.g., Single Photon Emission Computed Tomography (SPECT) ] [ e.g., wherein the 3D functional image comprises a plurality of voxels, each voxel representing a particular physical volume within the subject and having an intensity value representing detected radiation emitted from that physical volume, and wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within a target tissue region ]; (b) Identify the reference volume within the 3D functional image; (c) Fit a multi-component mixture model (e.g., a two-component Gaussian mixture model) to intensities of voxels within the reference volume [ e.g., fitting the multi-component mixture model to a distribution (e.g., a histogram) of the intensities of voxels within the reference volume ]; (d) Identify a dominant mode of the multi-component mixture model; (e) Determine a measure of intensity (e.g., mean, maximum, mode, median, etc.) corresponding to the dominant mode, thereby determining a reference intensity value corresponding to a measure of the intensities of voxels that are (i) within the reference volume and (ii) associated with the dominant mode (e.g., excluding from the reference value calculation voxels having intensities associated with minor modes) (e.g., thereby avoiding effects from tissue regions associated with low radiopharmaceutical uptake); (f) Detect, within the 3D functional image, one or more hotspots corresponding to potential cancerous lesions; and (g) Determine, using at least the reference intensity value, a lesion index value for each hotspot of at least a portion of the detected hotspots [ e.g., the lesion index value being based on (i) a measure of the intensities of voxels corresponding to the detected hotspot and (ii) the reference intensity value ].
In another aspect, the invention relates to a system for correcting intensity bleed (e.g., crosstalk) from a high-uptake tissue region within a subject, i.e., a region associated with normally high radiopharmaceutical uptake (e.g., not necessarily indicative of cancer), the system comprising: a processor of a computing device; and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to: (a) Receive (e.g., and/or access) a 3D functional image of a subject obtained using a functional imaging modality [ e.g., Positron Emission Tomography (PET); e.g., Single Photon Emission Computed Tomography (SPECT) ] [ e.g., wherein the 3D functional image comprises a plurality of voxels, each voxel representing a particular physical volume within the subject and having an intensity value representing detected radiation emitted from that physical volume, and wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within a target tissue region ]; (b) Identify a high-intensity volume within the 3D functional image, the high-intensity volume corresponding to a particular high-uptake tissue region (e.g., a kidney; e.g., a liver; e.g., a bladder) in which high radiopharmaceutical uptake occurs under normal circumstances; (c) Identify a suppression volume within the 3D functional image based on the identified high-intensity volume, the suppression volume corresponding to a volume lying outside a boundary of the identified high-intensity volume and within a predetermined decay distance from that boundary; (d) Determine a background image corresponding to the 3D functional image, wherein intensities of voxels within the high-intensity volume are replaced with interpolated values determined based on intensities of voxels of the 3D functional image within the suppression volume; (e) Determine an estimation image by subtracting intensities of voxels of the background image from intensities of voxels of the 3D functional image (e.g., performing a voxel-wise subtraction); (f) Determine a suppression map by: extrapolating intensities of voxels of the estimation image corresponding to the high-intensity volume to locations of voxels within the suppression volume, thereby determining intensities of voxels of the suppression map corresponding to the suppression volume; and setting intensities of voxels of the suppression map corresponding to locations outside the suppression volume to zero; and (g) Adjust the intensities of voxels of the 3D functional image based on the suppression map (e.g., by subtracting the intensities of voxels of the suppression map from the intensities of voxels of the 3D functional image), thereby correcting intensity bleed from the high-intensity volume.
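A simplified, single-pass sketch of this suppression scheme follows; the exponential decay model and the crude background interpolation are assumptions made for illustration (the disclosure does not fix either), and applying the function once per organ gives the sequential correction contemplated in certain embodiments:

    import numpy as np
    from scipy import ndimage

    def correct_bleed(img: np.ndarray, organ_mask: np.ndarray,
                      decay_dist: float = 5.0) -> np.ndarray:
        """One pass of intensity-bleed correction for a single high-intensity
        volume (hypothetical parameter choices; illustrative only)."""
        # Distance (in voxels) from every voxel to the organ.
        dist = ndimage.distance_transform_edt(~organ_mask)
        # Suppression volume: shell outside the organ within the decay distance.
        shell = (dist > 0) & (dist <= decay_dist)
        # Background image: organ intensities replaced by an interpolated value
        # (here, the mean shell intensity, as a crude stand-in).
        background = img.copy()
        background[organ_mask] = img[shell].mean()
        # Estimation image: residual organ signal above the background.
        estimate = img - background
        residual = estimate[organ_mask].mean()
        # Suppression map: extrapolate the residual into the shell with an
        # assumed exponential decay; zero outside the suppression volume.
        suppression = np.zeros_like(img)
        suppression[shell] = residual * np.exp(-dist[shell] / decay_dist)
        # Adjusted image: subtract the estimated bleed, clamping at zero.
        return np.maximum(img - suppression, 0.0)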
In certain embodiments, the instructions cause the processor to perform steps (b) through (g) for each of a plurality of high-intensity volumes in a sequential manner, thereby correcting intensity bleed from each of the plurality of high-intensity volumes.
In certain embodiments, the plurality of high intensity volumes comprises one or more members selected from the group consisting of kidneys, liver, and bladder (e.g., urinary bladder).
In another aspect, the invention relates to a system for automatically processing a 3D image of a subject to identify and/or characterize (e.g., rank) cancerous lesions within the subject, the system comprising: a processor of a computing device; and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to: (a) Receive (e.g., and/or access) a 3D functional image of the subject obtained using a functional imaging modality [ e.g., Positron Emission Tomography (PET); e.g., Single Photon Emission Computed Tomography (SPECT) ] [ e.g., wherein the 3D functional image comprises a plurality of voxels, each voxel representing a particular physical volume within the subject and having an intensity value representing detected radiation emitted from that physical volume ], wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within a target tissue region; (b) Automatically detect one or more hotspots within the 3D functional image, each hotspot corresponding to a localized region of increased intensity relative to its surroundings and representing (e.g., indicative of) a potential cancerous lesion within the subject; (c) Cause a graphical representation of the one or more hotspots to be rendered for display within an interactive Graphical User Interface (GUI) (e.g., a quality-control and reporting GUI); (d) Receive, via the interactive GUI, a user selection of a final set of hotspots (e.g., for inclusion in a report) comprising at least a portion (up to all) of the one or more automatically detected hotspots; and (e) Store and/or provide the final set of hotspots for display and/or further processing.
In some embodiments, the instructions cause the processor to: (f) Receive, via the GUI, a user selection of one or more additional, user-identified hotspots for inclusion in the final set of hotspots; and (g) Update the final set of hotspots to include the one or more additional, user-identified hotspots.
In certain embodiments, in step (b), the instructions cause the processor to use one or more machine learning modules.
In another aspect, the invention relates to a method for automatically processing a 3D image of a subject to identify and/or characterize (e.g., rank) cancerous lesions within the subject, the method comprising: (a) Receiving (e.g., and/or accessing), by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality [ e.g., Positron Emission Tomography (PET); e.g., Single Photon Emission Computed Tomography (SPECT) ]; (b) Receiving (e.g., and/or accessing), by the processor, a 3D anatomical image of the subject obtained using an anatomical imaging modality [ e.g., a Computed Tomography (CT) image; e.g., a Magnetic Resonance (MR) image ]; (c) Receiving (e.g., and/or accessing), by the processor, a 3D segmentation map identifying one or more particular tissue regions or groups of tissue regions within the 3D functional image and/or within the 3D anatomical image (e.g., a set of tissue regions corresponding to a particular anatomical region; e.g., a group of tissue regions comprising organs in which high or low radiopharmaceutical uptake occurs); (d) Automatically detecting and/or segmenting, by the processor, using one or more machine learning modules, a set of one or more hotspots within the 3D functional image, each hotspot corresponding to a localized region of increased intensity relative to its surroundings and representing a potential cancerous lesion within the subject, thereby establishing one or both of (i) and (ii) as follows: (i) a hotspot list identifying, for each hotspot, a location of the hotspot [ e.g., as detected by the one or more machine learning modules ], and (ii) a 3D hotspot map identifying, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image [ e.g., as determined via segmentation performed by the one or more machine learning modules ] [ e.g., wherein the 3D hotspot map is a segmentation map delineating, for each hotspot, a 3D hotspot boundary (e.g., an irregular boundary) of the hotspot (e.g., the 3D boundary enclosing the 3D hotspot volume) ], wherein at least one (e.g., up to all) of the one or more machine learning modules receives as input (i) the 3D functional image, (ii) the 3D anatomical image, and (iii) the 3D segmentation map; and (e) Storing and/or providing the hotspot list and/or the 3D hotspot map for display and/or further processing.
In certain embodiments, the method comprises: receiving, by the processor, an initial 3D segmentation map that identifies one or more (e.g., a plurality of) specific tissue regions (e.g., organs and/or specific bones) within the 3D anatomical image and/or the 3D functional image; and identifying, by the processor, at least a portion of the one or more particular tissue regions as belonging to a particular one of one or more tissue groups (e.g., predefined groups) and updating, by the processor, the 3D segmentation map to indicate that the identified particular region belongs to the particular tissue group; and using, by the processor, the updated 3D segmentation map as input to at least one of the one or more machine learning modules.
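For illustration, such a tissue-group update of a 3D segmentation map might be implemented as a simple label remapping; the integer label values and group assignments below are hypothetical:

    import numpy as np

    SOFT_TISSUE, BONE, HIGH_UPTAKE = 1, 2, 3
    GROUP_OF_LABEL = {          # hypothetical fine-grained label -> tissue group
        10: SOFT_TISSUE,        # e.g., prostate
        11: SOFT_TISSUE,        # e.g., lymph-node region
        20: BONE,               # e.g., vertebrae
        21: BONE,               # e.g., pelvis
        30: HIGH_UPTAKE,        # e.g., liver
        31: HIGH_UPTAKE,        # e.g., urinary bladder
    }

    def to_tissue_groups(seg_map: np.ndarray) -> np.ndarray:
        """Collapse a fine-grained segmentation map into tissue groups, e.g.,
        for use as an input channel to a machine learning module."""
        grouped = np.zeros_like(seg_map)
        for label, group in GROUP_OF_LABEL.items():
            grouped[seg_map == label] = group
        return grouped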
In certain embodiments, the one or more tissue groups comprise a soft tissue group such that a particular tissue region representing soft tissue is identified as belonging to the soft tissue group. In certain embodiments, the one or more tissue groups comprise a bone tissue group such that a particular tissue region representing bone is identified as belonging to the bone tissue group. In certain embodiments, the one or more tissue groups include a high uptake group of organs such that one or more organs associated with high radiopharmaceutical uptake (e.g., under normal conditions, and not necessarily due to the presence of a lesion) are identified as belonging to the high uptake group.
In certain embodiments, the method comprises determining, by the processor, for each detected and/or segmented hotspot, a classification of the hotspot [ e.g., classifying the hotspot as bone, lymph, or prostate according to its anatomical location; e.g., assigning an alphanumeric code, such as under the labeling scheme in Table 1, based on the (e.g., processor-determined) location of the hotspot in the subject ].
In certain embodiments, the method includes using at least one of the one or more machine learning modules to determine the classification of the hot spot for each detected and/or segmented lesion (e.g., wherein a single machine learning module performs detection, segmentation, and classification).
In certain embodiments, the one or more machine learning modules include: (A) a whole-body lesion detection module that detects and/or segments hotspots throughout the body; and (B) a prostate lesion module that detects and/or segments hotspots within the prostate. In certain embodiments, the method includes generating a hotspot list and/or map using each of (A) and (B) and merging the results.
In certain embodiments, step (d) comprises segmenting and classifying the set of one or more hotspots to create a labeled 3D hotspot map identifying, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image, wherein each hotspot volume is labeled as belonging to a particular hotspot category of a plurality of hotspot categories [ e.g., each hotspot category identifying a particular anatomical and/or tissue region (e.g., lymph, bone, prostate) in which the lesion represented by the hotspot is determined to be located ], by: using a first machine learning module to segment a first initial set of one or more hotspots within the 3D functional image, thereby creating a first initial 3D hotspot map identifying a first initial set of hotspot volumes, wherein the first machine learning module segments hotspots of the 3D functional image according to a single hotspot category [ e.g., identifying all hotspots as belonging to a single hotspot category, so as to distinguish background regions from hotspot volumes (e.g., but not different types of hotspots from one another) (e.g., such that each hotspot volume identified by the first initial 3D hotspot map is labeled as belonging to a single hotspot category, as opposed to background) ]; using a second machine learning module to segment a second initial set of one or more hotspots within the 3D functional image, thereby creating a second initial 3D hotspot map identifying a second initial set of hotspot volumes, wherein the second machine learning module segments the 3D functional image according to a plurality of different hotspot categories, such that the second initial 3D hotspot map is a multi-category 3D hotspot map in which each hotspot volume is labeled as belonging to a particular one of the plurality of different hotspot categories (e.g., so as to distinguish hotspot volumes corresponding to different hotspot categories from one another, and hotspot volumes from background regions); and merging, by the processor, the first initial 3D hotspot map and the second initial 3D hotspot map by, for each particular hotspot volume of at least a portion of the hotspot volumes identified by the first initial 3D hotspot map: identifying a matching hotspot volume of the second initial 3D hotspot map (e.g., by identifying substantially overlapping hotspot volumes of the first and second initial 3D hotspot maps), the matching hotspot volume having been labeled as belonging to a particular hotspot category of the plurality of different hotspot categories; and labeling the particular hotspot volume of the first initial 3D hotspot map as belonging to that particular hotspot category, thereby creating a merged 3D hotspot map comprising the segmented hotspot volumes of the first initial 3D hotspot map, labeled according to the categories to which the matching hotspot volumes of the second initial 3D hotspot map are identified as belonging; and step (e) comprises storing and/or providing the merged 3D hotspot map for display and/or further processing.
In certain embodiments, the plurality of different hotspot categories comprises one or more members selected from the group consisting of: (i) A bone hotspot determined (e.g., by the second machine learning module) to represent a lesion located in a bone; (ii) A lymphatic hotspot determined (e.g., by the second machine learning module) to represent a lesion located in a lymph node; and (iii) a prostate hotspot determined (e.g., by the second machine learning module) to represent a lesion located in the prostate.
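The merging step described above can be sketched as follows, under hypothetical conventions (integer category labels, majority-overlap matching, and a zero fallback label for hotspot volumes with no overlapping match):

    import numpy as np
    from scipy import ndimage

    def merge_hotspot_maps(single_cat_map: np.ndarray,
                           multi_cat_map: np.ndarray) -> np.ndarray:
        """Label each hotspot volume of the single-category map with the
        category of the overlapping hotspot in the multi-category map."""
        labeled, n = ndimage.label(single_cat_map > 0)
        merged = np.zeros_like(multi_cat_map)
        for i in range(1, n + 1):
            vol = labeled == i
            cats = multi_cat_map[vol]
            cats = cats[cats > 0]
            # Majority vote over overlapping multi-category voxels; 0 if none.
            merged[vol] = np.bincount(cats).argmax() if cats.size else 0
        return merged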
In certain embodiments, the method further comprises: (f) receiving and/or accessing the hotspot list; and (g) for each hotspot in the hotspot list, segmenting the hotspot using an analytical model [ e.g., thereby establishing a 3D map of analytically segmented hotspots (e.g., the 3D map identifying, for each hotspot, a hotspot volume comprising the voxels of the 3D anatomical and/or functional image enclosed by the segmented hotspot boundary) ].
In certain embodiments, the method further comprises: (h) receiving and/or accessing the 3D hotspot map; and (i) for each hotspot in the 3D hotspot map, segmenting the hotspot using an analytical model [ e.g., thereby establishing a 3D map of analytically segmented hotspots (e.g., the 3D map identifying, for each hotspot, a hotspot volume comprising the voxels of the 3D anatomical and/or functional image enclosed by the segmented hotspot boundary) ].
In certain embodiments, the analytical model is an adaptive thresholding method, and step (i) comprises: determining one or more reference values, each reference value being based on a measure of the intensities of voxels of the 3D functional image lying within a particular reference volume corresponding to a particular reference tissue region (e.g., a blood-pool reference value determined based on intensities within an aortic volume corresponding to a portion of the subject's aorta; e.g., a liver reference value determined based on intensities within a liver volume corresponding to the subject's liver); and, for each particular hotspot volume of the 3D hotspot map: determining, by the processor, a corresponding hotspot intensity based on the intensities of voxels within the particular hotspot volume [ e.g., wherein the hotspot intensity is a maximum of the intensities (e.g., SUVs) of voxels within the particular hotspot volume ]; and determining, by the processor, a hotspot-specific threshold for the particular hotspot based on at least one of (i) the corresponding hotspot intensity and (ii) the one or more reference values.
In certain embodiments, the hotspot-specific threshold is determined using a specific threshold function selected from a plurality of threshold functions, the specific threshold function being selected based on a comparison of the corresponding hotspot intensity to the at least one reference value [ e.g., wherein each of the plurality of threshold functions is associated with a specific range of intensity (e.g., SUV) values, and the specific threshold function is selected according to the specific range in which the hotspot intensity and/or its (e.g., predetermined) percentage falls (e.g., and wherein each specific range of intensity values is bounded at least in part by a multiple of the at least one reference value) ].
In certain embodiments, the hotspot-specific threshold is determined (e.g., by the specific threshold function) as a variable percentage of the corresponding hotspot intensity, wherein the variable percentage decreases as hotspot intensity increases [ e.g., wherein the variable percentage is itself a function (e.g., a decreasing function) of the corresponding hotspot intensity ].
In another aspect, the invention relates to a method for automatically processing a 3D image of a subject to identify and/or characterize (e.g., rank; e.g., classify, e.g., as representing a particular lesion type) cancerous lesions within the subject, the method comprising: (a) Receiving (e.g., and/or accessing), by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality [ e.g., Positron Emission Tomography (PET); e.g., Single Photon Emission Computed Tomography (SPECT) ]; (b) Automatically segmenting, by the processor, using a first machine learning module, a first initial set of one or more hotspots within the 3D functional image, thereby creating a first initial 3D hotspot map identifying a first initial set of hotspot volumes, wherein the first machine learning module segments hotspots of the 3D functional image according to a single hotspot category [ e.g., identifying all hotspots as belonging to a single hotspot category, so as to distinguish background regions from hotspot volumes (e.g., but not different types of hotspots from one another) (e.g., such that each hotspot volume identified by the first initial 3D hotspot map is labeled as belonging to a single hotspot category, as opposed to background) ]; (c) Automatically segmenting, by the processor, using a second machine learning module, a second initial set of one or more hotspots within the 3D functional image, thereby creating a second initial 3D hotspot map identifying a second initial set of hotspot volumes, wherein the second machine learning module segments the 3D functional image according to a plurality of different hotspot categories [ e.g., each hotspot category identifying a particular anatomical and/or tissue region (e.g., lymph, bone, prostate) in which the lesion represented by the hotspot is determined to be located ], such that the second initial 3D hotspot map is a multi-category 3D hotspot map in which each hotspot volume is labeled as belonging to a particular one of the plurality of different hotspot categories (e.g., so as to distinguish hotspot volumes corresponding to different hotspot categories from one another, and hotspot volumes from background regions); (d) Merging, by the processor, the first initial 3D hotspot map and the second initial 3D hotspot map by, for each particular hotspot volume of at least a portion of the first initial set of hotspot volumes identified by the first initial 3D hotspot map: identifying a matching hotspot volume of the second initial 3D hotspot map (e.g., by identifying substantially overlapping hotspot volumes of the first and second initial 3D hotspot maps), the matching hotspot volume having been labeled as belonging to a particular hotspot category of the plurality of different hotspot categories; and labeling the particular hotspot volume of the first initial 3D hotspot map as belonging to that particular hotspot category, thereby creating a merged 3D hotspot map comprising the segmented hotspot volumes of the first initial 3D hotspot map, labeled according to the categories to which the matching hotspot volumes of the second initial 3D hotspot map are identified as belonging; and (e) Storing and/or providing the merged 3D hotspot map for display and/or further processing.
In certain embodiments, the plurality of different hotspot categories comprises one or more members selected from the group consisting of: (i) A bone hotspot determined (e.g., by the second machine learning module) to represent a lesion located in a bone; (ii) A lymphatic hotspot determined (e.g., by the second machine learning module) to represent a lesion located in a lymph node; and (iii) a prostate hotspot determined (e.g., by the second machine learning module) to represent a lesion located in the prostate.
In another aspect, the present invention relates to a method for automatically processing a 3D image of a subject via an adaptive thresholding method to identify and/or characterize (e.g., rank; e.g., classify, e.g., as representing a particular lesion type) cancerous lesions within the subject, the method comprising: (a) Receiving (e.g., and/or accessing), by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality [ e.g., Positron Emission Tomography (PET); e.g., Single Photon Emission Computed Tomography (SPECT) ]; (b) Receiving (e.g., and/or accessing), by the processor, a preliminary 3D hotspot map identifying one or more preliminary hotspot volumes within the 3D functional image; (c) Determining, by the processor, one or more reference values, each reference value being based on a measure of the intensities of voxels of the 3D functional image lying within a particular reference volume corresponding to a particular reference tissue region (e.g., a blood-pool reference value determined based on intensities within an aortic volume corresponding to a portion of the subject's aorta; e.g., a liver reference value determined based on intensities within a liver volume corresponding to the subject's liver); (d) Establishing, by the processor, a refined 3D hotspot map by, for each particular preliminary hotspot volume of at least a portion of the one or more preliminary hotspot volumes identified by the preliminary 3D hotspot map, using adaptive-threshold-based segmentation: determining a corresponding hotspot intensity based on the intensities of voxels within the particular preliminary hotspot volume [ e.g., wherein the hotspot intensity is a maximum of the intensities (e.g., SUVs) of voxels within the particular preliminary hotspot volume ]; determining a hotspot-specific threshold for the particular preliminary hotspot volume based on at least one of (i) the corresponding hotspot intensity and (ii) the one or more reference values; segmenting at least a portion of the 3D functional image (e.g., a sub-volume about the particular preliminary hotspot volume) using a threshold-based segmentation algorithm that performs image segmentation using the hotspot-specific threshold determined for the particular preliminary hotspot volume [ e.g., identifying a cluster of voxels having intensities above the hotspot-specific threshold and including the maximum-intensity voxel of the preliminary hotspot (e.g., a 3D cluster of voxels connected to one another in an n-connected-component sense (e.g., where n = 6, n = 18, etc.)) ], thereby determining a refined, analytically segmented hotspot volume corresponding to the particular preliminary hotspot volume; and including the refined hotspot volume in the refined 3D hotspot map; and (e) Storing and/or providing the refined 3D hotspot map for display and/or further processing.
In certain embodiments, the hotspot-specific threshold is determined using a specific threshold function selected from a plurality of threshold functions, the specific threshold function being selected based on a comparison of the corresponding hotspot intensity to the at least one reference value [ e.g., wherein each of the plurality of threshold functions is associated with a specific range of intensity (e.g., SUV) values, and the specific threshold function is selected according to the specific range in which the hotspot intensity and/or its (e.g., predetermined) percentage falls (e.g., and wherein each specific range of intensity values is bounded at least in part by a multiple of the at least one reference value) ].
In certain embodiments, the hotspot-specific threshold is determined (e.g., by the specific threshold function) as a variable percentage of the corresponding hotspot intensity, wherein the variable percentage decreases as hotspot intensity increases [ e.g., wherein the variable percentage is itself a function (e.g., a decreasing function) of the corresponding hotspot intensity ].
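A compact sketch of such adaptive-threshold refinement for a single preliminary hotspot volume follows; the particular percentages and the liver-referenced intensity ranges are illustrative assumptions:

    import numpy as np
    from scipy import ndimage

    def refine_hotspot(suv: np.ndarray, prelim_mask: np.ndarray,
                       liver_ref: float) -> np.ndarray:
        """Segment one hotspot with a hotspot-specific threshold: a variable
        percentage of the peak SUV that decreases as intensity increases."""
        peak = suv[prelim_mask].max()
        if peak < liver_ref:            # ranges bounded by multiples of the
            frac = 0.50                 # liver reference (assumed values)
        elif peak < 2.0 * liver_ref:
            frac = 0.45
        else:
            frac = 0.40
        threshold = frac * peak
        # Keep the connected above-threshold cluster containing the peak voxel.
        labeled, _ = ndimage.label(suv > threshold)
        peak_idx = np.unravel_index(
            np.argmax(np.where(prelim_mask, suv, -np.inf)), suv.shape)
        return labeled == labeled[peak_idx]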
In another aspect, the invention relates to a method for automatically processing a 3D image of a subject to identify and/or characterize (e.g., rank; e.g., classify, e.g., as representing a particular lesion type) cancerous lesions within the subject, the method comprising: (a) Receiving (e.g., and/or accessing), by a processor of a computing device, a 3D anatomical image of the subject obtained using an anatomical imaging modality [ e.g., x-ray Computed Tomography (CT); e.g., Magnetic Resonance Imaging (MRI); e.g., ultrasound ], wherein the 3D anatomical image comprises a graphical representation of tissue (e.g., soft tissue and/or bone) within the subject; (b) Automatically segmenting, by the processor, the 3D anatomical image to create a 3D segmentation map identifying a plurality of volumes of interest (VOIs) in the 3D anatomical image, including a liver volume corresponding to a liver of the subject and an aortic volume corresponding to an aortic portion (e.g., a thoracic and/or abdominal portion of the aorta); (c) Receiving (e.g., and/or accessing), by the processor, a 3D functional image of the subject obtained using a functional imaging modality [ e.g., Positron Emission Tomography (PET); e.g., Single Photon Emission Computed Tomography (SPECT) ] [ e.g., wherein the 3D functional image comprises a plurality of voxels, each voxel representing a particular physical volume within the subject and having an intensity value representing detected radiation emitted from that physical volume ], wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within a target tissue region; (d) Automatically segmenting, by the processor, one or more hotspots within the 3D functional image, each segmented hotspot corresponding to a localized region of increased intensity relative to its surroundings and representing (e.g., indicative of) a potential cancerous lesion within the subject, thereby identifying one or more automatically segmented hotspot volumes; (e) Causing, by the processor, a graphical representation of the one or more automatically segmented hotspot volumes to be rendered for display within an interactive Graphical User Interface (GUI) (e.g., a quality-control and reporting GUI); (f) Receiving, by the processor, via the interactive GUI, a user selection of a final set of hotspots comprising at least a portion (e.g., up to all) of the one or more automatically segmented hotspot volumes; (g) Determining, by the processor, for each hotspot volume of the final set, a lesion index value based on (i) the intensities of voxels of the functional image corresponding to (e.g., located within) the hotspot volume and (ii) one or more reference values determined using the intensities of voxels of the functional image corresponding to the liver volume and the aortic volume; and (h) Storing and/or providing the final set of hotspots and/or the lesion index values for display and/or further processing.
In certain embodiments, step (b) comprises segmenting the anatomical image such that the 3D segmentation map identifies one or more bone volumes corresponding to one or more bones of the subject, and step (d) comprises identifying a bone volume within the functional image using the one or more bone volumes and segmenting one or more bone hotspot volumes located within the bone volume (e.g., by applying one or more difference-of-Gaussians filters and thresholding within the bone volume).
In certain embodiments, step (b) comprises segmenting the anatomical image such that the 3D segmentation map identifies one or more organ volumes corresponding to soft-tissue organs of the subject [ e.g., left/right lung, left/right gluteus maximus, urinary bladder, liver, left/right kidney, gallbladder, spleen, thoracic and abdominal aorta, and, optionally (e.g., for patients who have not undergone radical prostatectomy), prostate ], and step (d) comprises identifying one or more soft-tissue (e.g., lymph and, optionally, prostate) volumes within the functional image using the one or more segmented organ volumes and segmenting one or more lymph and/or prostate hotspot volumes located within the soft-tissue volumes (e.g., by applying one or more Laplacian-of-Gaussian filters and thresholding within the soft-tissue volumes).
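For illustration, the blob-enhancement filtering mentioned above might be sketched with a difference-of-Gaussians stand-in, thresholded within a segmented region (e.g., bone); the sigma and cutoff values are assumptions:

    import numpy as np
    from scipy import ndimage

    def blob_candidates(suv: np.ndarray, region_mask: np.ndarray,
                        sigma1: float = 1.0, sigma2: float = 2.0,
                        cutoff: float = 0.1) -> np.ndarray:
        """Enhance blob-like structures and threshold, restricted to a region."""
        dog = ndimage.gaussian_filter(suv, sigma1) - ndimage.gaussian_filter(suv, sigma2)
        return (dog > cutoff) & region_mask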
In certain embodiments, step (d) further comprises adjusting the intensity of the functional image to suppress intensity from one or more high uptake tissue regions (e.g., using one or more suppression methods described herein) prior to segmenting the one or more lymph and/or prostate hotspot volumes.
In certain embodiments, step (g) comprises determining a liver reference value using intensities of voxels of the functional image corresponding to the liver volume.
In certain embodiments, the method includes fitting a two-component Gaussian mixture model to a histogram of the intensities of functional-image voxels corresponding to the liver volume, using the fitted two-component Gaussian mixture model to identify and exclude voxels of the liver volume having intensities associated with regions of abnormally low uptake, and using the intensities of the remaining (e.g., non-excluded) voxels to determine the liver reference value.
In another aspect, the invention relates to a system for automatically processing a 3D image of a subject to identify and/or characterize (e.g., rank; e.g., classify, e.g., as representing a particular lesion type) cancerous lesions within the subject, the system comprising: a processor of a computing device; and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to: (a) Receive (e.g., and/or access) a 3D functional image of the subject obtained using a functional imaging modality [ e.g., Positron Emission Tomography (PET); e.g., Single Photon Emission Computed Tomography (SPECT) ]; (b) Receive (e.g., and/or access) a 3D anatomical image of the subject obtained using an anatomical imaging modality [ e.g., a Computed Tomography (CT) image; e.g., a Magnetic Resonance (MR) image ]; (c) Receive (e.g., and/or access) a 3D segmentation map identifying one or more particular tissue regions or groups of tissue regions within the 3D functional image and/or within the 3D anatomical image (e.g., a set of tissue regions corresponding to a particular anatomical region; e.g., a group of tissue regions comprising organs in which high or low radiopharmaceutical uptake occurs); (d) Automatically detect and/or segment, using one or more machine learning modules, a set of one or more hotspots within the 3D functional image, each hotspot corresponding to a localized region of increased intensity relative to its surroundings and representing a potential cancerous lesion within the subject, thereby establishing one or both of (i) and (ii) as follows: (i) a hotspot list identifying, for each hotspot, a location of the hotspot [ e.g., as detected by the one or more machine learning modules ], and (ii) a 3D hotspot map identifying, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image [ e.g., as determined via segmentation performed by the one or more machine learning modules ] [ e.g., wherein the 3D hotspot map is a segmentation map delineating, for each hotspot, a 3D hotspot boundary (e.g., an irregular boundary) of the hotspot (e.g., the 3D boundary enclosing the 3D hotspot volume) ], wherein at least one (e.g., up to all) of the one or more machine learning modules receives as input (i) the 3D functional image, (ii) the 3D anatomical image, and (iii) the 3D segmentation map; and (e) Store and/or provide the hotspot list and/or the 3D hotspot map for display and/or further processing.
In some embodiments, the instructions cause the processor to: receive an initial 3D segmentation map identifying one or more (e.g., a plurality of) particular tissue regions (e.g., organs and/or particular bones) within the 3D anatomical image and/or the 3D functional image; identify at least a portion of the one or more particular tissue regions as belonging to particular ones of one or more tissue groups (e.g., predefined groups) and update the 3D segmentation map to indicate that the identified particular regions belong to the particular tissue groups; and use the updated 3D segmentation map as input to at least one of the one or more machine learning modules.
In certain embodiments, the one or more tissue groups comprise a soft tissue group such that a particular tissue region representing soft tissue is identified as belonging to the soft tissue group. In certain embodiments, the one or more tissue groups comprise a bone tissue group such that a particular tissue region representing bone is identified as belonging to the bone tissue group. In certain embodiments, the one or more tissue groups include a high uptake group of organs such that one or more organs associated with high radiopharmaceutical uptake (e.g., under normal conditions, and not necessarily due to the presence of a lesion) are identified as belonging to the high uptake group.
In certain embodiments, the instructions cause the processor to determine a classification of the hotspot for each detected and/or segmented hotspot [ e.g., classifying the lesion as bone, lymph, or prostate according to anatomical location; e.g., assigning an alphanumeric code, such as per the labeling scheme of Table 1, based on the determined (e.g., by the processor) location of the hotspot relative to the subject ].
In certain embodiments, the instructions cause the processor to use at least one of the one or more machine learning modules to determine the classification of the hotspot for each detected and/or segmented hotspot (e.g., wherein a single machine learning module performs detection, segmentation, and classification).
In certain embodiments, the one or more machine learning modules include: (A) a whole-body lesion detection module that detects and/or segments hotspots throughout the body; and (B) a prostate lesion module that detects and/or segments hotspots within the prostate. In certain embodiments, the instructions cause the processor to generate the hotspot list and/or map using each of (A) and (B) and to merge the results.
In certain embodiments, at step (d), the instructions cause the processor to segment and classify the set of one or more hotspots to create a labeled 3D hotspot graph that identifies, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image, and wherein each hotspot is labeled as belonging to a particular hotspot category of a plurality of hotspot categories [ e.g., each hotspot category identifying a particular anatomical and/or tissue region (e.g., lymph, bone, prostate) in which the lesion represented by the hotspot is determined to be located ], by: using a first machine learning module to segment a first initial set of one or more hotspots within the 3D functional image to create a first initial 3D hotspot graph identifying a first initial set of hotspot volumes, wherein the first machine learning module segments the hotspots of the 3D functional image according to a single hotspot category [ e.g., identifying all hotspots as belonging to a single hotspot category so as to distinguish background regions from hotspot volumes (e.g., but not different types of hotspots) (e.g., such that each hotspot volume identified by the first 3D hotspot graph is labeled as belonging to a single hotspot category, as opposed to background) ]; using a second machine learning module to segment a second initial set of one or more hotspots within the 3D functional image to create a second initial 3D hotspot graph identifying a second initial set of hotspot volumes, wherein the second machine learning module segments the 3D functional image according to a plurality of different hotspot categories such that the second initial 3D hotspot graph is a multi-category 3D hotspot graph, wherein each hotspot volume is labeled as belonging to a particular one of the plurality of different hotspot categories (e.g., so as to distinguish between hotspot volumes corresponding to different hotspot categories, and between hotspot volumes and background regions); and merging the first initial 3D hotspot graph and the second initial 3D hotspot graph by, for at least a portion of the hotspot volumes identified by the first initial 3D hotspot graph: identifying a matching hotspot volume of the second initial 3D hotspot graph (e.g., by identifying substantially overlapping hotspot volumes of the first and second initial 3D hotspot graphs), the matching hotspot volume of the second 3D hotspot graph having been labeled as belonging to a particular hotspot category of the plurality of different hotspot categories; and labeling the particular hotspot volume of the first initial 3D hotspot graph as belonging to the particular hotspot category (to which the matching hotspot volume is labeled), thereby creating a merged 3D hotspot graph comprising the segmented hotspot volumes of the first 3D hotspot graph, labeled according to the categories to which the matching hotspots of the second 3D hotspot graph were identified as belonging; and, at step (e), the instructions cause the processor to store and/or provide the merged 3D hotspot graph for display and/or further processing.
In certain embodiments, the plurality of different hotspot categories comprises one or more members selected from the group consisting of: (i) A bone hotspot determined (e.g., by the second machine learning module) to represent a lesion located in a bone; (ii) A lymphatic hotspot determined (e.g., by the second machine learning module) to represent a lesion located in a lymph node; and (iii) a prostate hotspot determined (e.g., by the second machine learning module) to represent a lesion located in the prostate.
In certain embodiments, the instructions further cause the processor to: (f) receiving and/or accessing the hotspot list; and (g) for each hotspot in the hotspot list, segmenting the hotspot using an analytical model [ e.g., thereby establishing a 3D map of analytically segmented hotspots (e.g., the 3D map identifying, for each hotspot, a hotspot volume comprising the voxels of the 3D anatomical and/or functional image enclosed by the segmented hotspot boundary) ].
In certain embodiments, the instructions further cause the processor to: (h) receiving and/or accessing the hotspot graph; and (i) for each hotspot in the hotspot graph, segmenting the hotspot using an analytical model [ e.g., thereby establishing a 3D map of analytically segmented hotspots (e.g., the 3D map identifying, for each hotspot, a hotspot volume comprising the voxels of the 3D anatomical and/or functional image enclosed by the segmented hotspot boundary) ].
In certain embodiments, the analytical model is an adaptive thresholding method, and in step (i), the instructions cause the processor to: determining one or more reference values, each reference value being based on a measure of intensity of voxels of the 3D functional image that lie within a particular reference volume corresponding to a particular reference tissue region (e.g., a blood pool reference value is determined based on intensity within an aortic volume corresponding to a portion of the subject's aorta; e.g., a liver reference value is determined based on intensity within a liver volume corresponding to the subject's liver); and for each particular hotspot volume of the 3D hotspot graph: determining a corresponding hotspot intensity based on intensities of voxels within the particular hotspot volume [ e.g., wherein the hotspot intensity is a maximum of intensities of voxels within the particular hotspot volume (e.g., representing an SUV) ]; and determining a hotspot-specific threshold for the particular hotspot based on at least one of (i) the corresponding hotspot intensity and (ii) the one or more reference values.
In certain embodiments, the hotspot-specific threshold is determined using a specific threshold function selected from a plurality of threshold functions, the specific threshold function being selected based on a comparison of the corresponding hotspot intensity to the at least one reference value [ e.g., wherein each of the plurality of threshold functions is associated with a specific range of intensity (e.g., SUV) values, and the specific threshold function is selected according to the specific range in which the hotspot intensity and/or its (e.g., predetermined) percentage falls (e.g., and wherein each specific range of intensity values is bounded at least in part by a multiple of the at least one reference value) ].
In certain embodiments, the hotspot-specific threshold is determined (e.g., by the specific threshold function) as a variable percentage of the corresponding hotspot intensity, wherein the variable percentage decreases as hotspot intensity increases [ e.g., wherein the variable percentage is itself a function (e.g., a decreasing function) of the corresponding hotspot intensity ].
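By way of illustration only, a minimal sketch (in Python) of a hotspot-specific threshold computed as a variable percentage of hotspot intensity, as described in the embodiments above; the regime breakpoints (multiples of the blood-pool and liver references) and the percentages are placeholder assumptions, not values specified by this disclosure:

```python
# Illustrative hotspot-specific threshold, assuming intensities are SUVs.
# The breakpoints and percentages below are placeholder assumptions.
def hotspot_threshold(suv_max: float, blood_pool_ref: float, liver_ref: float) -> float:
    if suv_max < 2.0 * blood_pool_ref:   # low-intensity regime
        pct = 0.90
    elif suv_max < liver_ref:            # intermediate regime
        pct = 0.70
    else:                                # high-intensity regime: the percentage
        pct = 0.50                       # decreases as hotspot intensity increases
    return pct * suv_max
```

Each branch plays the role of one threshold function of the plurality described above, with the applicable intensity range bounded by multiples of the reference values.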
In another aspect, the invention relates to a system for automatically processing a 3D image of a subject to identify and/or characterize (e.g., rank; e.g., classify; e.g., as representing a particular lesion type) cancerous lesions within the subject, the system comprising: a processor of a computing device; and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to: (a) Receiving (e.g., and/or accessing) a 3D functional image of the subject obtained using a functional imaging modality [ e.g., Positron Emission Tomography (PET); e.g., Single Photon Emission Computed Tomography (SPECT) ]; (b) Automatically segmenting a first initial set of one or more hotspots within the 3D functional image using a first machine learning module, thereby creating a first initial 3D hotspot graph identifying a first initial set of hotspot volumes, i.e., corresponding 3D hotspot volumes within the 3D functional image, wherein the first machine learning module segments the hotspots of the 3D functional image according to a single hotspot category [ e.g., identifying all hotspots as belonging to a single hotspot category so as to distinguish background regions from hotspot volumes (e.g., but not different types of hotspots) (e.g., such that each hotspot volume identified by the first 3D hotspot graph is labeled as belonging to a single hotspot category, as opposed to background) ]; (c) Automatically segmenting a second initial set of one or more hotspots within the 3D functional image using a second machine learning module, thereby creating a second initial 3D hotspot graph identifying a second initial set of hotspot volumes, wherein the second machine learning module segments the 3D functional image according to a plurality of different hotspot categories [ e.g., each hotspot category identifying a particular anatomical and/or tissue region (e.g., lymph, bone, prostate) in which lesions represented by the hotspots are determined to be located ], such that the second initial 3D hotspot graph is a multi-category 3D hotspot graph, wherein each hotspot volume is labeled as belonging to a particular one of the plurality of different hotspot categories (e.g., so as to distinguish between hotspot volumes corresponding to different hotspot categories, and between hotspot volumes and background regions); (d) Merging the first initial 3D hotspot graph and the second initial 3D hotspot graph by, for each particular hotspot volume of at least a portion of the first initial set of hotspot volumes identified by the first initial 3D hotspot graph: identifying a matching hotspot volume of the second initial 3D hotspot graph (e.g., by identifying substantially overlapping hotspot volumes of the first and second initial 3D hotspot graphs), the matching hotspot volume of the second 3D hotspot graph having been labeled as belonging to a particular hotspot category of the plurality of different hotspot categories; and labeling the particular hotspot volume of the first initial 3D hotspot graph as belonging to the particular hotspot category (to which the matching hotspot volume is labeled), thereby creating a merged 3D hotspot graph comprising the segmented hotspot volumes of the first 3D hotspot graph, labeled according to the categories to which the matching hotspots of the second 3D hotspot graph were identified as belonging; and (e) storing and/or providing the merged 3D hotspot graph for display and/or further processing.
In certain embodiments, the plurality of different hotspot categories comprises one or more members selected from the group consisting of: (i) A bone hotspot determined (e.g., by the second machine learning module) to represent a lesion located in a bone; (ii) A lymphatic hotspot determined (e.g., by the second machine learning module) to represent a lesion located in a lymph node; and (iii) a prostate hotspot determined (e.g., by the second machine learning module) to represent a lesion located in the prostate.
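By way of illustration only, the merging step described in the preceding aspect may be sketched as follows (Python; the array representation, category codes, and the dominant-overlap matching rule are assumptions for illustration, not prescribed by this disclosure):

```python
# Sketch of merging a single-category 3D hotspot map with a multi-category
# one. Assumptions: numpy label volumes on a common grid; `single` is binary
# (1 = hotspot); `multi` holds integer category codes (0 = background, e.g.,
# 1 = bone, 2 = lymph, 3 = prostate). Names and codes are illustrative.
import numpy as np
from scipy import ndimage

def merge_hotspot_maps(single: np.ndarray, multi: np.ndarray) -> np.ndarray:
    merged = np.zeros_like(multi)
    labeled, n = ndimage.label(single > 0)   # connected hotspot volumes
    for i in range(1, n + 1):
        voxels = labeled == i
        overlap = multi[voxels]
        overlap = overlap[overlap > 0]       # categories of overlapping voxels
        if overlap.size:
            # Adopt the dominant category of the matching multi-category volume.
            merged[voxels] = np.bincount(overlap).argmax()
    return merged
```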
In another aspect, the present invention relates to a system for automatically processing a 3D image of a subject, via an adaptive thresholding method, to identify and/or characterize (e.g., rank; e.g., classify; e.g., as representing a particular lesion type) cancerous lesions within the subject, the system comprising: a processor of a computing device; and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to: (a) Receiving (e.g., and/or accessing) a 3D functional image of the subject obtained using a functional imaging modality [ e.g., Positron Emission Tomography (PET); e.g., Single Photon Emission Computed Tomography (SPECT) ]; (b) Receiving (e.g., and/or accessing) a preliminary 3D hotspot map identifying one or more preliminary hotspot volumes within the 3D functional image; (c) Determining one or more reference values, each reference value being based on a measure of intensity of voxels of the 3D functional image that lie within a particular reference volume corresponding to a particular reference tissue region (e.g., a blood pool reference value determined based on intensities within an aortic volume corresponding to a portion of the subject's aorta; e.g., a liver reference value determined based on intensities within a liver volume corresponding to the subject's liver); (d) Establishing a refined 3D hotspot map based on the preliminary hotspot volumes, using adaptive-threshold-based segmentation, by, for each particular preliminary hotspot volume of at least a portion of the one or more preliminary hotspot volumes identified by the preliminary 3D hotspot map: determining a corresponding hotspot intensity based on intensities of voxels within the particular preliminary hotspot volume [ e.g., wherein the hotspot intensity is a maximum of the intensities (e.g., representing SUVs) of voxels within the particular preliminary hotspot volume ]; determining a hotspot-specific threshold for the particular preliminary hotspot volume based on at least one of (i) the corresponding hotspot intensity and (ii) the one or more reference values; segmenting at least a portion of the 3D functional image (e.g., a subvolume about the particular preliminary hotspot volume) using a threshold-based segmentation algorithm that performs image segmentation using the hotspot-specific threshold determined for the particular preliminary hotspot [ e.g., and identifies a cluster of voxels having intensities above the hotspot-specific threshold and including the maximum-intensity voxel of the preliminary hotspot (e.g., a 3D cluster of voxels connected to each other in an n-connected-component manner (e.g., where n=6, n=18, etc.)) ], thereby determining a refined, analytically segmented hotspot volume corresponding to the particular preliminary hotspot volume; and including the refined hotspot volume in the refined 3D hotspot map; and (e) storing and/or providing the refined 3D hotspot map for display and/or further processing.
In certain embodiments, the hotspot-specific threshold is determined using a specific threshold function selected from a plurality of threshold functions, the specific threshold function being selected based on a comparison of the corresponding hotspot intensity to the at least one reference value [ e.g., wherein each of the plurality of threshold functions is associated with a specific range of intensity (e.g., SUV) values, and the specific threshold function is selected according to the specific range in which the hotspot intensity and/or its (e.g., predetermined) percentage falls (e.g., and wherein each specific range of intensity values is bounded at least in part by a multiple of the at least one reference value) ].
In certain embodiments, the hotspot-specific threshold is determined (e.g., by the specific threshold function) as a variable percentage of the corresponding hotspot intensity, wherein the variable percentage decreases as hotspot intensity increases [ e.g., wherein the variable percentage is itself a function (e.g., a decreasing function) of the corresponding hotspot intensity ].
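By way of illustration only, the threshold-based refinement described in this aspect may be sketched as follows (Python; the choice of 18-connectivity, one of the n-connectivity options mentioned above, and the array names are assumptions for illustration):

```python
# Sketch of adaptive-threshold refinement of one preliminary hotspot volume.
# Assumptions: `pet` is a float SUV volume; `prelim_mask` is a boolean mask
# of the preliminary hotspot volume; 18-connectivity is assumed.
import numpy as np
from scipy import ndimage

def refine_hotspot(pet: np.ndarray, prelim_mask: np.ndarray,
                   threshold: float) -> np.ndarray:
    structure = ndimage.generate_binary_structure(3, 2)    # 18-connected
    labeled, _ = ndimage.label(pet > threshold, structure=structure)
    # Seed at the maximum-intensity voxel of the preliminary hotspot.
    seed = np.unravel_index(np.argmax(np.where(prelim_mask, pet, -np.inf)),
                            pet.shape)
    cluster = labeled[seed]
    # Keep the above-threshold cluster containing the seed (empty result if
    # the seed voxel itself falls below the threshold).
    return labeled == cluster if cluster != 0 else np.zeros_like(prelim_mask, dtype=bool)
```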
In another aspect, the invention relates to a system for automatically processing a 3D image of a subject to identify and/or characterize (e.g., rank; e.g., classify; e.g., as representing a particular lesion type) cancerous lesions within the subject, the system comprising: a processor of a computing device; and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to: (a) Receiving (e.g., and/or accessing) a 3D anatomical image of the subject obtained using an anatomical imaging modality [ e.g., x-ray Computed Tomography (CT); e.g., Magnetic Resonance Imaging (MRI); e.g., ultrasound ], wherein the 3D anatomical image comprises a graphical representation of tissue (e.g., soft tissue and/or bone) within the subject; (b) Automatically segmenting the 3D anatomical image to create a 3D segmentation map that identifies a plurality of volumes of interest (VOIs) in the 3D anatomical image, including a liver volume corresponding to a liver of the subject and an aortic volume corresponding to an aortic portion (e.g., a thoracic and/or abdominal portion); (c) Receiving (e.g., and/or accessing) a 3D functional image of the subject obtained using a functional imaging modality [ e.g., Positron Emission Tomography (PET); e.g., Single Photon Emission Computed Tomography (SPECT) ] [ e.g., wherein the 3D functional image comprises a plurality of voxels, each voxel representing a particular physical volume within the subject and having an intensity value representing detected radiation emitted from the particular physical volume ], wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within a target tissue region; (d) Automatically segmenting one or more hotspots within the 3D functional image, each segmented hotspot corresponding to a localized region of increased intensity relative to its surroundings and representing (e.g., indicating) a potential cancerous lesion within the subject, thereby identifying one or more automatically segmented hotspot volumes; (e) Causing a graphical representation of the one or more automatically segmented hotspot volumes to be rendered for display within an interactive Graphical User Interface (GUI), such as a quality control and reporting GUI; (f) Receiving, via the interactive GUI, a user selection of a final set of hotspots comprising at least a portion (e.g., up to all) of the one or more automatically segmented hotspot volumes; (g) Determining a lesion index value for each hotspot volume of the final set based on (i) intensities of voxels of the functional image corresponding to (e.g., located within) the hotspot volume and (ii) one or more reference values determined using intensities of voxels of the functional image corresponding to the liver volume and the aortic volume; and (h) storing and/or providing the final set of hotspots and/or the lesion index values for display and/or further processing.
In certain embodiments, at step (b), the instructions cause the processor to segment the anatomical image such that the 3D segmentation map identifies one or more bone volumes corresponding to one or more bones of the subject, and at step (d), the instructions cause the processor to identify bone volumes within the functional image using the one or more bone volumes and to segment one or more bone hotspot volumes located within the bone volumes (e.g., by applying one or more difference-of-Gaussians filters and thresholding within the bone volumes).
In certain embodiments, at step (b), the instructions cause the processor to segment the anatomical image such that the 3D segmentation map identifies one or more organ volumes corresponding to soft-tissue organs of the subject (e.g., left/right lung, left/right gluteus maximus, urinary bladder, liver, left/right kidney, gallbladder, spleen, thoracic and abdominal aorta, and, optionally (e.g., for patients who have not undergone radical prostatectomy), the prostate), and at step (d), the instructions cause the processor to identify soft-tissue (e.g., lymph and, optionally, prostate) volumes within the functional image using the one or more segmented organ volumes and to segment one or more lymph and/or prostate hotspot volumes located within the soft-tissue volumes (e.g., by applying one or more Laplacian-of-Gaussian filters and thresholding within the soft-tissue volumes).
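By way of illustration only, the difference-of-Gaussians and Laplacian-of-Gaussian filtering steps mentioned in the preceding two paragraphs may be sketched as follows (Python; the filter scales and thresholds are placeholder assumptions, not disclosed values):

```python
# Sketch of blob enhancement and thresholding within segmented volumes.
# Assumptions: `pet` is a float SUV volume; masks are boolean; the sigmas
# (in voxel units) and thresholds are illustrative placeholders.
import numpy as np
from scipy import ndimage

def bone_hotspot_candidates(pet, bone_mask, s_narrow=1.0, s_wide=3.0, t=1.5):
    # Difference of Gaussians: band-pass enhancement of compact bright blobs.
    dog = ndimage.gaussian_filter(pet, s_narrow) - ndimage.gaussian_filter(pet, s_wide)
    return (dog > t) & bone_mask

def soft_tissue_hotspot_candidates(pet, soft_mask, sigma=2.0, t=1.0):
    # Laplacian of Gaussian, sign-flipped so bright blobs give a positive response.
    log = -ndimage.gaussian_laplace(pet, sigma)
    return (log > t) & soft_mask
```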
In certain embodiments, in step (d), the instructions cause the processor to adjust the intensity of the functional image to suppress intensity from one or more high uptake tissue regions (e.g., using one or more suppression methods described herein) prior to segmenting the one or more lymph and/or prostate hotspot volumes.
In certain embodiments, at step (g), the instructions cause the processor to determine a liver reference value using intensities of voxels of the functional image corresponding to the liver volume.
In some embodiments, the instructions cause the processor to: fitting a bi-component gaussian mixture model to a histogram of intensities of functional image voxels corresponding to the liver volume, using the bi-component gaussian mixture model fit to identify and exclude voxels from the liver volume having intensities associated with regions of abnormally low uptake, and using intensities of remaining (e.g., non-excluded) voxels to determine a liver reference value.
Features of embodiments described with respect to one aspect of the invention may be applied with respect to another aspect of the invention.
Drawings
The foregoing and other objects, aspects, features, and advantages of the present disclosure will become more apparent and better understood by referring to the following description taken in conjunction with the accompanying drawings in which:
FIG. 1A is a block flow diagram of an exemplary process for Artificial Intelligence (AI) based lesion detection in accordance with an illustrative embodiment.
FIG. 1B is a block flow diagram of an exemplary process for AI-based lesion detection in accordance with an illustrative embodiment.
Fig. 1C is a block flow diagram of an exemplary process for AI-based lesion detection in accordance with an illustrative embodiment.
Fig. 2A is a graph showing a histogram of liver SUV values overlaid with a two-component Gaussian mixture model, according to an illustrative embodiment.
Fig. 2B is a PET image overlaid on a CT image showing a portion of a liver volume used to calculate a liver reference value, according to an illustrative embodiment.
Fig. 2C is a block flow diagram of an exemplary process for calculating reference intensity values to avoid/reduce effects from tissue regions associated with low radiopharmaceutical uptake in accordance with an illustrative embodiment.
Fig. 3 is a block flow diagram of an exemplary process for correcting intensity exudation from one or more tissue regions associated with high radiopharmaceutical uptake in accordance with an illustrative embodiment.
FIG. 4 is a block flow diagram of an exemplary process for automatically marking hotspots corresponding to detected lesions in accordance with an illustrative embodiment.
FIG. 5A is a block flow diagram of an exemplary process for interactive lesion detection that allows user feedback and viewing via a Graphical User Interface (GUI) in accordance with an illustrative embodiment.
FIG. 5B shows an exemplary process for user review, quality control, and reporting of automatically detected lesions, according to an illustrative embodiment.
Fig. 6A is a screen shot of a GUI for confirming accurate segmentation of a liver reference volume in accordance with an illustrative embodiment.
Fig. 6B is a screen shot of a GUI for confirming accurate segmentation of an aortic portion (blood pool) reference volume in accordance with an illustrative embodiment.
Fig. 6C is a screen shot of a GUI for user selection and/or verification of automatically segmented hotspots corresponding to detected lesions within a subject, according to an illustrative embodiment.
Fig. 6D is a screen shot of a portion of a GUI that allows a user to manually identify lesions within an image in accordance with an illustrative embodiment.
Fig. 6E is a screen shot of another portion of the GUI that allows a user to manually identify lesions within an image in accordance with an illustrative embodiment.
FIG. 7 is a screenshot of a portion of a GUI showing a quality control checklist in accordance with an illustrative embodiment.
FIG. 8 is a screen shot of a report generated by a user using an embodiment of the automated lesion detection tool described herein in accordance with an illustrative embodiment.
Fig. 9 is a block flow diagram showing an example architecture for hot spot (lesion) segmentation via a machine learning module that receives a 3D anatomical image, a 3D functional image, and a 3D segmentation map as inputs, according to an illustrative embodiment.
Fig. 10A is a block flow diagram showing an example process in which lesion type mapping is performed after hot spot segmentation according to an illustrative embodiment.
Fig. 10B is another block flow diagram illustrating an exemplary process in which lesion type mapping is performed after hot spot segmentation, illustrating the use of a 3D segmentation map, according to an illustrative embodiment.
Fig. 11A is a block flow diagram showing a process for detecting and/or segmenting hotspots representing lesions using a whole body network and a prostate specific network, according to an illustrative embodiment.
Fig. 11B is a block flow diagram showing a process for detecting and/or segmenting hotspots representing lesions using a whole body network and a prostate specific network, according to an illustrative embodiment.
FIG. 12 is a block flow diagram showing the use of an analysis segmentation step after AI-based hotspot segmentation in accordance with an illustrative embodiment.
FIG. 13A is a block diagram showing an example U-net architecture for hotspot segmentation, in accordance with an illustrative embodiment.
Fig. 13B and 13C are block diagrams showing an example FPN architecture for hotspot segmentation, according to an illustrative embodiment.
FIGS. 14A, 14B, and 14C show exemplary images demonstrating hotspot segmentation using a U-net architecture, in accordance with an illustrative embodiment.
FIGS. 15A and 15B show exemplary images demonstrating hotspot segmentation using the FPN architecture, in accordance with an illustrative embodiment.
FIGS. 16A, 16B, 16C, 16D, and 16E are screenshots of exemplary GUIs for uploading and analyzing medical image data and generating reports therefrom, in accordance with an illustrative embodiment.
FIGS. 17A and 17B are block flow diagrams of an example process for segmenting and classifying hotspots using two parallel machine learning modules, according to an illustrative embodiment.
FIG. 17C is a block flow diagram illustrating interactions and data flow between various software modules (e.g., APIs) of an example implementation of a process for segmenting and classifying hotspots using two parallel machine learning modules, in accordance with an illustrative embodiment.
FIG. 18A is a block flow diagram of an exemplary process for segmenting hotspots using an analytical model employing an adaptive thresholding method, in accordance with an illustrative embodiment.
FIGS. 18B and 18C are graphs showing the variation of hotspot-specific threshold values, as used in an adaptive thresholding method, as a function of hotspot intensity (SUVmax), according to an illustrative embodiment.
Fig. 18D, 18E, and 18F are diagrams illustrating certain thresholding techniques in accordance with an illustrative embodiment.
Fig. 18G is a diagram showing a histogram of intensities of prostate voxels along axial, sagittal, and coronal planes along with an illustrative setting of prostate voxel intensity values and threshold scaling factors in accordance with an illustrative embodiment.
FIG. 19A is a block flow diagram illustrating hot spot segmentation using conventional manual ROI definition and conventional fixed and/or relative thresholding in accordance with an illustrative embodiment.
Fig. 19B is a block flow diagram illustrating hotspot segmentation using an AI-based method in conjunction with an adaptive thresholding method in accordance with an illustrative embodiment.
FIG. 20 is an image set comparing example segmentation results used only for thresholding with segmentation results obtained via an AI-based method in combination with an adaptive thresholding method in accordance with an illustrative embodiment.
Fig. 21A, 21B, 21C, 21D, 21E, 21F, 21G, 21H, and 21I show a sequence of 2D slices of a 3D PET image, moving in the vertical direction through the abdominal region. The images compare the hotspot segmentation results within the abdominal region produced by a thresholding method alone (left-hand images) with those produced by a machine learning method according to certain embodiments described herein (right-hand images), and show the hotspot regions identified by each method superimposed on the PET image slices.
FIG. 22 is a block flow diagram of a process for uploading and analyzing PET/CT image data using a CAD apparatus providing automated image analysis, according to some embodiments described herein.
FIG. 23 is a screen shot of an exemplary GUI that allows a user to upload image data for viewing and analysis via a CAD apparatus that provides automated image analysis, according to certain embodiments described herein.
FIG. 24 is a screen shot of an exemplary GUI viewer allowing a user to view and analyze medical image data (e.g., 3D PET/CT images) and the results of an automated image analysis, according to an illustrative embodiment.
FIG. 25 is a screen shot of an automatically generated report in accordance with an illustrative embodiment.
FIG. 26 is a block flow diagram of an exemplary workflow for providing automated analysis along with user input and viewing for analyzing medical image data in accordance with an illustrative embodiment.
Fig. 27 shows three views of a CT image of a segmented bone and soft tissue volume with an overlay according to an illustrative embodiment.
FIG. 28 is a block flow diagram of an analytical model for segmenting hotspots, in accordance with an illustrative embodiment.
Fig. 29A is a block diagram of a cloud computing architecture used in some embodiments.
FIG. 29B is a block diagram of an exemplary micro-service communication flow used in some embodiments.
FIG. 30 is a block diagram of an exemplary cloud computing environment for use in certain embodiments.
FIG. 31 is a block diagram of an example computing device and an example mobile computing device used in certain embodiments.
The features and advantages of the present disclosure will become more apparent from the detailed description set forth below when taken in conjunction with the drawings in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements.
Detailed Description
It is contemplated that the systems, devices, methods, and processes of the claimed invention encompass variations and adaptations developed using information from the embodiments described herein. Adaptations and/or modifications of the systems, devices, methods, and processes described herein may be performed by one of ordinary skill in the relevant art.
Throughout the description, where articles, devices, and systems are described as having, including, or comprising specific components, or where processes and methods are described as having, including, or comprising specific steps, it is contemplated that, additionally, there are articles, devices, and systems of the present invention that consist essentially of, or consist of, the recited components, and that there are processes and methods according to the present invention that consist essentially of, or consist of, the recited processing steps.
It should be understood that the order of steps or order for performing certain actions is not important as long as the invention remains operable. Furthermore, two or more steps or actions may be performed simultaneously.
The mention of any publication herein (e.g., in the Background section) is not an admission that the publication serves as prior art with respect to any of the claims presented herein. The Background section is presented for purposes of clarity and is not meant as a description of prior art with respect to any claim.
The presence and/or placement of headings provided for the convenience of the reader is not intended to limit the scope of the subject matter described herein.
In this application, unless otherwise clear from context: (i) the term "a" or "an" may be understood to mean "at least one"; (ii) the term "or" may be understood to mean "and/or"; (iii) the terms "comprises" and "comprising" may be understood to encompass the recited components or steps, whether presented alone or in combination with one or more additional components or steps; (iv) the terms "about" and "approximately" may be understood to allow for standard variation as understood by one of ordinary skill in the art; and (v) where ranges are provided, endpoints are inclusive.
In certain embodiments, the term "about" when used herein with reference to a reference value refers to a value that is similar in context to the referenced value. In general, those skilled in the art will recognize the relative degree of variation covered by "about" in that context. For example, in some embodiments, the term "about" may encompass a range of values within 25%, 20%, 19%, 18%, 17%, 16%, 15%, 14%, 13%, 12%, 11%, 10%, 9%, 8%, 7%, 6%, 5%, 4%, 3%, 2%, 1% or less of the referenced value.
A. Nuclear medicine image
Nuclear medicine images are obtained using a nuclear imaging modality such as bone scan imaging, Positron Emission Tomography (PET) imaging, or Single Photon Emission Computed Tomography (SPECT) imaging.
As used herein, an "image" (e.g., a 3-D image of a mammal) includes any visual representation, such as a photograph, a video frame, streaming video, or any electronic, digital, or mathematical representation of a photograph, video frame, or streaming video. In certain embodiments, any apparatus described herein includes a display for displaying an image or any other result produced by a processor. In certain embodiments, any method described herein includes a step of displaying an image or any other result produced via the method.
As used herein, "3-D" or "three-dimensional" with respect to an "image" means conveying information about three dimensions. The 3-D image may be visualized as a three-dimensional dataset and/or may be displayed as a set of two-dimensional representations, or as a three-dimensional representation.
In certain embodiments, the nuclear medicine image uses an imaging agent that includes a radiopharmaceutical. Nuclear medicine images are obtained after administration of a radiopharmaceutical to a patient (e.g., a human subject) and provide information about the distribution of the radiopharmaceutical within the patient. Radiopharmaceuticals are compounds that include radionuclides.
As used herein, "administering" an agent refers to introducing a substance (e.g., an imaging agent) into a subject. In general, any route of administration may be utilized, including, for example, parenteral (e.g., intravenous), oral, topical, subcutaneous, intraperitoneal, intraarterial, inhalation, vaginal, rectal, nasal, introduction into the cerebrospinal fluid, or instillation into a body compartment.
As used herein, "radionuclide" refers to a moiety comprising a radioactive isotope of at least one element. Exemplary suitable radionuclides include, but are not limited to, those described herein. In some embodiments, the radionuclide is one used in Positron Emission Tomography (PET). In some embodiments, the radionuclide is one used in Single Photon Emission Computed Tomography (SPECT). In some embodiments, a non-limiting list of radionuclides includes 99mTc, 111In, 64Cu, 67Ga, 68Ga, 186Re, 188Re, 153Sm, 177Lu, 67Cu, 123I, 124I, 125I, 126I, 131I, 11C, 13N, 15O, 18F, 153Sm, 166Ho, 177Lu, 149Pm, 90Y, 213Bi, 103Pd, 109Pd, 159Gd, 140La, 198Au, 199Au, 169Yb, 175Yb, 165Dy, 166Dy, 105Rh, 111Ag, 89Zr, 225Ac, 82Rb, 75Br, 76Br, 77Br, 80Br, 80mBr, 82Br, 83Br, 211At, and 192Ir.
As used herein, "radiopharmaceutical" refers to a compound that includes a radionuclide. In certain embodiments, the radiopharmaceutical is for diagnostic and/or therapeutic purposes. In certain embodiments, the radiopharmaceuticals comprise small molecules labeled with one or more radionuclides, antibodies labeled with one or more radionuclides, and antigen binding portions of antibodies labeled with one or more radionuclides.
Nuclear medicine images (e.g., PET scans; e.g., SPECT scans; e.g., whole-body bone scans; e.g., composite PET-CT images; e.g., composite SPECT-CT images) detect radiation emitted from the radionuclides of radiopharmaceuticals to form an image. The distribution of a particular radiopharmaceutical within a patient may be determined by biological mechanisms (e.g., blood flow or perfusion), as well as by specific enzyme or receptor binding interactions. Different radiopharmaceuticals may be designed to take advantage of different biological mechanisms and/or specific enzyme or receptor binding interactions and, thus, when administered to a patient, selectively concentrate within particular types of tissue and/or regions within the patient. Greater amounts of radiation are emitted from regions within the patient that have higher concentrations of the radiopharmaceutical than from other regions, such that these regions appear brighter in the nuclear medicine image. Accordingly, intensity variations within a nuclear medicine image can be used to map the distribution of the radiopharmaceutical within the patient. This mapped distribution of the radiopharmaceutical within the patient can be used, for example, to infer the presence of cancerous tissue within various regions of the patient's body.
For example, following administration to a patient, technetium 99m methylenediphosphonate (99mTc MDP) selectively accumulates in the skeletal region of the patient, particularly at sites with abnormal osteogenesis associated with malignant bone lesions. The selective concentration of the radiopharmaceutical at these sites produces identifiable hotspots, i.e., localized regions of high intensity, in nuclear medicine images. Accordingly, the presence of malignant bone lesions associated with metastatic prostate cancer can be inferred by identifying such hotspots within a whole-body scan of the patient. As described below, risk indices that correlate with overall patient survival and other prognostic metrics indicative of disease state, progression, treatment efficacy, and the like, can be computed based on automated analysis of intensity variations in whole-body scans obtained following administration of 99mTc MDP to a patient. In some embodiments, other radiopharmaceuticals can also be used in a manner similar to 99mTc MDP.
In certain embodiments, the particular radiopharmaceutical used depends on the particular nuclear medicine imaging modality used. For example, similar to 99mTc MDP, 18F sodium fluoride (NaF) also accumulates in bone lesions, but can be used with PET imaging. In certain embodiments, PET imaging may also utilize a radioactive form of the vitamin choline, which is readily absorbed by prostate cancer cells.
In certain embodiments, radiopharmaceuticals that selectively bind to particular proteins or receptors of interest, particularly those whose expression is increased in cancerous tissue, may be used. Such proteins or receptors of interest include, but are not limited to, tumor antigens such as CEA, which is expressed in colorectal cancers; Her2/neu, which is expressed in a variety of cancers; BRCA 1 and BRCA 2, which are expressed in breast and ovarian cancers; and TRP-1 and TRP-2, which are expressed in melanoma.
For example, human Prostate Specific Membrane Antigen (PSMA) is upregulated in prostate cancer, including metastatic disease. PSMA is expressed by almost all prostate cancers and its expression is further increased in poorly differentiated, metastatic, and hormone refractory cancers. Thus, radiopharmaceuticals corresponding to PSMA-binding agents labeled with one or more radionuclides (e.g., compounds having high affinity for PSMA) may be used to obtain nuclear medicine images of a patient from which the presence and/or status of prostate cancer within various regions of the patient (e.g., including, but not limited to, bone regions) may be assessed. In certain embodiments, the nuclear medicine image obtained using the PSMA-binding agent is used to identify the presence of cancerous tissue within the prostate when the disease is in a localized state. In certain embodiments, nuclear medicine images obtained using radiopharmaceuticals comprising PSMA-binding agents are used to identify the presence of cancerous tissue within various regions, including not only the prostate, but other organ and tissue regions (e.g., lung, lymph nodes, and bone), such as are relevant when the disease is metastatic.
In particular, upon administration to a patient, radionuclide-labeled PSMA-binding agents selectively accumulate within cancerous tissue based on their affinity for PSMA. In a manner similar to that described above with respect to 99mTc MDP, the selective concentration of radionuclide-labeled PSMA-binding agents at particular sites within the patient produces detectable hotspots in nuclear medicine images. As PSMA-binding agents concentrate within a variety of cancerous tissues and regions of the body that express PSMA, both localized cancer within the prostate of the patient and metastatic cancer in various regions of the patient's body can be detected and evaluated. Risk indices that correlate with overall patient survival and other prognostic metrics indicative of disease state, progression, treatment efficacy, and the like, can be computed based on automated analysis of intensity variations in nuclear medicine images obtained following administration of a PSMA-binding-agent radiopharmaceutical to a patient.
A variety of radionuclide-labeled PSMA-binding agents may be used as radiopharmaceutical imaging agents for nuclear medicine imaging to detect and evaluate prostate cancer. In certain embodiments, the particular radionuclide-labeled PSMA-binding agent used depends on factors such as the particular imaging modality (e.g., PET; e.g., SPECT) and the particular regions (e.g., organs) of the patient to be imaged. For example, certain radionuclide-labeled PSMA-binding agents are suited for PET imaging, while others are suited for SPECT imaging. For example, certain radionuclide-labeled PSMA-binding agents facilitate imaging the prostate of the patient and are used primarily when the disease is localized, while others facilitate imaging organs and regions throughout the patient's body and are useful for evaluating metastatic prostate cancer.
A variety of PSMA-binding agents and radionuclide-labeled versions thereof are described in U.S. Patent Nos. 8,778,305, 8,211,401, and 8,962,799, each of which is incorporated herein by reference in its entirety. Several PSMA-binding agents and radionuclide-labeled versions thereof are also described in PCT application PCT/US2017/058418 (PCT publication WO 2018/081354), filed October 26, 2017, which is incorporated herein by reference in its entirety. Several exemplary PSMA-binding agents and radionuclide-labeled versions thereof are also described in Section J, below.
B. Automated lesion detection and analysis
i. Automated lesion detection
In certain embodiments, the systems and methods described herein utilize machine learning techniques for automated image segmentation and detection of hotspots corresponding to and indicative of possible cancerous lesions within a subject.
In certain embodiments, the systems and methods described herein may be implemented in a cloud-based platform, for example, as described in PCT/US2017/058418 (PCT publication WO 2018/081354), filed October 26, 2017, which is incorporated herein by reference in its entirety.
In certain embodiments, as described herein, the machine learning module implements one or more machine learning techniques, such as random forest classifiers, Artificial Neural Networks (ANNs), Convolutional Neural Networks (CNNs), and the like. In some embodiments, for example, a machine learning module implementing a machine learning technique is trained using manually segmented and/or labeled images to identify and/or classify portions of images. This training may be used to determine various parameters of the machine learning algorithm implemented by the machine learning module, such as weights associated with layers of a neural network. In certain embodiments, once the machine learning module is trained, e.g., to accomplish a particular task (e.g., identifying certain target regions within an image), the values of the determined parameters are fixed, and the (e.g., unchanging, static) machine learning module is used to process new data (e.g., different from the training data) and accomplish its trained task without further updating of its parameters (e.g., the machine learning module does not receive feedback and/or updates). In some embodiments, the machine learning module may receive feedback, e.g., based on user review of accuracy, and such feedback may be used as additional training data to dynamically update the machine learning module. In some embodiments, the trained machine learning module is a classification algorithm, such as a random forest classifier, with adjustable and/or fixed (e.g., locked) parameters.
In certain embodiments, machine learning techniques are used to automatically segment anatomical structures in anatomical images (e.g., CT, MRI, ultrasound, etc.) in order to identify volumes of interest corresponding to particular target tissue regions, e.g., particular organs (e.g., prostate, lymph node regions, kidneys, liver, bladder, aortic portions) and bones. In this way, the machine learning module may be used to generate a segmentation mask and/or a segmentation map (e.g., comprising a plurality of segmentation masks, each corresponding to and identifying a particular target tissue region) that may be mapped onto (e.g., projected onto) a functional image (e.g., a PET or SPECT image) to provide anatomical context for evaluating intensity fluctuations therein. Methods for segmenting images and for analyzing nuclear medicine images using the anatomical context so obtained are described in further detail in, for example, PCT/US2019/012486 (PCT publication WO 2019/136349), filed January 7, 2019, and PCT/EP2020/050132 (PCT publication WO 2020/144134), filed January 6, 2020, the contents of each of which are hereby incorporated by reference in their entirety.
In some embodiments, potential lesions are detected as localized high-intensity regions in a functional image (e.g., a PET image). These localized regions of increased intensity (also referred to as hotspots) may be detected using image processing techniques (e.g., filtering and thresholding) that do not necessarily involve machine learning, and segmented using methods such as the fast-marching method. The anatomical information established from the segmentation of the anatomical image allows anatomical labeling of detected hotspots representing potential lesions. The anatomical context may also be used to allow different detection and segmentation techniques to be applied for hotspot detection in different anatomical regions, which may increase sensitivity and performance.
In some embodiments, the automatically detected hotspots may be presented to the user via an interactive Graphical User Interface (GUI). In some embodiments, to account for target lesions that are detected by a user (e.g., physician) but missed or poorly segmented by the system, a manual segmentation tool is included in the GUI to allow the user to manually "draw" areas of the image that they consider to correspond to lesions of any shape and size. These manually segmented lesions may then be included in a subsequently generated report along with selected automatically detected target lesions.
AI-based lesion detection
In certain embodiments, the systems and methods described herein utilize one or more machine learning modules to analyze the intensity of 3D functional images and detect hot spots representing potential lesions. For example, training material for AI-based lesion detection algorithms may be obtained by collecting a dataset of PET/CT images in which hotspots representing lesions have been manually detected and segmented. These manually labeled images may be used to train one or more machine learning algorithms to automatically analyze functional images (e.g., PET images) to accurately detect and segment hotspots corresponding to cancerous lesions.
FIG. 1A shows an example process 100a for automated lesion detection and/or segmentation using a machine learning module implementing a machine learning algorithm, such as an ANN, CNN, and the like. As shown in FIG. 1A, a 3D functional image 102 (e.g., a PET or SPECT image) is received 106 and used as input to a machine learning module 110. FIG. 1A shows an exemplary PET image 102a obtained using PyL™ as the radiopharmaceutical. The PET image 102a is shown overlaid on a CT image (e.g., as a PET/CT image), but the machine learning module 110 may receive the PET image (e.g., or other functional image) by itself (e.g., without a CT, or other anatomical, image) as input. In certain embodiments, an anatomical image may also be received as input, as described below. The machine learning module automatically detects and/or segments hotspots 120 determined (via the machine learning module) to represent potential cancerous lesions. An example image 120b showing hotspots appearing in the PET image is also shown in FIG. 1A. Thus, the machine learning module generates as output one or both of (i) a hotspot list 130 and (ii) a hotspot map 132. In some embodiments, the hotspot list identifies the locations (e.g., centroids) of the detected hotspots. In certain embodiments, the hotspot map identifies 3D volumes of the detected hotspots and/or delineates 3D boundaries of the detected hotspots, as determined via image segmentation performed by machine learning module 110. The hotspot list and/or the hotspot map may be stored and/or provided (e.g., to other software modules) for display and/or further processing 140.
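By way of illustration only, the two outputs described above (a hotspot list of locations and a hotspot map of segmented volumes) might be represented as follows (Python sketch; the container, the probability-volume model interface, and the 0.5 cutoff are assumptions for illustration, not prescribed by this disclosure):

```python
# Sketch of the hotspot list / hotspot map outputs. Assumptions: `model` is
# a trained module returning a per-voxel hotspot probability volume, and a
# 0.5 cutoff turns probabilities into a segmentation; names are illustrative.
from dataclasses import dataclass
from typing import Callable, List, Tuple
import numpy as np
from scipy import ndimage

@dataclass
class HotspotOutput:
    hotspot_list: List[Tuple[float, float, float]]  # one centroid per hotspot
    hotspot_map: np.ndarray                         # label volume; 0 = background

def detect_and_segment(pet: np.ndarray,
                       model: Callable[[np.ndarray], np.ndarray]) -> HotspotOutput:
    prob = model(pet)
    hotspot_map, n = ndimage.label(prob > 0.5)      # placeholder cutoff
    centroids = ndimage.center_of_mass(prob, hotspot_map, range(1, n + 1))
    return HotspotOutput(hotspot_list=list(centroids), hotspot_map=hotspot_map)
```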
In some embodiments, machine-learning-based lesion detection algorithms may be trained on, and may utilize, not only functional image information (e.g., from a PET image) but also anatomical information. For example, in certain embodiments, one or more machine learning modules for lesion detection and segmentation may be trained on, and may receive as input, two channels: a first channel corresponding to a portion of a PET image, and a second channel corresponding to a portion of a CT image. In some embodiments, information derived from anatomical (e.g., CT) images may also be used as input to a machine learning module for lesion detection and/or segmentation. For example, in certain embodiments, a 3D segmentation map identifying individual tissue regions within the anatomical and/or functional images may also be used (e.g., received as input by one or more machine learning modules, e.g., as a separate input channel) to provide anatomical context.
Fig. 1B shows an example process 100b in which both a 3D anatomical image 104 (e.g., a CT or MR image) and a 3D functional image 102 are received 108 and used as input to a machine learning module 112, which performs hotspot detection and/or segmentation 122 based on information (e.g., voxel intensities) from both the 3D anatomical image 104 and the 3D functional image 102, as described herein. The hotspot list 130 and/or the hotspot map 132 may be generated as output from the machine learning module and stored/provided for further processing (e.g., graphical visualization for display, subsequent operations by other software modules, etc.) 140.
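By way of illustration only, one way of providing such multi-channel input is sketched below (Python, assuming co-registered images resampled to a common grid; the channel layout, tissue-group codes, and convolutional layer are illustrative assumptions, not an architecture prescribed by this disclosure):

```python
# Sketch of stacking co-registered PET, CT, and a tissue-group map as input
# channels to a 3D convolutional layer. Group codes and sizes are assumptions.
import torch
import torch.nn as nn

pet = torch.randn(1, 1, 96, 96, 96)            # (batch, channel, z, y, x), SUVs
ct = torch.randn(1, 1, 96, 96, 96)             # Hounsfield units, co-registered
groups = torch.randint(0, 3, (1, 96, 96, 96))  # e.g., 0=soft tissue, 1=bone, 2=high-uptake organ
onehot = nn.functional.one_hot(groups, num_classes=3).permute(0, 4, 1, 2, 3).float()
x = torch.cat([pet, ct, onehot], dim=1)        # five input channels in total
conv = nn.Conv3d(in_channels=5, out_channels=16, kernel_size=3, padding=1)
features = conv(x)                             # first stage of a 3D CNN
```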
In certain embodiments, automated lesion detection and analysis (e.g., for inclusion in a report) comprises three tasks: (i) detection of hotspots corresponding to lesions; (ii) segmentation of the detected hotspots (e.g., to identify a 3D volume within the functional image corresponding to each lesion); and (iii) classification of the detected hotspots as having a high or low probability of corresponding to a true lesion within the subject (e.g., and, accordingly, as suitable or unsuitable for inclusion in a radiologist's report). In some embodiments, one or more machine learning modules may be used to accomplish these three tasks, e.g., one after the other (e.g., sequentially) or in combination. For example, in some embodiments, a first machine learning module is trained to detect hotspots and identify hotspot locations, a second machine learning module is trained to segment hotspots, and a third machine learning module is trained to classify the detected hotspots, for example using information obtained from the other two machine learning modules.
For example, as shown in the example process 100c of FIG. 1C, the 3D functional image 102 may be received 106 and used as input to a first machine learning module 114 that performs automated hotspot detection. The first machine learning module 114 automatically detects one or more hotspots 124 in the 3D functional image and generates a hotspot list 130 as output. A second machine learning module 116 may receive the hotspot list 130, along with the 3D functional image, as input, and perform automated hotspot segmentation 126 to generate a hotspot map 132. As previously described, the hotspot map 132 and the hotspot list 130 may be stored and/or provided for further processing 140.
In some embodiments, a single machine learning module is trained to directly segment hotspots within an image (e.g., a 3D functional image; e.g., to generate a 3D hotspot graph identifying volumes corresponding to detected hotspots), thereby combining the first two steps of detecting and segmenting hotspots. A second machine learning module may then be used, for example, to classify the detected hotspots based on the previously determined segmented hotspots. In some embodiments, a single machine learning module may be trained to accomplish all three tasks in a single step: detecting, dividing and classifying.
Lesion index value
In certain embodiments, lesion index values are calculated for detected hotspots to provide, for example, a measure of the relative uptake within, and/or the size of, the corresponding physical lesion. In certain embodiments, a lesion index value is calculated for a particular hotspot based on (i) a measure of the intensity of the hotspot and (ii) reference values corresponding to measures of intensity within one or more reference volumes (each reference volume corresponding to a particular reference tissue region). For example, in certain embodiments, the reference values include an aortic reference value (also referred to as a blood pool reference), measuring intensity within an aortic volume corresponding to a portion of the aorta, and a liver reference value, measuring intensity within a liver volume corresponding to the liver of the subject. In certain embodiments, the intensities of voxels of a nuclear medicine image (e.g., a PET image) represent standardized uptake values (SUVs) (e.g., having been calibrated for injected radiopharmaceutical dose and/or patient weight), and the measure of hotspot intensity and/or the reference values are SUV values. The use of such reference values for calculating lesion index values is described in further detail in, for example, PCT/EP2020/050132, filed January 6, 2020, the contents of which are hereby incorporated by reference in their entirety.
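The precise lesion index mapping is defined in the incorporated reference rather than reproduced here; purely as an illustrative assumption, a piecewise-linear scale anchored at the blood pool and liver references might look as follows (Python):

```python
# Illustrative lesion index only: a piecewise-linear scale mapping a
# hotspot's SUV to [0, 3], anchored at the blood pool reference (index 1),
# the liver reference (index 2), and twice the liver reference (index 3).
# The actual mapping is defined in the incorporated reference, not here.
def lesion_index(hotspot_suv: float, blood_pool_ref: float, liver_ref: float) -> float:
    anchors = [0.0, blood_pool_ref, liver_ref, 2.0 * liver_ref]
    for k in range(3):
        lo, hi = anchors[k], anchors[k + 1]
        if hotspot_suv <= hi:
            return k + (hotspot_suv - lo) / (hi - lo)
    return 3.0  # capped above twice the liver reference
```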
In some embodiments, a segmentation mask is used to identify a particular reference volume in, for example, a PET image. For a particular reference volume, a segmentation mask identifying the reference volume may be obtained via segmentation of an anatomical (e.g., CT) image. For example, in some embodiments (e.g., as described in PCT/EP 2020/050132), segmentation of a 3D anatomical image may be performed to generate a segmentation map comprising a plurality of segmentation masks, each segmentation mask identifying a region of interest of a particular tissue. Thus, one or more segmentation masks of the segmentation map generated in this manner may be used to identify one or more reference volumes.
In some embodiments, to identify the voxels used to calculate a reference value for a particular reference volume, the mask may be eroded by a fixed distance (e.g., at least one voxel) to establish a reference organ mask identifying a reference volume corresponding to a physical region lying entirely within the reference tissue region. For example, erosion distances of 3 mm and 9 mm may be used for the aortic and liver reference volumes, respectively. Other erosion distances may also be used. Additional mask refinement (e.g., to select a particular, desired set of voxels for computing a reference value) may also be performed, e.g., as described below with respect to the liver reference volume.
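A minimal sketch of this erosion step, assuming a boolean organ mask and an (approximately) isotropic conversion of the physical erosion distance to erosion iterations:

```python
import numpy as np
from scipy.ndimage import binary_erosion

def erode_reference_mask(mask: np.ndarray,
                         voxel_size_mm: tuple,
                         distance_mm: float) -> np.ndarray:
    """Erode a boolean organ mask by a fixed physical distance (e.g., 3 mm
    for the aorta, 9 mm for the liver) so that the remaining reference
    volume lies entirely within the reference tissue region. Converting
    millimeters to iteration counts via the smallest voxel dimension is a
    simplification that assumes near-isotropic voxels."""
    iterations = max(1, round(distance_mm / min(voxel_size_mm)))
    return binary_erosion(mask, iterations=iterations)
```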
Various measures of intensity within the reference volume may be used. For example, in some embodiments, a robust average of voxel intensities inside a reference volume (e.g., as defined by a reference volume segmentation mask after erosion) may be determined as the average of values within the interquartile range of voxel intensities (IQRmean). Other metrics (e.g., peak, maximum, median, etc.) may also be determined. In some embodiments, the aortic reference value is determined as a robust average of SUVs from voxels inside the aortic mask, calculated as the average of values within the interquartile range (IQRmean).
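The robust average described above might be computed as follows (a sketch; percentile conventions may differ in practice):

```python
import numpy as np

def iqr_mean(values: np.ndarray) -> float:
    """Robust average (IQRmean): the mean of the values lying within the
    interquartile range, i.e., between the 25th and 75th percentiles."""
    q1, q3 = np.percentile(values, [25, 75])
    return float(values[(values >= q1) & (values <= q3)].mean())
```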
In some embodiments, a subset of voxels within the reference volume is selected so as to avoid effects from portions of the reference tissue region that may have abnormally low radiopharmaceutical uptake. Although the automated segmentation techniques described and referenced herein may provide accurate contours (e.g., identification) of image regions corresponding to particular tissue regions, there are often regions of abnormally low uptake in the liver that should be excluded from reference value calculations. For example, liver reference values (e.g., liver SUV values) may be calculated so as to avoid effects from regions in the liver that have very low tracer (radiopharmaceutical) activity, which may occur, for example, due to tumors without tracer uptake. In some embodiments, to account for abnormally low uptake in the reference tissue region, a histogram of intensities of liver voxels (e.g., voxels within an identified liver reference volume) is analyzed, and intensities forming a second, lower-intensity histogram peak are removed (e.g., excluded), so that only intensities associated with the higher-intensity peak contribute to the liver reference value.
For example, for the liver, the reference SUV may be calculated as the average SUV of the principal component (also referred to as a "mode", e.g., as in "principal mode") of a two-component Gaussian mixture model fitted to the histogram of SUVs of voxels within the liver reference volume (e.g., as identified by the liver segmentation mask, e.g., following the erosion procedure described above). In certain embodiments, if the secondary component has an average SUV greater than that of the primary component and a weight of at least 0.33, the fit is deemed erroneous and no liver reference value is determined. In some embodiments, if the secondary component has an average SUV greater than that of the primary component (but a weight below 0.33), the liver reference mask remains intact. Otherwise, a separation SUV threshold is calculated. In certain embodiments, the separation threshold is defined such that the probability that the primary component takes an SUV at or above the threshold is the same as the probability that the secondary component takes an SUV at or below the separation threshold. The reference liver mask is then refined by removing voxels having SUVs less than the separation threshold. The liver reference value may then be determined as a measure of the intensities (e.g., SUVs) of the voxels identified by the refined liver reference mask, e.g., as described herein with respect to the aortic reference. Fig. 2A illustrates an example liver reference calculation, showing a histogram of liver SUV values with the Gaussian mixture components (major component 244 and minor component 246) shown in red and the separation threshold 242 marked in green.
Fig. 2B shows the resulting portion of the liver volume used to calculate the liver reference value, in which voxels corresponding to the lower-value peak are excluded from the reference value calculation. Contours 252a and 252b of the refined liver volume mask are shown on each image in fig. 2B, with voxels corresponding to the lower-value peak (e.g., having intensities below the separation threshold 242) excluded. As shown in the figure, lower-intensity regions toward the bottom of the liver, as well as regions near the liver edges, have been excluded.
Fig. 2C shows an example process 200 in which a multi-component mixture model is used to avoid effects from regions with low tracer uptake, as described herein with respect to the liver reference volume calculation. The process shown in fig. 2C and described herein with respect to the liver may also be applied analogously to calculate intensity metrics for regions of interest of other organs and tissues, such as the aorta (e.g., an aortic portion, such as a thoracic or abdominal aortic portion), the parotid gland, or the gluteal muscles. As shown in fig. 2C, in a first step a 3D functional image 202 is received, and a reference volume corresponding to a particular reference tissue region (e.g., liver, aorta, parotid) is identified 208 therein. A multi-component mixture model 210 is then fitted to the distribution of intensities (e.g., an intensity histogram) of voxels within the reference volume, and the dominant (major) mode 212 of the mixture model is identified. A measure of intensity associated with the major mode (e.g., excluding contributions from intensities associated with other, minor modes) is determined 214 and used as the reference intensity value for the identified reference volume. In certain embodiments, the measure of intensity associated with the major mode is determined by identifying a separation threshold, such that intensities above the separation threshold are determined to be associated with the major mode and intensities below the separation threshold are determined to be associated with a minor mode, as described herein. Voxels with intensities above the separation threshold are used to determine the reference intensity value, while voxels with intensities below the separation threshold are excluded from the reference intensity value calculation.
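A sketch of this procedure using a two-component Gaussian mixture is shown below. The choice of the primary component by mixture weight, the root-finding bracket, and the use of normal tail probabilities for the separation criterion are assumptions made for illustration:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def liver_reference(suvs: np.ndarray, weight_limit: float = 0.33) -> float:
    """Fit a two-component Gaussian mixture to the liver SUV distribution,
    derive a separation threshold, and return a robust mean of the
    major-mode intensities. Error handling is simplified."""
    gmm = GaussianMixture(n_components=2, random_state=0).fit(suvs.reshape(-1, 1))
    major = int(np.argmax(gmm.weights_))   # primary component (chosen by weight)
    minor = 1 - major                      # secondary component
    mu = gmm.means_.ravel()
    sd = np.sqrt(gmm.covariances_.ravel())
    if mu[minor] > mu[major]:
        if gmm.weights_[minor] >= weight_limit:
            raise ValueError("suspect fit: no liver reference value determined")
        kept = suvs                        # no low-uptake mode to remove
    else:
        # Separation threshold t such that P(X >= t | major) == P(X <= t | minor)
        f = lambda t: norm.sf(t, mu[major], sd[major]) - norm.cdf(t, mu[minor], sd[minor])
        t = brentq(f, mu[minor], mu[major])
        kept = suvs[suvs >= t]             # refine the mask: drop low-uptake voxels
    q1, q3 = np.percentile(kept, [25, 75])
    return float(kept[(kept >= q1) & (kept <= q3)].mean())
```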
In certain embodiments, hotspots are detected 216 and the reference intensity values determined in this manner may be used to determine lesion index values 218 for the detected hotspots, for example via methods described in PCT/US2019/01486, filed January 7, 2019, and PCT/EP2020/050132, filed January 6, 2020, the contents of each of which are hereby incorporated by reference in their entirety.
Suppressing intensity bleed associated with normal uptake in high-uptake organs
In some embodiments, the intensities of voxels of the functional image are adjusted so as to suppress/correct intensity bleed associated with certain organs in which high uptake normally occurs. Such methods may be used, for example, for organs such as the kidneys, liver, and urinary bladder. In certain embodiments, correcting intensity bleed associated with multiple organs is performed in a stepwise manner, one organ at a time. For example, in certain embodiments, kidney uptake is suppressed first, followed by liver uptake, followed by urinary bladder uptake. Thus, the input to liver suppression is an image in which kidney uptake has already been corrected (e.g., and the input to bladder suppression is an image in which kidney and liver uptake have been corrected).
Fig. 3 shows an exemplary process 300 for correcting intensity bleed from a high-uptake tissue region. As shown in fig. 3, a 3D functional image is received 304 and a high intensity volume corresponding to the high-uptake tissue region is identified 306. In another step, a suppression volume outside the high intensity volume is identified 308. In certain embodiments, the suppression volume may be determined as the volume of a region lying outside the high intensity volume but within a predetermined distance from it, as described herein. In another step, a background image is determined 310, for example by assigning to voxels within the high intensity volume intensities determined, e.g., via interpolation (e.g., using convolution), from intensities outside the high intensity volume (e.g., within the suppression volume). In another step, an estimated image is determined 312 by subtracting the background image from the 3D functional image (e.g., via voxel-by-voxel intensity subtraction). In another step, a suppression map is determined 314. As described herein, in certain embodiments the suppression map is determined using the estimated image by extrapolating intensity values of voxels within the high intensity volume to locations outside the high intensity volume. In some embodiments, the intensities are extrapolated only to locations within the suppression volume, and the intensities of voxels outside the suppression volume are set to 0. The suppression map is then used to adjust the intensities of the 3D functional image 316, for example by subtracting the suppression map from the 3D functional image (e.g., performing voxel-by-voxel intensity subtraction).
An exemplary procedure for suppressing/correcting intensity bleed from a particular organ (in some embodiments, the two kidneys are treated together as a single organ) in composite PET/CT images is as follows:
1. The projected CT organ mask segmentation is adjusted to the high intensity region of the PET image in order to handle PET/CT misalignment. If the PET-adjusted organ mask comprises fewer than 10 voxels, the organ is not suppressed.
2. A "background image" is calculated in which all high uptake within the decay distance from the PET-adjusted organ mask is replaced with interpolated background uptake. This is done using convolution with a Gaussian kernel.
3. The intensity that should be considered in estimating the suppression is calculated as the difference between the input PET image and the background image. This "estimated image" has high intensity inside the given organ and zero intensity at locations farther than the decay distance from the given organ.
4. The suppression map is estimated from the estimated image using an exponential model. The suppression map is non-zero only in regions within the decay distance of the PET-adjusted organ segmentation.
5. The suppression map is subtracted from the original PET image.
As described above, these five steps may be repeated in a sequential manner for each of a set of multiple organs.
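A condensed sketch of steps 2 through 5 for a single organ is shown below. The Gaussian smoothing width, the unit decay constant of the exponential model, and the normalized-convolution interpolation are illustrative choices, not parameters from the original description:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter

def suppress_organ_bleed(pet: np.ndarray, organ_mask: np.ndarray,
                         decay_distance_vox: float,
                         sigma_vox: float = 3.0) -> np.ndarray:
    """Sketch of steps 2-5 for one organ; organ_mask is a boolean mask
    assumed to be already adjusted to the PET image (step 1)."""
    outside = ~organ_mask
    # Step 2: background image - fill the organ with smoothly interpolated
    # background uptake (normalized Gaussian convolution of outside voxels).
    weights = gaussian_filter(outside.astype(float), sigma_vox)
    background = gaussian_filter(pet * outside, sigma_vox) / np.maximum(weights, 1e-6)
    background = np.where(organ_mask, background, pet)
    # Step 3: estimated image - organ uptake in excess of the background;
    # high inside the organ, (near) zero far away from it.
    estimation = np.clip(pet - background, 0.0, None)
    # Step 4: suppression map - extrapolate the organ's excess uptake
    # outward with an exponential decay, zero beyond the decay distance.
    dist = distance_transform_edt(outside)
    spread = gaussian_filter(estimation * organ_mask, sigma_vox)
    suppression = np.where(dist <= decay_distance_vox, spread * np.exp(-dist), 0.0)
    suppression[organ_mask] = estimation[organ_mask]
    # Step 5: subtract the suppression map from the original PET image.
    return pet - suppression
```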
Anatomical markers of detected lesions
In certain embodiments, detected hotspots are assigned anatomical markers (e.g., automatically) that identify the particular anatomical regions and/or groups of regions in which the lesions represented by the detected hotspots are determined to be located. For example, as shown in the example process 400 of fig. 4, a 3D functional image may be received 404 and used to automatically detect hotspots 406, e.g., via any of the methods described herein. Once the hotspots are detected, an anatomical classification for each hotspot may be automatically determined 408 and each hotspot marked with the determined anatomical classification. The automated anatomical labeling may be performed, for example, using the automatically determined locations of the detected hotspots along with anatomical information provided by, for example, a 3D segmentation map and/or an anatomical image identifying image regions corresponding to particular tissue regions. The hotspots and the anatomical marker for each hotspot may be stored and/or provided for further processing 410.
For example, detected hotspots may be automatically classified into one of five categories:
● T (prostate tumor)
● N (pelvic lymph nodes)
● Ma (non-pelvic lymph nodes)
● Mb (bone metastases)
● Mc (soft tissue metastases not located in the prostate or lymph nodes)
Table 1 below lists the tissue regions associated with each of the five categories. Hotspots corresponding to locations within each of the tissue regions associated with a particular category may be automatically assigned to that category accordingly.
TABLE 1: List of tissue regions corresponding to the five categories in the lesion anatomical labeling approach
[Table 1 is reproduced as an image in the original publication; its contents are not recoverable here.]
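Although the table itself is not reproduced here, the classification logic it supports can be sketched as a simple lookup from an anatomical region label to one of the five categories. The region name sets below are illustrative stand-ins for the table's contents:

```python
def classify_hotspot(region_label: str) -> str:
    """Assign one of the five categories (T, N, Ma, Mb, Mc) from an
    anatomical region label. The region sets are illustrative placeholders
    for the tissue regions listed in Table 1."""
    pelvic_lymph = {"internal iliac nodes", "external iliac nodes",
                    "obturator nodes", "presacral nodes"}
    nonpelvic_lymph = {"retroperitoneal nodes", "mediastinal nodes",
                       "supraclavicular nodes", "inguinal nodes"}
    if region_label == "prostate":
        return "T"   # prostate tumor
    if region_label in pelvic_lymph:
        return "N"   # pelvic lymph nodes
    if region_label in nonpelvic_lymph:
        return "Ma"  # non-pelvic lymph nodes
    if region_label.startswith("bone"):
        return "Mb"  # bone metastasis
    return "Mc"      # other soft tissue metastasis
```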
Graphical user interface, quality control and reporting
In some embodiments, the detected hotspots and associated information (e.g., calculated lesion index values and anatomical markers) are displayed within an interactive graphical user interface (GUI) for review by a medical professional (e.g., physician, radiologist, technician, etc.). A medical professional may thus use the GUI to view and confirm the accuracy of the detected hotspots and their corresponding index values and/or anatomical markers. In some embodiments, the GUI also allows the user to identify and segment (e.g., manually) additional hotspots within the medical image, allowing the medical professional to identify additional potential lesions that he/she believes the automated detection process may have missed. Once identified, lesion index values and/or anatomical markers may also be determined for these manually identified and segmented lesions. For example, as indicated in fig. 5B, the user may view the location determined for each hotspot, as well as anatomical markers such as (e.g., automatically determined) miTNM classifications. Such a miTNM classification scheme is described in further detail, for example, in Eiber et al., "Prostate Cancer Molecular Imaging Standardized Evaluation (PROMISE): Proposed miTNM Classification for the Interpretation of PSMA-Ligand PET/CT", J. Nucl. Med. 59, pages 469-78 (2018), the contents of which are hereby incorporated by reference in their entirety. Once users are satisfied with the detected hotspots and the collection of information calculated therefrom, they can confirm them and generate a final, signed report that can be viewed and used to discuss results and diagnoses with patients and to evaluate prognosis and treatment options.
For example, as shown in fig. 5A, in an example process 500 for interactive hotspot viewing and detection, a 3D functional image is received 504 and hotspots are automatically detected 506, e.g., using any of the automated detection methods described herein. The set of automatically detected hotspots is graphically represented and presented 508 within the interactive GUI for review by the user. The user may select at least a portion (e.g., up to all) of the automatically detected hotspots for inclusion in a final hotspot set 510, which may then be used for further calculations 512, e.g., to determine risk index values for the patient.
Fig. 5B shows an exemplary workflow 520 for user review of detected lesions and lesion index values for quality control and reporting. The exemplary workflow allows for user review of segmented lesions as well as of the liver and aorta segmentations used to calculate lesion index values, as described herein. For example, in a first step, the user reviews the quality 522 of the images (e.g., CT images) and the accuracy 524 of the automated segmentation used to obtain the liver and blood pool (e.g., aorta) reference values. As shown in figs. 6A and 6B, the GUI allows the user to evaluate the images with the segmentations superimposed, to ensure that the automated segmentation of the liver (602, purple in fig. 6A) lies within healthy liver tissue and that the automated segmentation of the blood pool (aortic portion 604, shown in salmon in fig. 6B) lies within the aorta and left ventricle.
At another step 526, the user validates the automatically detected hotspots and/or identifies additional hotspots, for example to establish a final set of hotspots corresponding to lesions for inclusion in the generated report. As shown in fig. 6C, the user may select an automatically identified hotspot (e.g., displayed as an overlay and/or annotated region on a PET and/or CT image) by hovering the mouse over the graphical representation of the hotspot displayed within the GUI. To facilitate hotspot selection, the hotspot under the cursor may be indicated to the user via a color change (e.g., turning green). The user may then click on the hotspot to select it, which may be visually confirmed to the user via another color change. For example, as shown in fig. 6C, after selection the hotspot turns pink. After user selection, quantitatively determined values (e.g., lesion index and/or anatomical markers) may be displayed to allow the user to verify the automatically determined values 528.
In some embodiments, the GUI allows the user both to select hotspots from a set of (automatically) pre-identified hotspots to confirm that they indeed represent lesions 526a and to identify additional hotspots 526b corresponding to lesions that were not automatically detected.
As shown in figs. 6D and 6E, the user may use a GUI tool to draw on slices of an image (e.g., a PET image and/or a CT image; e.g., a PET image overlaid on a CT image) to annotate a region corresponding to a new, manually identified lesion. Quantitative information (e.g., lesion index and/or anatomical markers) may be automatically determined for a manually identified lesion, or may be entered manually by the user.
At another step, for example once all lesions have been selected by the user and/or manually identified, the GUI displays a quality control checklist for the user to review 530, as shown in fig. 7. Once the user has reviewed and completed the checklist, they can click "setup report" to sign and generate the final report 532. An example of a generated report is shown in fig. 8.
C. Exemplary machine learning network architecture for lesion segmentation
i. Machine learning module input and architecture
Turning to fig. 9, which shows an example hotspot detection and segmentation process 900, in some embodiments hotspot detection and/or segmentation is performed by a machine learning module 908 that receives as input a functional image 902, an anatomical image 904, and a segmentation map 906, the segmentation map 906 providing, for example, segmentation of various tissue regions (e.g., soft tissue and bone, and various organs as described herein).
The functional image 902 may be a PET image. As described herein, voxel intensities of the functional image 902 may be scaled to represent SUV values. In certain embodiments, other functional images as described herein may also be used. The anatomical image 904 may be a CT image. In some embodiments, the voxel intensities of the CT image 904 are scaled to represent Hounsfield units. In certain embodiments, other anatomical images as described herein may be used.
In some embodiments, the machine learning module 908 implements a machine learning algorithm using a U-net architecture. In some embodiments, the machine learning module 908 implements a machine learning algorithm using a feature pyramid network (FPN) architecture. In some embodiments, various other machine learning architectures may be used to detect and/or segment lesions. In some embodiments, a machine learning module as described herein performs semantic segmentation. In some embodiments, a machine learning module as described herein performs instance segmentation, e.g., to distinguish one lesion from another.
In some embodiments, the three-dimensional segmentation map 906 received as input by the machine learning module identifies various volumes (e.g., via a plurality of 3D segmentation masks) in the received 3D anatomical and/or functional images as corresponding to particular tissue regions of interest, for example certain organs (e.g., prostate, liver, aorta, bladder, various other organs described herein, etc.) and/or bones. Additionally or alternatively, the machine learning module may receive a 3D segmentation map 906 identifying groups of tissue regions. For example, in some embodiments, a 3D segmentation map identifying soft tissue, bone, and background regions may be used. In some embodiments, the 3D segmentation map may identify a group of high-uptake organs in which high levels of radiopharmaceutical uptake occur. For example, a high-uptake organ group may include the liver, spleen, kidneys, and urinary bladder. In some embodiments, the 3D segmentation map identifies a high-uptake organ group along with one or more other organs, such as the aorta (e.g., a low-uptake soft tissue organ). Other groups of tissue regions may also be used.
The functional image, anatomical image, and segmentation map inputs to the machine learning module 908 may have various sizes and dimensions. For example, in certain embodiments, each of the functional image, anatomical image, and segmentation map is a patch of a three-dimensional image (e.g., represented by a three-dimensional matrix). In some embodiments, each of the patches has the same size, e.g., each input is a [32x32x32] or [64x64x64] patch of voxels.
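For illustration, same-sized patches of the three inputs might be stacked channel-wise before being passed to the module; the channel ordering and the fixed corner-based cropping below are assumptions:

```python
import numpy as np

def make_input_patch(pet: np.ndarray, ct: np.ndarray, seg: np.ndarray,
                     corner: tuple, size: int = 32) -> np.ndarray:
    """Crop same-sized 3D patches from the PET image (SUV), CT image
    (Hounsfield units), and segmentation map, and stack them as channels."""
    x, y, z = corner
    window = np.s_[x:x + size, y:y + size, z:z + size]
    # Resulting shape: (3, size, size, size)
    return np.stack([pet[window], ct[window], seg[window]], axis=0)
```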
The machine learning module 908 segments the hotspots and generates a 3D hotspot graph 910 identifying one or more hotspot volumes. For example, the 3D hotspot graph 910 may include one or more masks having the same size as one or more of the functional image, anatomical image, or segmentation map inputs and identifying one or more hotspot volumes. In this way, the 3D hotspot map 910 may be used to identify volumes within the functional image, anatomical image, or segmentation map that correspond to hotspots and thus to physical lesions.
In some embodiments, the machine learning module 908 segments hotspot volumes so as to distinguish background (i.e., non-hotspot) regions from hotspot volumes. For example, the machine learning module 908 may be a binary classifier that classifies voxels as background or as belonging to a single hotspot category. The machine learning module 908 may thus generate as output a class-agnostic (e.g., or 'single-class') 3D hotspot map that identifies hotspot volumes but does not distinguish between the different anatomical locations and/or lesion types a particular hotspot volume may represent, e.g., bone metastasis, lymph node, localized prostate lesion. In some embodiments, the machine learning module 908 segments hotspot volumes and also classifies the hotspots according to a plurality of hotspot categories, each hotspot category representing a particular anatomical location and/or type of lesion represented by the hotspot. In this way, the machine learning module 908 may directly generate a multi-category 3D hotspot map that identifies one or more hotspot volumes and labels each hotspot volume as belonging to a particular one of the plurality of hotspot categories. For example, detected hotspots may be classified as bone metastases, lymph nodes, or prostate lesions. In some embodiments, other soft tissue classifications may be included.
Such classification may be performed in addition to, or instead of, classifying the hotspots according to their likelihood of representing a true lesion, e.g., as described herein in section B.ii.
ii. Lesion classification post-processing and/or output
Turning to figs. 10A and 10B, in some embodiments, after detection and/or segmentation of hotspots in an image by one or more machine learning modules, a post-processing step 1000 is performed to label the hotspots as belonging to particular hotspot categories (since hotspots occurring in, e.g., a functional image are understood to represent physical lesions, the terms "lesion" and "hotspot" are used interchangeably in figs. 9-12). For example, detected hotspots may be classified as bone metastases, lymph nodes, or prostate lesions. In some embodiments, the labeling scheme of Table 1 may be used. In some embodiments, this labeling may be performed by a machine learning module, which may be the same machine learning module used to perform segmentation and/or detection of hotspots, or may be a separate module that receives as input a list of detected hotspots (e.g., identifying their locations) and/or a 3D hotspot map (e.g., delineating hotspot boundaries as determined via segmentation), either alone or in conjunction with other inputs (e.g., 3D functional images, 3D anatomical images, and/or segmentation maps, as described herein). As shown in fig. 10B, in some embodiments the segmentation map 906 used as input to the machine learning module 908 that performs lesion detection and/or segmentation may also be used to classify lesions, for example according to anatomical location. In some embodiments, other (e.g., different) segmentation maps may be used (e.g., not necessarily the same segmentation map fed as input into the machine learning module).
iii. Parallel organ-specific lesion detection modules
Turning to figs. 11A and 11B, in some embodiments the one or more machine learning modules include one or more organ-specific modules that perform detection and/or segmentation of hotspots located in corresponding organs. For example, as shown in the example processes 1100 and 1150 of figs. 11A and 11B, respectively, a prostate module 1108a may be used to perform detection and/or segmentation in a prostate region. In some embodiments, one or more organ-specific modules are used in conjunction with a whole-body module 1108b that detects and/or segments hotspots throughout the subject's entire body. In some embodiments, results 1110a from the one or more organ-specific modules are combined with results 1110b from the whole-body module to form a final hotspot list and/or hotspot map 1112. In some embodiments, merging may include combining results (e.g., hotspot lists and/or 3D hotspot maps) 1110a and 1110b with other outputs (e.g., a 3D hotspot map 1114 created by segmenting hotspots using other approaches, which may include other machine learning modules and/or techniques, as well as other segmentation methods). In some embodiments, additional segmentation methods may be performed after hotspots have been detected and/or segmented by the one or more machine learning modules. This additional segmentation step may use as input the hotspot segmentation and/or detection results obtained, for example, from the one or more machine learning modules. In certain embodiments, as shown in fig. 11B, the analytical segmentation method 1122 described herein (e.g., in section C.iv below) may be used in conjunction with an organ-specific lesion detection module. Analytical segmentation 1122 uses the results 1110b and 1110a from the upstream machine learning modules 1108b and 1108a, along with the PET image 1102, to segment hotspots using analytical segmentation techniques (e.g., which do not utilize machine learning) and to generate an analytically segmented 3D hotspot map 1124.
iv. Analytical segmentation
Turning to fig. 12, in some embodiments machine learning techniques may be used to perform hotspot detection and/or initial segmentation and, as a subsequent step, an analytical model is used to perform, for example, the final segmentation of each hotspot.
As used herein, the terms "analytical model" and "analytical segmentation" refer to segmentation methods based on (e.g., using) predetermined rules and/or functions (e.g., mathematical functions). For example, in some embodiments an analytical segmentation method may segment hotspots using one or more predetermined rules (e.g., an ordered sequence of image processing steps, application of one or more mathematical functions to an image, conditional logical branching, and the like). Analytical segmentation methods may include, but are not limited to, threshold-based methods (e.g., including image thresholding steps), level set methods (e.g., fast marching methods), graph cut methods (e.g., watershed segmentation), or active contour models. In some embodiments, an analytical segmentation method does not rely on a training step. In contrast, in certain embodiments, a machine learning module segments hotspots using a model that has been automatically trained using a training dataset (e.g., comprising examples of images with hotspots manually segmented by a radiologist or other medical practitioner) and is intended to mimic the segmentation behavior represented in the training set.
Using an analytical segmentation model to determine the final segmentation may be advantageous because, for example, in some cases an analytical model may be easier to understand and debug than a machine learning approach. In some embodiments, such analytical segmentation methods operate on the 3D functional image along with the lesion segmentation produced by the machine learning techniques.
For example, as shown in fig. 12, in an example process 1200 for hotspot segmentation using an analytical model, a machine learning module 1208 receives as input a PET image 1202, a CT image 1204, and a segmentation map 1206. The machine learning module 1208 performs segmentation to create a 3D hotspot map 1210 identifying one or more hotspot volumes. An analytical segmentation model 1212 uses the machine-learning-generated 3D hotspot map 1210, along with the PET image 1202, to perform segmentation and create a 3D hotspot map 1214 identifying analytically segmented hotspot volumes.
v. Exemplary hotspot segmentations
Figs. 13A and 13B show examples of machine learning module architectures for hotspot detection and/or segmentation. Fig. 13A shows an example U-net architecture (the "n=" values in brackets in fig. 13A identify the number of filters in each layer) and fig. 13B shows an example FPN architecture. Fig. 13C shows another example FPN architecture.
Figs. 14A-C show example hotspot segmentation results obtained using a machine learning module implementing a U-net architecture. Crosshairs and bright spots in the images indicate segmented hotspots 1402 (representing potential lesions). Figs. 15A and 15B show example hotspot segmentation results obtained using a machine learning module implementing an FPN. In particular, fig. 15A shows an input PET image superimposed on a CT image. Fig. 15B shows an exemplary hotspot map, determined using the machine learning module implementing the FPN, overlaid on the CT image. The overlaid hotspot map shows a hotspot volume 1502 near the subject's spine in dark red.
D. Example graphical user interface
In certain embodiments, the lesion detection, segmentation, classification, and related techniques described herein may include a GUI that facilitates user interaction (e.g., with a software program implementing the various methods described herein) and/or review of results. For example, in certain embodiments, GUI panes and windows allow users, among other things, to upload and manage data to be analyzed, visualize images and results produced via the methods described herein, and generate reports summarizing findings. Screenshots of certain example GUI views are shown in figs. 16A-16E.
For example, fig. 16A shows an exemplary GUI window providing for upload and review of studies [e.g., image data collected during the same examination and/or scan (e.g., according to the Digital Imaging and Communications in Medicine (DICOM) standard), such as PET and CT images collected via a PET/CT scan] by a user. In certain embodiments, an uploaded study is automatically added to a patient list listing identifiers of subjects/patients for whom one or more PET/CT images have been uploaded. For each item in the patient list shown in fig. 16A, the patient ID is shown together with the available PET/CT studies for that patient, along with corresponding reports. In some embodiments, a team concept allows for establishing a group (e.g., team) of multiple users who work on, and are provided access to, a particular subset of the uploaded data. In some embodiments, a patient list may be associated with, and automatically shared with, a particular team in order to provide each member of the team access to the patient list.
Fig. 16B shows an example GUI viewer 1610 that allows a user to view medical image data. In some embodiments, the viewer is a multimodal viewer allowing the user to view multiple imaging modalities, as well as various formats and/or combinations thereof. For example, the viewer shown in fig. 16B allows a user to view PET and/or CT images, as well as fusions (e.g., overlays) thereof. In some embodiments, the viewer allows the user to view 3D medical image data in various formats. For example, the viewer may allow the user to select and view various 2D slices along a particular (e.g., selected) cross-sectional plane of a 3D image. In some embodiments, the viewer allows the user to view a maximum intensity projection (MIP) of the 3D image data. Other ways of visualizing the 3D image data may also be provided. In this example, as shown in fig. 16B, a control panel graphical widget 1612 is provided on the left-hand side of the viewer and allows the user to view available study information (e.g., date, various patient data, imaging parameters, etc.).
Turning to fig. 16C, in some embodiments the GUI viewer includes a lesion selection tool that allows the user to select lesion volumes, i.e., volumes of interest (VOIs) that the user identifies and selects as image regions likely to represent true underlying physical lesions. In certain embodiments, lesion volumes are selected from a set of hotspot volumes that are automatically identified and segmented, for example via any of the methods described herein. Selected lesion volumes may be saved for inclusion in a final set of identified lesion volumes that may be used for reporting and/or further quantitative analysis. In certain embodiments, for example as shown in fig. 16C, upon user selection of a particular lesion volume, various features/quantitative measures of that lesion [e.g., maximum intensity, peak intensity, average intensity, volume, lesion index (LI), anatomical classification (e.g., miTNM category, location, etc.)] are displayed 1614.
Turning to fig. 16D, additionally or alternatively, a GUI viewer may allow a user to view the results of automated segmentation performed in accordance with various embodiments described herein. Segmentation may be performed via automated analysis of CT images as described herein, and may include identification and segmentation of 3D volumes representing the liver and/or aorta. The segmentation results may be superimposed on a representation of the medical image data, for example on a CT and/or PET image representation.
Fig. 16E shows an example report 1620 generated via analysis of medical image data as described herein. In this example, report 1620 summarizes the results of the reviewed study and provides features and quantitative metrics characterizing the selected (e.g., by the user) lesion volume 1622. For example, as shown in fig. 16E, for each selected lesion volume, the report includes a lesion ID, a lesion type (e.g., miTNM classification), a lesion location, an SUV maximum, an SUV peak, an SUV average, a volume, and a lesion index value.
E. Hot spot segmentation and classification using multiple machine learning modules
In some embodiments, multiple machine learning modules are used in parallel to segment and classify hotspots. Fig. 17A is a block flow diagram of an example process 1700 for segmenting and classifying hotspots. The example process 1700 performs image segmentation on a 3D PET/CT image to segment hotspot volumes and classifies each segmented hotspot volume as a lymph, bone, or prostate hotspot according to its (automatically) determined anatomical location.
The example process 1700 receives as input, and operates on, a 3D PET image 1702 and a 3D CT image 1704. The CT image 1704 is input to a first, organ segmentation machine learning module 1706, which performs segmentation to identify 3D volumes in the CT image representing particular tissue regions and/or organs of interest, or (e.g., related) anatomical groups of tissues and/or organs. The organ segmentation machine learning module 1706 is thus used to generate a 3D segmentation map 1708 that identifies particular tissue regions and/or organs of interest, or anatomical groups thereof, within the CT image. For example, in certain embodiments, the segmentation map 1708 identifies two volumes of interest corresponding to two anatomical groups of organs: one volume of interest corresponding to an anatomical group of high-uptake soft tissue organs comprising the liver, spleen, kidneys, and urinary bladder, and a second volume of interest corresponding to the aorta (e.g., thoracic and abdominal portions) as a low-uptake soft tissue organ. In some embodiments, the organ segmentation machine learning module 1706 generates as output an initial segmentation map identifying various individual organs (including those that make up the anatomical groups of segmentation map 1708 and, in some embodiments, other individual organs), and the segmentation map 1708 is built from the initial segmentation map (e.g., by assigning the same label to the volumes of the individual organs belonging to an anatomical group). Thus, in some embodiments, the 3D segmentation map 1708 uses three labels that identify and distinguish between: (i) voxels belonging to high-uptake soft tissue organs; (ii) voxels belonging to the low-uptake soft tissue organ, i.e., the aorta; and (iii) other regions, as background.
In the example process 1700 shown in fig. 17A, the organ segmentation machine learning module 1706 implements a U-net architecture. Other architectures (e.g., FPN) may be used. The PET image 1702, CT image 1704, and 3D segmentation map 1708 are used as inputs to two parallel hotspot segmentation modules.
In some embodiments, the example process 1700 uses two machine learning modules in parallel to segment and classify hotspots differently, and then merges their results. For example, it was found that a machine learning module performs more accurate segmentation when it identifies only a single hotspot category (i.e., whether or not an image region is a hotspot) rather than distinguishing among the multiple desired hotspot categories (lymph, bone, prostate). Accordingly, the process 1700 utilizes a first, single-category hotspot segmentation module 1712 to perform accurate segmentation and a second, multi-category hotspot segmentation module 1714 to classify hotspots into the desired three categories.
In particular, the first, single-category hotspot segmentation module 1712 performs segmentation to produce a first, single-category 3D hotspot map 1716 that identifies 3D volumes representing lesions (while other image regions are identified as background). The single-category hotspot segmentation module 1712 thus performs binary classification, labeling image voxels as belonging to one of two categories: background or a single hotspot category. The second, multi-category hotspot segmentation module 1714 segments hotspots and, rather than using a single hotspot category, assigns one of a plurality of hotspot classification labels to each segmented hotspot volume. In particular, the multi-category hotspot segmentation module 1714 classifies segmented hotspot volumes as lymph, bone, or prostate hotspots. The multi-category hotspot segmentation module thus generates a second, multi-category 3D hotspot map 1718 that identifies 3D volumes representing hotspots and labels them as lymph, bone, or prostate (while other image regions are identified as background). In process 1700, the single-category and multi-category hotspot segmentation modules each implement an FPN architecture. Other machine learning architectures (e.g., U-net) may be used.
In some embodiments, to generate a final 3D hotspot map of segmented and classified hotspots 1724, the single-category hotspot map 1716 and the multi-category hotspot map 1718 are merged 1722. In particular, each hotspot volume of the single-category hotspot map 1716 is compared with the hotspot volumes of the multi-category hotspot map 1718 to identify matching hotspot volumes that represent the same physical location, and thus the same (potential) physical lesion. Matching hotspot volumes may be identified, for example, based on various measures of spatial overlap (e.g., volume overlap percentage), proximity (e.g., centers of gravity within a threshold distance), and the like. Each hotspot volume of the single-category hotspot map 1716 for which a matching hotspot volume from the multi-category hotspot map 1718 is identified is assigned the label of that matching hotspot volume: lymph, bone, or prostate. In this way, hotspots are accurately segmented via the single-category hotspot segmentation module 1712 and then labeled using the results of the multi-category hotspot segmentation module 1714.
Turning to fig. 17B, in some cases, for a particular hotspot volume of the single-category hotspot map 1716, no matching hotspot volume from the multi-category hotspot map 1718 is found. Such hotspot volumes are labeled based on a comparison with a 3D segmentation map 1738, which may differ from segmentation map 1708 and identifies 3D volumes corresponding to lymph and bone regions.
In some embodiments, the single-category hotspot segmentation module 1712 may not segment hotspots in the prostate region, such that the single-category hotspot map does not include any hotspots in the prostate region. Hotspot volumes from the multi-category hotspot map 1718 labeled as prostate hotspots may then be included in the merged hotspot map 1724. In some embodiments, the single-category hotspot segmentation module 1712 may segment some hotspots in the prostate region, but additional hotspots (e.g., not identified in the single-category hotspot map 1716) may be segmented by the multi-category hotspot segmentation module 1714 and identified by it as prostate hotspots. These additional hotspot volumes present in the multi-category hotspot map 1718 may be included in the merged hotspot map 1724.
Thus, in some embodiments, information from the CT image 1704, the PET image 1702, the 3D organ segmentation map 1738, the single-category hotspot map 1716, and the multi-category hotspot map 1718 is used in a hotspot merging step 1722 to generate a merged 3D hotspot map of segmented and classified hotspot volumes 1724.
In one example merging method, two hotspot volumes are determined to overlap when any two voxels from the hotspot volumes of the multi-category and single-category hotspot maps correspond to/represent the same physical location. If a particular hotspot volume of the single-category hotspot map overlaps exactly one hotspot volume of the multi-category hotspot map (e.g., only one matching hotspot volume from the multi-category hotspot map is identified), the particular hotspot volume of the single-category hotspot map is labeled according to the category of that overlapping hotspot volume. If a particular hotspot volume overlaps two or more hotspot volumes of the multi-category hotspot map (each identified as belonging to a different hotspot category), each voxel of the single-category hotspot volume is assigned the same category as the closest voxel of the overlapping hotspot volumes from the multi-category hotspot map. If a particular hotspot volume of the single-category hotspot map does not overlap any hotspot volume of the multi-category hotspot map, the particular hotspot volume is assigned a hotspot category based on a comparison with a 3D segmentation map identifying soft tissue regions (e.g., organs) and/or bone. For example, in some embodiments, a particular hotspot volume may be labeled as belonging to the bone category if any of the following statements is true:
(i) If more than 20% of the hotspot volume overlaps the rib segmentation;
(ii) If the hotspot volume does not overlap any label in the organ segmentation and the mean CT value within the hotspot mask is greater than 100 Hounsfield units;
(iii) If the location of the hotspot volume's SUVmax overlaps a bone label in the organ segmentation; or
(iv) If more than 50% of the hotspot volume overlaps bone labels in the organ segmentation.
In some embodiments, a particular hotspot volume may be labeled as lymph if 50% or more of the hotspot volume does not overlap bone labels in the organ segmentation.
In some embodiments, once all hotspot volumes of the single-category hotspot map have been classified as lymph, bone, or prostate, any remaining prostate hotspots from the multi-category model are superimposed onto the single-category hotspot map and included in the merged hotspot map.
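A sketch of the fallback labeling rules (i)-(iv) for an unmatched single-category hotspot volume follows; the integer label encodings, the treatment of ribs as a separate label, and the function signature are assumptions:

```python
import numpy as np

def label_unmatched_hotspot(hotspot_mask: np.ndarray, organ_seg: np.ndarray,
                            ct: np.ndarray, suv_max_index: tuple,
                            rib_label: int, bone_label: int) -> str:
    """Label a hotspot volume that matched no multi-category hotspot, using
    rules (i)-(iv) above; organ_seg is assumed to be an integer 3D
    segmentation map with 0 as background, and hotspot_mask a boolean mask."""
    labels_in_hotspot = organ_seg[hotspot_mask]
    rib_fraction = np.mean(labels_in_hotspot == rib_label)
    bone_fraction = np.mean(labels_in_hotspot == bone_label)
    if rib_fraction > 0.20:                                  # rule (i)
        return "bone"
    if not (labels_in_hotspot != 0).any() and \
            ct[hotspot_mask].mean() > 100:                   # rule (ii): mean CT > 100 HU
        return "bone"
    if organ_seg[suv_max_index] == bone_label:               # rule (iii): SUVmax location
        return "bone"
    if bone_fraction > 0.50:                                 # rule (iv)
        return "bone"
    return "lymph"                                           # otherwise: lymph
```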
Fig. 17C shows an example computer program 1750 for implementing the hotspot segmentation and classification method, according to the embodiment described with reference to fig. 17A and 17B.
F. Analytical segmentation via adaptive thresholding methods
In certain embodiments, for example as described herein in section C.iv, the image analysis techniques described herein utilize an analytical segmentation step to refine a hotspot segmentation determined via a machine learning module as described herein. For example, in certain embodiments, a 3D hotspot map generated by a machine learning approach as described herein is used as an initial input to an analytical segmentation model that refines the segmentation and/or performs an entirely new segmentation.
In certain embodiments, the analytical segmentation model utilizes a thresholding algorithm whereby hotspots are segmented by comparing intensities of voxels in anatomical images (e.g., CT images, MR images) and/or functional images (e.g., SPECT images, PET images) (e.g., of composite anatomical and functional images, such as PET/CT or SPECT/CT images) with one or more threshold values.
Turning to fig. 18A, in some embodiments an adaptive thresholding method is used whereby, for a particular hotspot, the intensity within an initial hotspot volume determined for that hotspot (e.g., via a machine learning method as described herein) is compared with one or more reference values to determine a hotspot-specific threshold. That threshold is then used by the analytical segmentation model to segment the particular hotspot and determine its final hotspot volume.
Fig. 18A shows an example process 1800 for segmenting hotspots via an adaptive thresholding method. The process 1800 utilizes an initial 3D hotspot map 1802 identifying one or more 3D hotspot volumes, a PET image 1804, and a 3D organ segmentation map 1806. The initial 3D hotspot map 1802 may be determined automatically via the various machine learning methods described herein and/or based on user interaction with the GUI. The user may refine the set of automatically determined hotspot volumes, for example by selecting a subset for inclusion in the 3D hotspot map 1802. Additionally or alternatively, the user may determine 3D hotspot volumes manually, for example by drawing boundaries on an image within the GUI.
In certain embodiments, the 3D organ segmentation map identifies one or more reference volumes corresponding to particular reference tissue regions (e.g., aortic portions and/or liver). As described herein, for example in section B.iii, intensities of voxels within certain reference volumes may be used to calculate associated reference values against which intensities of identified and segmented hotspots may be compared (e.g., acting as a "yardstick"). For example, the liver volume may be used to calculate a liver reference value and an aortic portion to calculate an aortic or blood pool reference value. In process 1800, the intensities of the aortic portion are used to calculate 1808 a blood pool reference value 1810. The blood pool reference value 1810 is used in conjunction with the initial 3D hotspot map 1802 and the PET image 1804 to determine thresholds for performing threshold-based analytical segmentation of the hotspots in the initial 3D hotspot map 1802.
In particular, for a particular hotspot volume identified in the initial 3D hotspot map 1802 (which identifies a particular hotspot representing a physical lesion), the intensities of PET image 1804 voxels located within the particular hotspot volume are used to determine the hotspot intensity of the particular hotspot. In some embodiments, the hotspot intensity is the maximum of the intensities of voxels located within the particular hotspot volume. For example, for PET image intensities representing SUVs, a maximum SUV (SUVmax) within the particular hotspot volume is determined. Other metrics may also be used, such as a peak value (e.g., SUVpeak), average, median, or interquartile mean (IQRmean).
In certain embodiments, a hotspot-specific threshold for a particular hotspot is determined based on a comparison of the hotspot intensity with the blood pool reference value. In certain embodiments, the comparison between the hotspot intensity and the blood pool reference value is used to select one of a plurality of (e.g., predefined) threshold functions, and the selected threshold function is used to calculate the hotspot-specific threshold for the particular hotspot. In certain embodiments, the threshold function calculates the hotspot-specific threshold as a function of the hotspot intensity (e.g., maximum intensity) and/or the blood pool reference value. For example, the threshold function may calculate the hotspot-specific threshold as the product of (i) a scaling factor and (ii) the hotspot intensity (or other intensity measure) and/or the blood pool reference value. In some embodiments, the scaling factor is a constant. In some embodiments, the scaling factor is an interpolated value determined as a function of the intensity measure of the particular hotspot. In certain embodiments and/or for certain threshold functions, the scaling factor is a constant used to determine a plateau level corresponding to a maximum threshold, e.g., as described in further detail in section G herein.
For example, pseudo code for an exemplary method of selecting between (e.g., via conditional logic) and calculating various threshold functions is shown below:
[The pseudo code is reproduced as an image in the original publication.]
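Since the pseudo code survives only as an image, a reconstruction from the surrounding description (the 90%/50% anchors and saturation behavior of the "P9050-sat" variant discussed in section G) might look as follows; the exact breakpoints, the linear interpolation of the percentage, and the optional plateau parameter are assumptions:

```python
import numpy as np

def adaptive_threshold(suv_max: float, blood_pool: float,
                       plateau=None,
                       hi_pct: float = 0.90, lo_pct: float = 0.50) -> float:
    """Hotspot-specific threshold: the percentage of SUVmax used as the
    threshold falls from hi_pct at the blood pool reference value to lo_pct
    at twice the blood pool reference value; an optional plateau caps the
    threshold at a maximum level (the "-sat" variants)."""
    if suv_max <= blood_pool:
        pct = hi_pct                    # low-intensity hotspots
    elif suv_max >= 2.0 * blood_pool:
        pct = lo_pct                    # plateau region
    else:
        # Interpolate the percentage between the two anchor intensities.
        pct = float(np.interp(suv_max, [blood_pool, 2.0 * blood_pool],
                              [hi_pct, lo_pct]))
    threshold = pct * suv_max
    if plateau is not None:
        threshold = min(threshold, plateau)  # maximum threshold level
    return threshold
```

For example, with a blood pool reference of 1.5 (as in figs. 18B and 18C), a hotspot with SUVmax = 2.25 would be thresholded at about 70% of its SUVmax under these assumptions.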
Figs. 18B and 18C illustrate a specific example adaptive thresholding method implemented by the above pseudo code. Fig. 18B plots the threshold 1832 as a function of the hotspot intensity (SUVmax in this example) of a particular hotspot. Fig. 18C plots the threshold, expressed as a percentage of the particular hotspot's SUVmax, as a function of that SUVmax. The dashed lines in each graph indicate certain values relative to the blood pool reference (a blood pool SUV of 1.5 in the exemplary plots of figs. 18B and 18C); fig. 18C also indicates the 90% and 50% levels of SUVmax.
Turning to figs. 18D-F, the adaptive thresholding method as described herein addresses challenges and drawbacks associated with previous thresholding techniques that utilize fixed or relative thresholds. In particular, although in some embodiments thresholding based on the maximum standard uptake value (SUVmax) provides a transparent and reproducible manner of segmenting hotspot volumes for estimating parameters such as uptake volume and SUVmean, conventional fixed and relative thresholds perform poorly over the full dynamic range of lesion SUVmax. The fixed threshold method uses a single (e.g., user-defined) SUV value as the threshold for segmenting hotspots within an image. For example, the user may set the fixed threshold level to a value of 4.5. The relative threshold method segments hotspots using a local threshold for each hotspot, set to a specific, constant fraction or percentage of that hotspot's maximum SUV. For example, the user may set the relative threshold to 40%, such that each hotspot is segmented using a threshold calculated as 40% of its maximum SUV value. Both of these methods, conventional fixed and relative thresholding, have drawbacks. For example, it is difficult to define an appropriate fixed threshold for all patients. Conventional relative thresholding methods are also problematic because defining the threshold as a fixed fraction of a hotspot's maximum or peak intensity results in hotspots with lower overall intensities being segmented using lower thresholds. Thus, a low-intensity hotspot that may represent a smaller lesion with relatively low uptake is segmented using a low threshold, which may result in a larger identified hotspot volume than that of a higher-intensity hotspot that actually represents a physically larger lesion.
For example, figs. 18D and 18E illustrate segmentation of two hotspots using a relative threshold determined as 50% of the maximum hotspot intensity (e.g., 50% of SUVmax). Each plot shows intensity as a function of position along a line passing through the hotspot. Fig. 18D shows a graph 1840 illustrating the intensity variation of a high-intensity hotspot representing a large physical lesion 1848. The hotspot intensity 1842 peaks near the center of the hotspot, and the hotspot threshold 1844 is set to 50% of the maximum of the hotspot intensity 1842. Segmenting the hotspot using the hotspot threshold 1844 results in a segmented volume that approximately matches the size of the physical lesion (as shown, for example, by comparing the linear dimension 1846 with the illustrated lesion 1848). Fig. 18E shows a graph 1850 illustrating the intensity variation of a low-intensity hotspot representing a small physical lesion 1858. The hotspot intensity 1852 also peaks near the center of the hotspot, and the hotspot threshold 1854 is likewise set to 50% of the maximum hotspot intensity 1852. However, since the peak of the hotspot intensity 1852 is less sharp and lower than that of the high-intensity hotspot intensity 1842, setting the threshold relative to the maximum hotspot intensity results in a much lower absolute threshold. The threshold-based segmentation therefore produces a larger hotspot volume than that of the higher-intensity hotspot, although the represented physical lesion is smaller, as shown, for example, by comparing the linear dimension 1856 with the illustrated lesion 1858. The relative threshold may thus produce larger apparent hotspot volumes for smaller physical lesions. This is particularly problematic for assessing treatment response, since lower-intensity lesions will have lower thresholds, and lesions responding to treatment may therefore appear to increase in volume.
In certain embodiments, adaptive thresholding as described herein addresses these shortcomings by utilizing an adaptive threshold calculated as a percentage of the hotspot intensity, where the percentage (i) decreases as the hotspot intensity (e.g., SUVmax) increases and (ii) depends on a comparison between the hotspot intensity (e.g., SUVmax) and overall physiological uptake (e.g., as measured by a reference value, such as a blood pool reference value). Thus, unlike conventional relative thresholding methods, the particular fraction/percentage of hotspot intensity used in the adaptive thresholding methods described herein is itself a function of the hotspot intensity and, in some embodiments, also takes physiological uptake into account. For example, as shown in the illustrative plot 1860 of fig. 18F, using the variable, adaptive thresholding method as described herein, the threshold 1864 is set to a higher percentage of the peak hotspot intensity 1852. Doing so, as illustrated in fig. 18F, allows the threshold-based segmentation to identify a hotspot volume 1866 that more accurately reflects the true size of the lesion the hotspot represents.
In certain embodiments, thresholding is facilitated by first using a watershed algorithm to divide heterogeneous lesions into homogeneous subcomponents, ultimately excluding uptake from nearby intensity peaks. As described herein, adaptive thresholding may be applied both to manually pre-segmented lesions and to lesions detected automatically, for example by deep neural networks implemented via a machine learning module as described herein, to improve reproducibility and robustness and to increase interpretability.
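A sketch of the watershed pre-division step, assuming scikit-image and a simple local-maximum marker selection:

```python
import numpy as np
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_hotspot(pet: np.ndarray, hotspot_mask: np.ndarray) -> np.ndarray:
    """Divide a heterogeneous hotspot into homogeneous subcomponents by
    running a watershed on the inverted PET intensities, seeded at local
    intensity peaks, so that thresholding can then be applied per peak and
    uptake from nearby peaks excluded."""
    peaks = peak_local_max(pet * hotspot_mask,
                           labels=hotspot_mask.astype(int), min_distance=2)
    markers = np.zeros(pet.shape, dtype=int)
    for i, coords in enumerate(peaks, start=1):
        markers[tuple(coords)] = i
    return watershed(-pet, markers=markers, mask=hotspot_mask)
```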
G. Exemplary study comparing exemplary threshold functions and scaling factors for PyL-PET/CT imaging
This example describes a study performed to evaluate various parameters for use in an adaptive thresholding method as described herein, e.g., in section F, and to compare fixed and relative thresholds using manually annotated lesions as a reference.
The study of this example used 18F-DCFPyL PET/CT scans of 242 patients, in which hotspots corresponding to bone, lymph node, and prostate lesions were manually segmented by an experienced nuclear medicine reader. A total of 792 hotspot volumes were annotated, involving 167 patients. Two studies were performed to evaluate the thresholding algorithms. In the first study, the manually annotated hotspots were refined with different thresholding algorithms, and the degree to which size order was preserved was estimated, i.e., the degree to which a smaller hotspot volume remained smaller than an originally larger hotspot volume after refinement. In the second study, refinement by thresholding of suspicious hotspots automatically detected by the machine learning method according to various embodiments described herein was performed and compared to the manual annotations.
The PET image intensities in this example are scaled to represent standardized uptake values (SUV) and are referred to as uptake or uptake intensity in this section. The different thresholding algorithms compared are as follows: a fixed threshold at SUV = 2.5, a relative threshold at 50% of SUVmax, and variants of the adaptive threshold. The adaptive thresholds are defined by percentages of SUVmax (with and without a maximum threshold level). The plateau level is set higher than the normal uptake intensity in regions corresponding to healthy tissue. Two supporting survey studies were performed to select appropriate plateau levels: one studying normal uptake intensity in the aorta and one studying normal uptake intensity in the prostate. The thresholding methods are evaluated based on, among other things, how well they preserve size order compared to the annotations performed by the nuclear medicine reader. For example, if a nuclear medicine reader manually segments hotspots and the manually segmented hotspot volumes are ordered by size, preservation of size order refers to the extent to which the hotspot volumes generated by segmentation of the same hotspots using an automated thresholding method (e.g., one that does not involve user interaction) would be ordered in the same way by their size. Two implementations of the adaptive thresholding method achieved the best performance in terms of size-order preservation, based on the weighted rank correlation metric. Both of these adaptive thresholding methods use a threshold that starts at 90% of SUVmax for low intensity lesions at the blood pool reference value and reaches a plateau region at twice the blood pool reference value (e.g., 2x [aortic reference uptake]). The first method (referred to as "P9050-sat") reaches the plateau region at 50% of SUVmax, and the other method (referred to as "P9040-sat") reaches the plateau region at 40% of SUVmax.
It was also found that refining the automatically detected and segmented hotspots using thresholding changes the precision-recall tradeoff. While the original, automatically detected and segmented hotspots have high recall and low precision, refining the segmentation using the P9050-sat thresholding method yields a more balanced performance in terms of precision and recall.
The improved preservation of relative size indicates that assessment of treatment response will be improved/more accurate, because the algorithm better captures the size ordering endorsed by the nuclear medicine reader. The tradeoff between over-segmentation and under-segmentation can be decoupled from the detection step by introducing a separate thresholding method; i.e., in addition to the automated hotspot detection and segmentation performed using the machine learning methods as described herein, the analytical, adaptive segmentation methods as described herein are also used.
The example supporting studies described herein were used to determine the scaling factor used to calculate the plateau value corresponding to the maximum threshold. As described herein, in this example such scaling factors are determined based on intensities in normal, healthy tissue in various reference regions. For example, multiplying the blood pool reference, based on intensity in the aortic region, by a factor of 1.6 yields a level that is generally higher than 95% of the intensity values in the aorta but lower than typical normal uptake in the prostate. Thus, in some example threshold functions, higher values are used. In particular, a factor of 2 was determined in order to achieve a level that is also generally higher than most intensities in normal prostate tissue. The values were determined manually based on investigation of histograms and of image projections in the sagittal, coronal, and transverse planes of PET image voxels within the prostate volume, excluding any portions corresponding to tumor uptake. Example image slices are shown in fig. 18G, together with the corresponding histograms used to determine the scaling factors.
i. Introduction
Defining the lesion volume in PET/CT can be a subjective process, as lesions appear as hotspots in PET and generally do not have well-defined boundaries. Sometimes lesion volumes may be segmented based on their anatomical extent in the CT image; however, this approach ignores certain information about tracer uptake, as the complete uptake will not be covered. Furthermore, certain lesions may be visible in the functional PET image but not in the CT image. This section describes an exemplary study of thresholding methods designed to accurately identify hotspot volumes that reflect physiological uptake volumes (i.e., volumes where uptake is above background). To perform segmentation and identify hotspot volumes in this way, a threshold is chosen so as to balance the risk of including background against the risk of not segmenting a hotspot volume large enough to reflect the full uptake volume.
This risk tradeoff is typically resolved by selecting a threshold of 50% or 40% of the SUVmax value determined for the expected hotspot volume. The rationale for this approach is that for high uptake lesions (e.g., corresponding to high intensity hotspots in a PET image), the threshold can be set higher than for low uptake lesions (e.g., corresponding to lower intensity hotspots) while maintaining the same risk of not segmenting a hotspot volume that represents the full uptake volume. However, for hotspots with a low signal-to-noise ratio, using a threshold of 50% of SUVmax will result in background being included in the segmentation. To avoid this, a higher percentage of SUVmax may be used, for example starting at 90% or 75% for low intensity hotspots. Furthermore, once the threshold is sufficiently above the background level, the risk of including background is low; for high uptake lesions this occurs well below the 50% of SUVmax threshold. Accordingly, the threshold may be capped at a plateau level above the typical background intensity.
One reference for an uptake intensity level that is well above the typical background uptake intensity is the average liver uptake. Other reference levels may be required depending on the actual background uptake intensity. Background uptake intensities differ between bone, lymph, and prostate, with bone having the lowest background uptake intensity and prostate the highest. Using the same thresholding method irrespective of tissue is advantageous/preferred, as it allows the same segmentation method to be used irrespective of the location and/or classification of a particular lesion. Accordingly, the study of this example evaluates thresholds for lesions in all three tissue types using the same thresholding parameters. The adaptive thresholding variants evaluated in this example include a variant that reaches the plateau at the liver uptake intensity, a variant that reaches the plateau at a level estimated to be higher than the aortic uptake, and several variants that reach the plateau at a level estimated to be higher than the prostate uptake intensity.
Some previous methods have determined this level as a function of the mediastinal blood pool uptake intensity, calculated as the average of the blood pool uptake intensities plus twice their standard deviation (e.g., mean of blood pool uptake intensities + 2xSD). However, such an approach, relying on standard deviation estimation, can lead to undesirable errors and noise sensitivity. In particular, the estimated standard deviation is far less robust than the estimated mean and can be affected by noise, small segmentation errors, or PET/CT misalignment. A more robust way to estimate a level above the blood uptake intensity uses a fixed factor multiplied by the mean or reference aortic value. To find an appropriate factor, the distribution of uptake intensity in the aorta is studied and described in this example. Normal prostate uptake intensity is also studied, to determine an appropriate factor that can be applied to the reference aortic uptake to calculate a level that is generally higher than the normal prostate intensity.
ii. Methods
Thresholding manual annotations
This study used a subset of the data containing only lesions for which at least one other lesion of the same type was present in the same patient. This resulted in a dataset with 684 manually segmented lesion uptake volumes (278 in bone, 357 in lymph nodes, 49 in prostate) involving 92 patients. Automatic refinement by thresholding was performed and the output compared to the original volumes. Performance is measured by a weighted average of the rank correlations, within each patient and tissue type, between the refined volumes and the original volumes, where the weights are given by the number of segmented hotspot volumes in the patient. This performance metric indicates whether the relative size ordering of the segmented hotspot volumes is preserved, while ignoring absolute size (which is subjectively defined, because the uptake volumes do not have well-defined boundaries). However, for a particular patient and tissue type, the same nuclear medicine reader made all the annotations, and it may therefore be assumed that the annotations were made in a systematic manner, with smaller lesion annotations actually reflecting smaller uptake volumes than larger lesion annotations.
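As an illustration of this performance metric, the following is a hedged sketch assuming Spearman rank correlation (the specific rank correlation used is not named above); the group construction and names are hypothetical.

```python
# Hedged sketch of the weighted rank-correlation metric, assuming Spearman
# correlation. Each group holds the original and refined volumes for one
# (patient, tissue type) pair.
import numpy as np
from scipy.stats import spearmanr

def weighted_rank_correlation(groups):
    corrs, weights = [], []
    for original, refined in groups:
        if len(original) < 2:
            continue  # rank correlation needs at least two volumes
        rho, _ = spearmanr(original, refined)
        corrs.append(rho)
        weights.append(len(original))  # weight = number of hotspot volumes
    return float(np.average(corrs, weights=weights))
```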
Thresholding automatically detected lesions
This study used a subset of the data that had not been used to train the machine learning module for hotspot detection and segmentation, resulting in a dataset with 285 manually segmented lesion uptake volumes (104 in bone, 129 in lymph nodes, 52 in prostate) involving 67 patients. Precision and recall (sensitivity 90% to 91% for bone, 92% to 93% for lymph, 94% to 98% for prostate) were measured between the refined (and unrefined) automatically detected volumes and the matching manually segmented lesions. These performance metrics quantify the similarity between the automatically detected, and possibly refined, hotspots and the manually annotated hotspots.
Blood uptake
For 242 patients, the thoracic portion of the aorta was segmented in the CT component using a deep learning pipeline. The segmented aortic volume is projected into PET space and eroded by 3 mm to minimize the risk that the aortic volume contains areas outside the aorta or in the vessel wall, while preserving as much of the uptake inside the aorta as possible. From the remaining uptake intensities, the quotient Q = (aortaMean + 2x aortaSD)/aortaMean is calculated for each patient.
Prostate uptake
Normal uptake in the prostate was studied in 29 patients. The study was performed using the segmented prostate volume determined via the machine learning module. Uptake intensities in manually annotated prostate lesions were excluded. The remaining uptake intensities, normalized with respect to the aortic reference uptake intensity, were visualized by histogram and by maximum intensity projection in the axial, sagittal, and coronal planes; see the example in fig. 18G. The purpose of the maximum projections is to find explanations for outlier intensities in the histogram (especially those related to bladder uptake, i.e., intensities higher than the maximum uptake intensity in healthy tissue).
Thresholding method
Two baseline methods (a fixed threshold at SUV = 2.5 and a relative threshold at 50% of SUVmax) and six variants of the adaptive threshold were compared. Each adaptive threshold is defined using three threshold functions, each associated with a specific range of SUVmax values. In particular:
(1) Low range threshold function: the first threshold function is used to calculate thresholds for SUVmax values in the low range. The first threshold function calculates the threshold as a fixed (high) percentage of SUVmax,
(2) Mid-range threshold function: the second threshold function is used to calculate thresholds for SUVmax values within the mid-range. The second threshold function calculates the threshold as a percentage of SUVmax, limited by a maximum threshold equal to the threshold at the upper limit of the range, and
(3) High range threshold function: the high range threshold function is used to calculate thresholds for SUVmax values within the high range. The high range threshold function sets the threshold to a maximum fixed threshold (saturated variants) or to a fixed (low) percentage of SUVmax (non-saturated variants).
The exact parameters and ranges of the three threshold functions described above vary between the various adaptive thresholding algorithms and are listed in Table 2 below.
Table 2: the range of threshold functions and parameters for the adaptive thresholding algorithm.
For P9050-sat, the interpolated percentage used in the intermediate SUVmax range is calculated in the following manner:
p% = 90% - (90% - 50%)·(SUVmax - SUVlow)/(SUVhigh - SUVlow)
where SUVhigh is the value for which 50% of SUVhigh equals 2x [aortic reference uptake], and SUVlow is the value for which 90% of SUVlow equals the aortic reference uptake.
The interpolated percentages used in the other adaptive thresholding algorithms are calculated similarly. The threshold in the mid-range is then:
thr = min(p%·SUVmax, 50%·SUVhigh)
and similarly for other adaptive thresholding algorithms.
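A minimal sketch of the P9050-sat threshold function, as it can be read from the description above, follows; the linear form of the interpolation and the boundary handling are assumptions rather than a definitive implementation.

```python
# Sketch of the P9050-sat threshold function as readable from the text;
# blood_pool is the aortic (blood pool) SUV reference value.
def p9050_sat_threshold(suv_max: float, blood_pool: float) -> float:
    suv_low = blood_pool / 0.90         # 90% of SUVlow equals the reference
    suv_high = 2.0 * blood_pool / 0.50  # 50% of SUVhigh equals 2x reference
    if suv_max <= suv_low:              # low range: fixed high percentage
        return 0.90 * suv_max
    if suv_max >= suv_high:             # high range: saturation (plateau)
        return 2.0 * blood_pool
    # Mid-range: percentage interpolated from 90% down to 50% (assumed linear).
    p = 0.90 - 0.40 * (suv_max - suv_low) / (suv_high - suv_low)
    return min(p * suv_max, 0.50 * suv_high)
```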
iii. Results
Thresholding manually annotated lesions
The highest weighted rank correlation (0.81) was obtained by the P9050-sat and P9040-sat methods, with P7540-sat, A9050-sat, and L9050-sat also providing high values. The relative 50% of SUVmax method (0.37) and P9050-non-sat (0.61) resulted in the lowest weighted rank correlations. A fixed threshold at SUV = 2.5 resulted in a rank correlation (0.74) lower than most adaptive thresholding methods. The weighted rank correlation results for each of the thresholding methods are summarized in Table 3 below.
Table 3: weighted average of rank correlation for an evaluated thresholding method
Thresholding of automatically detected lesions
Without refinement, automatic hotspot detection has low precision (0.31 to 0.47) but high recall (0.83 to 0.92), indicating over-segmentation. Refinement using the relative 50% of SUVmax thresholding algorithm improves precision (0.70 to 0.77) but reduces recall to about 50% (0.44 to 0.58). Refinement using P9050-sat improves precision (0.51 to 0.84) with a smaller drop in recall (0.61 to 0.89), indicating a balance with less over-segmentation but more under-segmentation. P9040-sat performs similarly to P9050-sat in these respects, while L9050-sat has the highest precision (0.85 to 0.95) but the lowest recall (0.31 to 0.56). Tables 4a-e show the complete precision and recall results.
Table 4a: Precision and recall values without analytical segmentation refinement.

No refinement        Precision    Recall
Bone hotspots        0.38         0.92
Lymph hotspots       0.47         0.83
Prostate hotspots    0.31         0.93
Table 4b: Precision and recall values with refinement using the relative 50% of SUVmax thresholding method.

Relative 50% of SUVmax    Precision    Recall
Bone hotspots             0.74         0.58
Lymph hotspots            0.77         0.44
Prostate hotspots         0.70         0.51
Table 4c: Precision and recall values with adaptive segmentation refinement using the P9050-sat implementation.

P9050-sat            Precision    Recall
Bone hotspots        0.84         0.61
Lymph hotspots       0.70         0.67
Prostate hotspots    0.51         0.89
Table 4d: Precision and recall values with adaptive segmentation refinement using the P9040-sat implementation.

P9040-sat            Precision    Recall
Bone hotspots        0.84         0.59
Lymph hotspots       0.71         0.66
Prostate hotspots    0.52         0.89
Table 4e: Precision and recall values with adaptive segmentation refinement using the L9050-sat implementation.

L9050-sat            Precision    Recall
Bone hotspots        0.95         0.31
Lymph hotspots       0.91         0.39
Prostate hotspots    0.85         0.56
Supporting study for thresholding methods: blood uptake
The resulting quotient Qmean + 2xQsd was 1.54, so a factor of 1.6 was determined to be a good candidate for achieving a threshold level above most blood uptake intensity values. In the exemplary study, only three patients had aortaMean + 2x aortaSD above 1.6x aortaMean. The three outlier patients had Q = 1.64, 1.92, and 1.61; the patient with Q = 1.92 had a faulty aortic segmentation that spilled into the spleen, and the other two patients had quotients close to 1.6.
Supporting study for thresholding methods: prostate uptake
Based on manual review of the histograms of normal prostate intensities and of the projections in the axial, sagittal, and coronal planes, a value of 2.0 was determined to be an appropriate scaling factor to apply to the aortic reference value in order to obtain a level above the typical uptake intensities in the prostate.
H. Example: Comparison of AI-based hotspot segmentation with thresholding alone
In this example, hotspot detection and segmentation performed using the AI-based method of segmenting and classifying hotspots using a machine learning module as described herein is compared to conventional methods that only utilize threshold-based segmentation.
FIG. 19A shows a conventional hotspot segmentation method 1900 that does not utilize machine learning techniques. Instead, hotspot segmentation is performed based on manual delineation of hotspots by the user, followed by intensity-based (e.g., SUV-based) thresholding 1904. User manual mask placement 1922 indicates a circular marker for a region of interest (ROI) 1924 within image 1920. Once the ROI is placed, a fixed or relative thresholding method may be used to segment the hotspot 1926 within the manually placed ROI. In particular, the relative thresholding method sets the threshold of a particular ROI to a fixed percentage of the maximum SUV within the ROI, and the SUV-based thresholding method is used to segment each user-identified hotspot, thereby refining the initial user-drawn boundary. Since such conventional approaches rely on the user to manually identify and draw the boundaries of each hotspot, they can be time consuming, and the segmentation results as well as the downstream quantification 1906 (e.g., calculation of hotspot metrics) can vary from user to user. Moreover, as conceptually illustrated in images 1928 and 1930, different thresholds may produce different hotspot segmentations 1929, 1931. In addition, while SUV threshold levels may be tuned to detect early stage disease, doing so typically results in a large number of false positive results, thereby distracting from true positives. Finally, conventional fixed or relative SUV-based thresholding methods suffer from overestimation and/or underestimation of lesion size, for example, as described herein.
Turning to fig. 19B, rather than utilizing manual, user-based selection of ROIs containing hotspots in combination with SUV-based thresholding, an AI-based method 1950 in accordance with certain embodiments described herein utilizes one or more machine learning modules to automatically analyze CT 1954 and PET 1952 images (e.g., of a composite PET/CT) to detect, segment, and classify hotspots 1956. As described in further detail herein, machine learning-based hotspot segmentation and classification can be used to build an initial 3D hotspot map, which can then be used as input to an analytical segmentation method 1958 (e.g., the adaptive thresholding techniques described herein, e.g., in sections F and G). Among other things, using machine learning methods reduces user subjectivity and the time required to review an image (e.g., by a medical practitioner, such as a radiologist). Furthermore, AI models are capable of performing complex tasks and can identify early lesions as well as high-burden metastatic disease while keeping the false positive rate low. Hotspot segmentation improved in this way improves the accuracy of the downstream quantification 1960 of metrics that can be used to evaluate disease severity, prognosis, treatment response, and the like.
Fig. 20 demonstrates the improved performance of the machine learning based segmentation method compared to a conventional thresholding method. In the machine learning based approach, hotspot segmentation is performed by first detecting and segmenting hotspots using a machine learning module as described herein (e.g., in section E), together with refinement using an analytical model that implements a version of the adaptive thresholding techniques described in sections F and G. The conventional thresholding method is performed using a fixed threshold to segment clusters of voxels having intensities above that threshold. As shown in fig. 20, while the conventional thresholding method produces false positives 2002a and 2002b due to uptake of the radiopharmaceutical in the urethra, the machine learning segmentation technique correctly ignores urethral uptake and segments only the prostate lesions 2004 and 2006.
Figs. 21A-I compare hotspot segmentation results in the abdominal region produced by a conventional thresholding method (left-hand images) with those of a machine learning method according to certain embodiments described herein (right-hand images). Figs. 21A-I show a series of 2D slices of a 3D image, moving in the vertical direction through the abdominal region, with the hotspot regions identified by each method superimposed. The results demonstrate that abdominal uptake is a problem for conventional thresholding methods, with large false positive regions appearing in the left-hand images. This may be caused by large uptake in the kidneys and bladder. Conventional segmentation methods require complex approaches to suppress this uptake and limit such false positives. In contrast, the machine learning model used to segment the images shown in figs. 21A-I does not rely on any such suppression, and instead learns to ignore such uptake.
I. Exemplary CAD apparatus embodiments
This section describes an example CAD device implementation according to certain embodiments described herein. The CAD device described in this example is referred to as "aPROMISE" and performs automated organ segmentation using multiple machine learning modules. The example CAD device implementation uses analytical models to perform hotspot detection and segmentation.
The aPROMISE (automated prostate specific membrane antigen imaging segmentation) exemplary implementation described in this example utilizes a cloud-based software platform with a network interface, where a user can upload whole-body PSMA PET/CT image data in the form of DICOM files, view patient studies, and share study assessments within a team. The software complies with the Digital Imaging and Communications in Medicine (DICOM) 3 standard. Multiple scans may be uploaded for each patient, and the system provides a separate view for each study. The software includes a GUI providing a view page that displays studies in a four-panel view showing PET, CT, PET/CT fusion, and maximum intensity projection (MIP) simultaneously, with the option of displaying each view separately. The device is used to review the entire patient study, enabling the user to identify and annotate regions of interest (ROIs) using image visualization and analysis tools. In reviewing the image data, the user may annotate an ROI by selecting from predefined hotspots, which are highlighted when hovering over the segmented region with the mouse pointer, or by manual drawing (i.e., selecting individual voxels in an image slice to be included as a hotspot). Quantitative analysis is performed automatically for selected or manually drawn hotspots. The user can review the results of this quantitative analysis and determine which hotspots should be reported as suspicious lesions. In aPROMISE, a region of interest (ROI) refers to a contiguous sub-portion of the image; a hotspot refers to an ROI with high local intensity (e.g., indicative of high uptake, e.g., relative to surrounding areas); and a lesion refers to a user-defined or user-selected ROI that is considered suspicious for disease.
To build a report, the software of the example embodiment requires the signing user to confirm the quality control and electronically sign the report preview. Signed reports are stored in the device and may be exported as JPG or DICOM files.
The aPROMISE device is implemented in a micro-service architecture, as described in further detail herein and shown in FIGS. 29A and 29B.
i. Workflow
Figure 22 depicts the workflow of the aPROMISE device, from uploading DICOM files to exporting an electronically signed report. When logged in, the user may import DICOM files into aPROMISE. The imported DICOM files are uploaded to a patient list, where the user can click on a patient to display the corresponding studies for review. The layout of the patient list is shown in fig. 23.
This view 2300 lists all patients within the team who have uploaded studies and displays patient information (name, ID, and gender), the latest study upload date, and the study status. The study status indicates whether each patient has studies ready for review (blue symbol, 2302), studies with errors (red symbol, 2304), studies being calculated (orange symbol, 2306), and studies with available reports (black symbol, 2308). The number in the upper right corner of each status symbol indicates the number of studies with that particular status for the patient. Review of a study is initiated by selecting the patient, selecting the study, and identifying whether the patient has undergone prostatectomy. The study data is then published and displayed in a viewing window.
Fig. 24 shows a viewing window 2400 in which a user can review PET/CT image data. Lesions are manually annotated and reported by the user, selected from predefined hotspots segmented by the software, or from user-defined hotspots created using a drawing tool for selecting voxels to be included as hotspots. The predefined hotspots (regions of interest with high local intensity uptake) are automatically segmented using specific methods for soft tissue (prostate and lymph nodes) and bone, and are highlighted when hovering over the segmented region with the mouse pointer. The user may choose to turn on the segmentation display option to visually present the segmentation of the predefined hotspots simultaneously. The selected or drawn hotspots are the subject of automated quantitative analysis and are detailed in panels 2402, 2422, and 2442.
The retractable panel 2402 on the left side outlines patient and study information extracted from the DICOM data. Panel 2402 also displays and lists quantitative information about hotspots selected by the user. The hotspot location and type are manually verified: T, localized in the primary tumor; N, regional metastatic disease; Ma/b/c, distant metastatic disease (lymph nodes, bone, and soft tissue). The device displays the automated quantitative analysis (SUV maximum, SUV peak, SUV mean, lesion volume, and Lesion Index (LI)) for user-selected hotspots, allowing the user to review and decide which hotspots to report as lesions in a standardized report.
The middle panel 2422 contains a four-panel view display of DICOM image data. The upper left corner shows a CT image, the upper right corner shows a PET/CT fusion view, the lower left corner shows a PET image and the lower right corner shows MIP.
MIP is a visualization method for volumetric data that displays 2D projections of the 3D image volume from various perspectives. MIP imaging is described in Wallis JW, Miller TR, Lerner CA, and Kleerup EC, "Three-dimensional display in nuclear medicine," IEEE Transactions on Medical Imaging. 1989;8(4):297-303. doi:10.1109/42.41482, PMID: 18230529.
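Conceptually, a MIP is simply the voxel-wise maximum along one axis of the image volume; a minimal NumPy illustration follows (this is not the device's rendering code).

```python
# Conceptual illustration only: a 2D maximum intensity projection of a 3D
# PET volume, taken as the voxel-wise maximum along one axis.
import numpy as np

def maximum_intensity_projection(volume: np.ndarray, axis: int = 1) -> np.ndarray:
    return volume.max(axis=axis)
```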
The retractable right panel 2442 includes the following visual controls for optimizing image viewing, along with shortcuts for manipulating the images for viewing purposes:
viewing port:
● With or without cross-hairs
● Gradient options for PET/CT fusion images
● Selection of the standard nuclear medicine color map used to visualize PET tracer uptake intensity.
SUV and CT window:
● Windowing of an image, also known as contrast stretching, histogram modification, or contrast enhancement, in which the image intensities are manipulated to change the appearance of the picture and highlight particular structures.
● In the SUV window, the windowing presets for the SUV intensity may be adjusted by a slider or shortcut key.
● In the CT window, windowing presets for Hounsfield intensity may be selected from a drop-down list, using a shortcut key, or by click-and-drag input, where the brightness of the image is adjusted via the window level and the contrast via the window width.
Segmentation
● An organ segmentation display option to turn on or off visualization of the reference organ segmentation or the whole-body segmentation.
● The user can select which panel views are used to display the organ segmentation.
● A hotspot segmentation display option to turn on or off presentation of predefined hotspots in a selected region: the pelvic region, in bone, or all hotspots.
Viewer gestures
● Shortcut keys and combinations for zooming, panning, changing slices, and hiding hotspots in the viewing window.
To build a report, the signing user clicks the build report button 2462. Before a report is built, the user must confirm the following quality control items:
● The image quality is acceptable
● PET and CT images are properly aligned
● The patient study data was correct
● Reference values (blood pool, liver) are acceptable
● The study is not a superscan
After confirming the quality control items, a preview of the report is presented for electronic signature by the user. The report contains a patient summary, the total quantitative lesion burden, and a quantitative assessment of the individual lesions, based on the user-selected hotspots identified by the user as lesions.
Fig. 25 shows an example report 2500 generated. Report 2500 includes three sections 2502, 2522 and 2542.
Section 2502 of report 2500 provides an overview of patient data obtained from the DICOM tags. It comprises an overview of the patient; a summary of patient name, patient ID, age, and weight; and the study data: study date, injected dose, the radiopharmaceutical imaging tracer used and its half-life, and the time between tracer injection and acquisition of the image data.
Section 2522 of report 2500 provides summarized quantitative information from the hotspots selected by the user to be included as lesions. The summarized quantitative information shows the total lesion burden per lesion type: primary prostate tumor (T), regional pelvic lymph nodes (N), and distant metastases in lymph nodes, bone, or soft tissue organs (Ma/b/c). The summary section 2522 also shows the quantitative uptake (SUV mean) observed in the reference organs.
Section 2542 of report 2500 is a detailed quantitative assessment and localization of each lesion from the selected hotspots identified by the user. In reviewing the report, the user must electronically sign his/her patient study review results (including the hotspots selected and quantified as lesions). The report is then saved in the device and may be exported as a JPG or DICOM file.
Image processing
Preprocessing DICOM input data
The image input data is presented in DICOM format, which is a rich data representation. DICOM data contains intensity data, metadata, and communication structures. To optimize the data for use by aPROMISE, the data is passed through a microservice that re-encodes, compresses, and removes unnecessary or sensitive information. Intensity data is also collected from the individual DICOM series and encoded into a single lossless PNG file with an associated JSON meta-information file.
The data processing of the PET image data includes estimating the SUV (standardized uptake value) factor, which is included in the JSON meta-information file. The SUV factor is a scalar for converting image intensity into SUV values. SUV factors are calculated according to the QIBA (Quantitative Imaging Biomarkers Alliance) guidelines.
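As an illustration only, a decay-corrected, body-weight SUV factor in the spirit of the QIBA guidance might be computed as sketched below; the parameter names are hypothetical, and the exact aPROMISE computation is not reproduced here.

```python
# Hedged sketch of a decay-corrected, body-weight SUV scale factor.
# Inputs would typically come from the DICOM radiopharmaceutical module.
def suv_factor(weight_kg: float, injected_dose_bq: float,
               half_life_s: float, delay_s: float) -> float:
    """Scalar converting activity concentration (Bq/mL) to SUVbw (g/mL)."""
    # Decay-correct the injected dose to the image acquisition time.
    decayed_dose_bq = injected_dose_bq * 2.0 ** (-delay_s / half_life_s)
    return weight_kg * 1000.0 / decayed_dose_bq  # patient weight in grams
```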
Algorithmic image processing
Fig. 26 shows an exemplary image processing workflow (process) 2600.
aPROMISE uses CNN (convolutional neural network) models to segment 2602 the patient's bones and selected organs. Organ segmentation 2602 allows automated calculation of standardized uptake value (SUV) references 2604 in the patient's aorta and liver. The SUV references for the aorta and liver are then used as reference values in determining certain SUV-based quantitative indices, such as the Lesion Index (LI) and the intensity-weighted tissue lesion volume (ITLV). A detailed description of the quantitative indices is provided in Table 6 below.
Lesions 2608 are manually annotated and reported by the user, who selects 2608a from predefined hotspots segmented by the software, or 2608b from user-defined hotspots created using a drawing tool within the GUI for selecting voxels to be included as hotspots. The predefined hotspots (regions of interest with high local intensity uptake) are automatically segmented using specific methods for soft tissue (prostate and lymph nodes) and bone (e.g., as shown in fig. 28, one segmentation method may be used for bone and another for soft tissue regions). Based on the organ segmentation, the software determines the type and location of selected hotspots in the prostate, lymph, or bone regions. The determined type and location are displayed in the list of selected hotspots shown in panel 2402 of the viewer 2400. The type and location of selected hotspots in other regions (e.g., not in the prostate, lymph, or bone regions) are added manually by the user. The user can add and edit the types and locations of all hotspots at any time during hotspot selection, as applicable. The hotspot type is determined using the miTNM system, a clinical standard and notation system for reporting the spread of cancer. In this system, individual hotspots are assigned types according to the following letter-based codes indicating certain physical features:
● T indicates a primary tumor
● N indicates nearby lymph nodes affected by the primary tumor
● M indicates distant metastasis
For distant metastases, localization is grouped into an a/b/c system corresponding to extra-pelvic lymph nodes (a), bone (b), and soft tissue organs (c).
For all hotspots selected for inclusion as lesions, SUV values and indices are calculated 2610 and displayed in the report.
Organ segmentation in CT
Organ segmentation 2602 is performed using the CT image as input. Starting with two coarse segmentations of the complete image, smaller image segments are extracted, selected to contain a given set of organs. Fine segmentation of the organs is performed on each image segment. Finally, all segmented organs from all image segments are assembled into a complete image segmentation displayed in aPROMISE. The successfully completed segmentation identifies 52 different bones and 13 soft tissue organs, as visualized in fig. 27 and presented in Table 5. The coarse segmentation process and the fine segmentation process each comprise three steps:
1. pre-processing the CT image,
2. CNN segmentation, and
3. post-processing the segmentation.
Pre-processing the CT image prior to coarse segmentation comprises three steps: (1) removing image slices representing only air (e.g., having <= 0 Hounsfield units); (2) resampling the image to a fixed size; and (3) normalizing the image based on the mean and standard deviation of the training data, as described below.
The CNN model performs semantic segmentation, where each pixel in the input image is assigned a label corresponding to the background or to a segmented organ, resulting in a label map of the same size as the input data.
Post-processing is performed after segmentation and includes the following steps:
- absorbing clusters of neighboring pixels, one at a time, until no such clusters remain,
- removing all clusters that are not the largest for each label, and
- discarding bone parts from the soft tissue segmentations; some segmentation models segment bone parts as reference points when segmenting soft tissue, and after segmentation is completed, these bone parts are removed.
Two different coarse-segmentation neural networks and ten different fine-segmentation neural networks are used, including segmentation of the prostate. If the patient had undergone prostatectomy prior to the examination (information provided by the user when verifying the patient study background before opening the study for review), the prostate is not segmented. The combinations of fine and coarse segmentation, and which body parts each combination provides, are presented in Table 5.
Table 5: An overview of how the coarse-segmentation and fine-segmentation networks are combined to segment different body parts.
* The additional segmentation network is only applicable to patients who have a prostate.
Training the CNN models involves an iterative minimization problem, in which the training algorithm updates the model parameters to reduce the segmentation error. The segmentation error is defined as the deviation from perfect overlap between the manual segmentation and the CNN model segmentation. Each neural network used for organ segmentation is trained to configure optimal parameters and weights. As described above, the training data for developing the neural networks for aPROMISE consists of low-dose CT images with manually segmented and labeled body parts. The CT images for training the segmentation networks were collected as part of the NIMSA project (http://nimsa.se/) and during phase II clinical trials of the drug candidate 99mTc-MIP-1404 registered at clinicaltrials.gov. The NIMSA data consist of 184 patients and the 99mTc-MIP-1404 data of 62 patients.
Calculation of reference data in PSMA PET (SUV reference)
Reference values are used when assessing physiological uptake of the PSMA tracer. Current clinical practice is to use SUV intensities in an identified volume corresponding to the blood pool, or in the liver, or in both tissues, as reference values. For PSMA tracer intensity, the blood pool is measured in the aortic volume.
In aPROMISE, SUV intensities in volumes corresponding to the thoracic portion of the aorta and to the liver are used as reference values. The uptake shown in the PET image and the organ segmentations of the aortic and liver volumes are the basis for calculating the SUV reference in each respective organ.
The aorta. To ensure that image portions corresponding to the vessel wall are not included in the volume used to calculate the SUV reference for the aortic region, the segmented aortic volume is reduced. The segmentation reduction (3 mm) was chosen heuristically to balance the trade-off between retaining as much aortic volume as possible and excluding vessel wall regions. The blood pool SUV reference is a robust average of the SUVs of the pixels inside the reduced segmentation mask identifying the aortic volume. The robust average is calculated as the average of the values within the interquartile range.
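A minimal sketch of such a robust average (the mean of values falling within the interquartile range) is shown below; the array name is hypothetical.

```python
# Minimal sketch: robust average as the mean of SUVs within the
# interquartile range of the voxels inside the reduced aortic mask.
import numpy as np

def robust_average(suvs: np.ndarray) -> float:
    q1, q3 = np.percentile(suvs, [25, 75])
    return float(suvs[(suvs >= q1) & (suvs <= q3)].mean())
```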
The liver. When measuring the reference value in the liver volume, the segmentation is reduced along its edges to create a buffer that accommodates possible misalignment between the PET and CT images. The reduction (9 mm) was determined heuristically using manual review of images with PET/CT misalignment.
Cysts or malignant tumors in the liver may lead to areas of low tracer uptake in the liver. To reduce the effect of these local differences in tracer uptake on the calculation of the SUV reference, a two-component Gaussian mixture model method according to the embodiment described in section B.iii above with respect to fig. 2A is used. In particular, the two-component Gaussian mixture model is fitted to the SUVs of voxels from inside the reference organ mask, and major and minor components of the distribution are identified. The SUV reference of the liver volume is initially calculated as the average SUV of the major component of the Gaussian mixture model. If the minor component is determined to have a higher average SUV than the major component, the liver reference organ mask remains unchanged, unless the weight of the minor component exceeds 0.33; in that case, when the weight of the minor component exceeds 0.33, the segmentation is considered erroneous and the liver reference value is not calculated.
If the minor component has a smaller average SUV than the major component, a separation threshold is calculated, for example as shown in fig. 2A. The separation threshold is defined such that:
● the probability that an SUV at or above the threshold belongs to the major component, and
● the probability that an SUV at or below the threshold belongs to the minor component
are identical.
The reference mask is then refined by removing pixels below the separation threshold.
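A hedged sketch of this separation-threshold computation follows, assuming scikit-learn's GaussianMixture (the actual implementation is not named above); the grid search for the posterior crossing is an illustrative choice, not the embodiment's method.

```python
# Hedged sketch: fit a two-component Gaussian mixture to liver SUVs and find
# the SUV between the component means where the posteriors are equal.
import numpy as np
from sklearn.mixture import GaussianMixture

def liver_separation_threshold(suvs: np.ndarray) -> float:
    gmm = GaussianMixture(n_components=2, random_state=0).fit(suvs.reshape(-1, 1))
    lo, hi = np.sort(gmm.means_.ravel())
    grid = np.linspace(lo, hi, 2048).reshape(-1, 1)
    posteriors = gmm.predict_proba(grid)
    # Point where the two components' posterior probabilities cross.
    crossing = int(np.argmin(np.abs(posteriors[:, 0] - posteriors[:, 1])))
    return float(grid[crossing, 0])
```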
Predefined hotspots in PSMA PET
Turning to fig. 28, in the aPROMISE implementation of the present example, segmentation of regions of high local intensity (so-called predefined hotspots) in PSMA PET is performed by an analytical model 2800 based on inputs from the PET image 2802 and an organ segmentation map 2804 determined from the CT image and projected into PET space. To segment hotspots in bone, the original PET image 2802 is used; to segment hotspots in the lymph nodes and prostate, the PET image is processed by suppressing normal PET tracer uptake 2806. A graphical overview of the analytical model used in this example embodiment is presented in fig. 28. The analysis methods, described further below, are designed to find regions of high local uptake intensity that can represent an ROI without excessive irrelevant regions or PET tracer background noise. The analysis methods were developed from a labeled dataset comprising PSMA PET/CT images.
Suppression of normal PSMA tracer uptake intensity is performed in one high-uptake organ at a time 2806. First, the uptake intensity in the kidneys is suppressed, followed by the liver, and finally the urinary bladder. Suppression is performed by applying an estimated suppression map to the high-intensity region of the PET image. The suppression map is created using the organ map previously segmented in CT, projected and adjusted to the PET image, creating a PET-adjusted organ mask.
Small misalignments between the corrected PET image and the CT image are thereby adjusted. Using the adjusted map, a background image is calculated. This background image is subtracted from the original PET image to produce an uptake estimate image. A suppression map is then estimated from the uptake estimate image using an exponential function of the Euclidean distance from voxels outside the segmentation to the PET-adjusted organ mask. An exponential function is used because uptake intensity decreases approximately exponentially with distance from the organ. Finally, the suppression map is subtracted from the original PET image, thereby suppressing the intensity associated with high normal uptake in the organ.
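An illustrative sketch of such a distance-dependent suppression map is shown below; the decay length and the attenuation form are assumptions rather than the embodiment's actual parameters.

```python
# Illustrative sketch: attenuate the estimated organ uptake with an
# exponential falloff in Euclidean distance outside the boolean
# PET-adjusted organ mask. decay_mm is a hypothetical parameter.
import numpy as np
from scipy.ndimage import distance_transform_edt

def suppression_map(uptake_estimate: np.ndarray, organ_mask: np.ndarray,
                    voxel_mm: tuple, decay_mm: float = 5.0) -> np.ndarray:
    # Distance from each voxel outside the mask to the nearest organ voxel.
    dist = distance_transform_edt(~organ_mask, sampling=voxel_mm)
    # Full-strength suppression inside the organ, decaying outside.
    return uptake_estimate * np.exp(-dist / decay_mm)
```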
After suppressing the normal PSMA tracer uptake intensity, hotspots are segmented 2812 in the prostate and lymph nodes using the organ segmentation mask 2804 and the suppressed PET image 2808 produced by the suppression step 2806. Prostate hotspots are not segmented for patients who have undergone prostatectomy. Bone and lymph node hotspot segmentation is applied to all patients. Each hotspot is segmented using a fast-marching method, where the underlying PET image is used as a speed map and the volume of the input region determines the travel time. The input region is also used as an initial segmentation mask to identify the volume of interest for the fast-marching method, and is established differently depending on whether hotspot segmentation is performed in bone or soft tissue. Bone hotspots are segmented using the fast-marching method together with a difference of Gaussians (DoG) filtering method 2810, and lymph node and, where applicable, prostate hotspots are segmented using the fast-marching method together with a Laplacian of Gaussian (LoG) filtering method 2812.
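A hedged sketch of the fast-marching step follows, using the scikit-fmm package as an assumed stand-in (no library is named above); the travel-time cutoff is a hypothetical parameter.

```python
# Hedged sketch: grow a hotspot volume from a seed region by fast marching,
# with the PET image as the speed map.
import numpy as np
import skfmm

def fast_marching_volume(pet: np.ndarray, seed_mask: np.ndarray,
                         max_time: float) -> np.ndarray:
    phi = np.where(seed_mask, -1.0, 1.0)  # zero level set at the seed border
    # Arrival times from the seed region; unreachable voxels become inf.
    t = np.ma.filled(skfmm.travel_time(phi, speed=pet), np.inf)
    return seed_mask | (t <= max_time)
```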
For detection and segmentation of bone hotspots, a bone region mask is established to identify the bone volume in which bone hotspots may be detected. The bone region mask includes the following bone regions: thoracic vertebrae (1-12), lumbar vertebrae (1-5), clavicles (L+R), scapulae (L+R), sternum, ribs (L+R, 1-12), hips (L+R), femurs (L+R), sacrum, and coccyx. The masked image is normalized based on the average intensity of healthy bone tissue in the PET image, and the normalized image is then processed using DoG filtering. The filter sizes used in the DoG are 3 mm/pitch and 5 mm/pitch. The DoG filtering acts as a band-pass filter on the image, attenuating signals farther from the center of the band and emphasizing clusters of voxels with higher intensities relative to their surroundings. Thresholding the normalized image obtained in this way produces clusters of voxels that are distinguishable from the background and are thus segmented, creating a 3D segmentation map 2814 that identifies the hotspot volumes located in the bone regions.
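A minimal sketch of this DoG band-pass step, with the 3 mm and 5 mm kernels expressed in voxel units, is shown below; the normalization and the final threshold value are placeholders rather than the embodiment's parameters.

```python
# Sketch of the DoG band-pass step for bone hotspots; threshold is a
# hypothetical placeholder value.
import numpy as np
from scipy.ndimage import gaussian_filter

def bone_hotspot_clusters(normalized_pet: np.ndarray, voxel_mm: tuple,
                          threshold: float = 1.5) -> np.ndarray:
    sigma_3mm = tuple(3.0 / v for v in voxel_mm)  # mm converted to voxels
    sigma_5mm = tuple(5.0 / v for v in voxel_mm)
    dog = (gaussian_filter(normalized_pet, sigma_3mm)
           - gaussian_filter(normalized_pet, sigma_5mm))
    return dog > threshold  # clusters standing out from the background
```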
For detection and segmentation of lymph node hotspots, a lymph region mask is established in which hotspots corresponding to potential lymph nodes may be detected. The lymph region mask includes voxels within a bounding box enclosing all segmented bone and organ regions, but excludes voxels within the segmented organs themselves (except for the lung volumes, whose voxels are retained). Additionally, a prostate region mask is established in which hotspots corresponding to potential prostate tumors may be detected. This prostate region mask is a one-voxel dilation of the prostate volume determined in the organ segmentation step described herein. Applying the lymph region mask to the PET image generates a masked image that includes voxels within the lymph region (e.g., and excludes other voxels), and likewise, applying the prostate region mask to the PET image generates a masked image that includes voxels within the prostate volume.
Soft tissue hotspots, i.e., lymph node and prostate hotspots, are detected by applying three LoG filters of different sizes (one of 4 mm/pitch XYZ, one of 8 mm/pitch XYZ, and one of 12 mm/pitch XYZ) to the lymph- and/or prostate-masked images, respectively, resulting in three LoG-filtered images for each of the two soft tissue types (prostate and lymph). For each soft tissue type, the three corresponding LoG-filtered images are thresholded using a value of negative 70% of the aortic SUV reference, and local minima are found using a 3x3x3 minimum filter. This method produces three filtered images, each comprising clusters of voxels corresponding to hotspots. The three filtered images are combined by taking the union of the local minima from the three images to produce a hotspot region mask. Each component in the hotspot region mask is then segmented using a level-set method to determine one or more hotspot volumes. This segmentation is performed both for prostate and for lymph node hotspots, thereby automatically segmenting hotspots in the prostate and lymph regions.
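A hedged sketch of this multi-scale LoG seed detection is shown below, assuming isotropic voxels; mask handling and cluster extraction are simplified. LoG responses of bright blobs are negative, hence the negative threshold.

```python
# Sketch: multi-scale LoG detection of soft-tissue hotspot seeds, thresholded
# at -70% of the aortic SUV reference, with a 3x3x3 minimum filter locating
# local minima and a union taken over the three scales.
import numpy as np
from scipy.ndimage import gaussian_laplace, minimum_filter

def soft_tissue_hotspot_seeds(masked_pet: np.ndarray, voxel_mm: float,
                              aorta_ref: float) -> np.ndarray:
    seeds = np.zeros(masked_pet.shape, dtype=bool)
    for size_mm in (4.0, 8.0, 12.0):
        log_img = gaussian_laplace(masked_pet, sigma=size_mm / voxel_mm)
        candidates = log_img < -0.7 * aorta_ref
        local_min = log_img == minimum_filter(log_img, size=3)
        seeds |= candidates & local_min  # union over the three scales
    return seeds
```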
Quantification
Table 6 identifies the values that are calculated by the software and displayed for each hotspot after user selection. The ITLV is a summary value and is shown only in the report. All calculations are variants based on SUVs from PSMA PET/CT.
Table 6: Values calculated by aPROMISE.
Network-based platform architecture
aPROMISE utilizes a micro-service architecture. Deployment to AWS is handled by CloudFormation script files found in the AWS code repository. The aPROMISE cloud architecture is shown in FIG. 29A, and a micro-service communication design diagram is provided in FIG. 29B.
J. Imaging agent
PSMA binders labeled with PET imaging radionuclides
In certain embodiments, the radionuclide-labeled PSMA binder is a radionuclide-labeled PSMA binder suitable for PET imaging.
In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises [18F]DCFPyL (also known as PyL™; also known as DCFPyL-18F):
[chemical structure diagram]
or a pharmaceutically acceptable salt thereof.
In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises [18F]DCFBC:
[chemical structure diagram]
or a pharmaceutically acceptable salt thereof.
In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises 68Ga-PSMA-HBED-CC (also referred to as 68Ga-PSMA-11):
[chemical structure diagram]
Or a pharmaceutically acceptable salt thereof.
In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises PSMA-617:
[chemical structure diagram]
or a pharmaceutically acceptable salt thereof. In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises 68Ga-PSMA-617 (which is PSMA-617 labeled with 68Ga), or a pharmaceutically acceptable salt thereof. In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises 177Lu-PSMA-617 (which is PSMA-617 labeled with 177Lu), or a pharmaceutically acceptable salt thereof.
In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises PSMA-I & T:
[chemical structure diagram]
or a pharmaceutically acceptable salt thereof. In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises 68Ga-PSMA-I&T (which is PSMA-I&T labeled with 68Ga), or a pharmaceutically acceptable salt thereof.
In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises PSMA-1007:
[chemical structure diagram]
or a pharmaceutically acceptable salt thereof. In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises 18F-PSMA-1007 (which is PSMA-1007 labeled with 18F), or a pharmaceutically acceptable salt thereof.
PSMA binding agents labeled with SPECT imaging radionuclides
In certain embodiments, the radionuclide-labeled PSMA binding agent is a radionuclide-labeled PSMA binding agent suitable for SPECT imaging.
In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises 1404 (also referred to as MIP-1404):
[chemical structure diagram]
or a pharmaceutically acceptable salt thereof.
In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises 1405 (also referred to as MIP-1405):
[chemical structure diagram]
or a pharmaceutically acceptable salt thereof.
In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises 1427 (also referred to as MIP-1427):
[chemical structure diagram]
or a pharmaceutically acceptable salt thereof.
In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises 1428 (also referred to as MIP-1428):
[chemical structure diagram]
or a pharmaceutically acceptable salt thereof.
In certain embodiments, the PSMA-binding agent is labeled with a radionuclide by chelating a radioisotope of a metal [e.g., a radioisotope of technetium (Tc) (e.g., technetium-99m (99mTc)); e.g., a radioisotope of rhenium (Re) (e.g., rhenium-188 (188Re); e.g., rhenium-186 (186Re)); e.g., a radioisotope of yttrium (Y) (e.g., 90Y); e.g., a radioisotope of lutetium (Lu) (e.g., 177Lu); e.g., a radioisotope of gallium (Ga) (e.g., 68Ga; e.g., 67Ga); e.g., a radioisotope of indium (In) (e.g., 111In); e.g., a radioisotope of copper (Cu) (e.g., 67Cu)].
In certain embodiments, 1404 is labeled with a radionuclide (e.g., chelated to a radioisotope of a metal). In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises 99mTc-MIP-1404, which is 1404 labeled with (e.g., chelated to) 99mTc:
[chemical structure diagram]
or a pharmaceutically acceptable salt thereof. In certain embodiments, 1404 may be chelated to other metal radioisotopes [e.g., a radioisotope of rhenium (Re) (e.g., rhenium-188 (188Re); e.g., rhenium-186 (186Re)); e.g., a radioisotope of yttrium (Y) (e.g., 90Y); e.g., a radioisotope of lutetium (Lu) (e.g., 177Lu); e.g., a radioisotope of gallium (Ga) (e.g., 68Ga; e.g., 67Ga); e.g., a radioisotope of indium (In) (e.g., 111In); e.g., a radioisotope of copper (Cu) (e.g., 67Cu)] to form a compound structurally similar to 99mTc-MIP-1404, wherein the other metal radioisotope is substituted for 99mTc.
In certain embodiments, 1405 is labeled with a radionuclide (e.g., chelated to a radioisotope of a metal). In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises 99mTc-MIP-1405, which is 1405 labeled with (e.g., chelated to) 99mTc:
[chemical structure diagram]
or a pharmaceutically acceptable salt thereof. In certain embodiments, 1405 may be chelated to other metal radioisotopes [e.g., a radioisotope of rhenium (Re) (e.g., rhenium-188 (188Re); e.g., rhenium-186 (186Re)); e.g., a radioisotope of yttrium (Y) (e.g., 90Y); e.g., a radioisotope of lutetium (Lu) (e.g., 177Lu); e.g., a radioisotope of gallium (Ga) (e.g., 68Ga; e.g., 67Ga); e.g., a radioisotope of indium (In) (e.g., 111In); e.g., a radioisotope of copper (Cu) (e.g., 67Cu)] to form a compound structurally similar to 99mTc-MIP-1405, wherein the other metal radioisotope is substituted for 99mTc.
In certain embodiments, 1427 is labeled with (e.g., chelated to) a radioisotope of a metal to form a compound according to the formula:
[chemical structure diagram]
or a pharmaceutically acceptable salt thereof, wherein M is the metal radioisotope with which 1427 is labeled [e.g., a radioisotope of technetium (Tc) (e.g., technetium-99m (99mTc)); e.g., a radioisotope of rhenium (Re) (e.g., rhenium-188 (188Re); e.g., rhenium-186 (186Re)); e.g., a radioisotope of yttrium (Y) (e.g., 90Y); e.g., a radioisotope of lutetium (Lu) (e.g., 177Lu); e.g., a radioisotope of gallium (Ga) (e.g., 68Ga; e.g., 67Ga); e.g., a radioisotope of indium (In) (e.g., 111In); e.g., a radioisotope of copper (Cu) (e.g., 67Cu)].
in certain embodiments, 1428 is labeled with (e.g., chelated to) a radioisotope of a metal to form a compound according to the formula:
[chemical structure diagram]
or a pharmaceutically acceptable salt thereof, wherein M is the metal radioisotope with which 1428 is labeled [e.g., a radioisotope of technetium (Tc) (e.g., technetium-99m (99mTc)); e.g., a radioisotope of rhenium (Re) (e.g., rhenium-188 (188Re); e.g., rhenium-186 (186Re)); e.g., a radioisotope of yttrium (Y) (e.g., 90Y); e.g., a radioisotope of lutetium (Lu) (e.g., 177Lu); e.g., a radioisotope of gallium (Ga) (e.g., 68Ga; e.g., 67Ga); e.g., a radioisotope of indium (In) (e.g., 111In); e.g., a radioisotope of copper (Cu) (e.g., 67Cu)].
In certain embodiments, the radionuclide-labeled PSMA binding agent comprises PSMA I&S:
[Chemical structure of PSMA I&S]
or a pharmaceutically acceptable salt thereof. In certain embodiments, the radionuclide-labeled PSMA binding agent comprises 99mTc-PSMA I&S (which is PSMA I&S labeled with 99mTc), or a pharmaceutically acceptable salt thereof.
K. Computer system and network architecture
Fig. 30 shows an implementation of a network environment 3000 for use in providing the systems, methods, and architectures described herein. In brief overview, fig. 30 shows a block diagram of an exemplary cloud computing environment 3000. Cloud computing environment 3000 may include one or more resource providers 3002a, 3002b, 3002c (collectively 3002). Each resource provider 3002 may include computing resources. In some implementations, the computing resources may include any hardware and/or software for processing data. For example, a computing resource may include hardware and/or software capable of executing algorithms, computer programs, and/or computer applications. In some implementations, exemplary computing resources may include application servers and/or databases with storage and retrieval capabilities. Each resource provider 3002 may be connected to any other resource provider 3002 in cloud computing environment 3000. In some implementations, the resource providers 3002 may be connected via a computer network 3008. Each resource provider 3002 may be connected to one or more computing devices 3004a, 3004b, 3004c (collectively 3004) via the computer network 3008.
Cloud computing environment 3000 may include resource manager 3006. The resource manager 3006 may be connected to the resource provider 3002 and the computing device 3004 via a computer network 3008. In some implementations, the resource manager 3006 can facilitate provisioning of computing resources to one or more computing devices 3004 by one or more resource providers 3002. The resource manager 3006 may receive a request for computing resources from a particular computing device 3004. The resource manager 3006 may identify one or more resource providers 3002 capable of providing computing resources requested by the computing device 3004. The resource manager 3006 may select a resource provider 3002 that provides the computing resource. The resource manager 3006 may facilitate a connection between the resource provider 3002 and a particular computing device 3004. In some implementations, the resource manager 3006 can establish a connection between a particular resource provider 3002 and a particular computing device 3004. In some implementations, the resource manager 3006 can redirect a particular computing device 3004 to a particular resource provider 3002 having the requested computing resource.
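For illustration only, the following minimal Python sketch shows one possible shape of the request-routing behavior just described; the class and method names (ResourceProvider, ResourceManager, route_request) are hypothetical and are not part of the disclosed implementation.

```python
# Hypothetical sketch of the resource-manager flow: receive a request,
# identify capable providers, select one, and return it so the requesting
# computing device can be connected or redirected to it.
from dataclasses import dataclass, field


@dataclass
class ResourceProvider:
    name: str
    resources: set = field(default_factory=set)  # e.g., {"app-server", "database"}


class ResourceManager:
    def __init__(self, providers):
        self.providers = providers

    def route_request(self, requested: str) -> ResourceProvider:
        # Identify providers capable of supplying the requested resource.
        capable = [p for p in self.providers if requested in p.resources]
        if not capable:
            raise LookupError(f"no provider offers {requested!r}")
        # Trivial selection policy; a real manager could balance load, cost, etc.
        return capable[0]


providers = [ResourceProvider("3002a", {"database"}),
             ResourceProvider("3002b", {"app-server"})]
print(ResourceManager(providers).route_request("app-server").name)  # -> 3002b
```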
Fig. 31 shows an example of a computing device 3100 and a mobile computing device 3150 that may be used to implement the techniques described in this disclosure. The computing device 3100 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Mobile computing device 3150 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to be limiting.
The computing device 3100 includes a processor 3102, a memory 3104, a storage device 3106, a high-speed interface 3108 connecting to the memory 3104 and multiple high-speed expansion ports 3110, and a low-speed interface 3112 connecting to the low-speed expansion port 3114 and the storage device 3106. Each of the processor 3102, the memory 3104, the storage device 3106, the high-speed interface 3108, the high-speed expansion ports 3110, and the low-speed interface 3112 are interconnected using various buses, and may be mounted on a common motherboard or in other manners as appropriate. The processor 3102 can process instructions for execution within the computing device 3100, including instructions stored in the memory 3104 or on the storage device 3106, to display graphical information for a GUI on an external input/output device, such as a display 3116 coupled to the high-speed interface 3108. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Moreover, multiple computing devices may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system). Thus, as the term is used herein, where multiple functions are described as being performed by a "processor," this encompasses embodiments wherein the multiple functions are performed by any number of processors of any number of computing devices. Moreover, where a function is described as being performed by a "processor," this encompasses embodiments wherein the function is performed by any number of processors of any number of computing devices (e.g., in a distributed computing system).
The memory 3104 stores information within the computing device 3100. In some implementations, the memory 3104 is one or more volatile memory units. In some implementations, the memory 3104 is one or more non-volatile memory units. The memory 3104 may also be another form of computer-readable medium, such as a magnetic or optical disk.
The storage device 3106 is capable of providing mass storage for the computing device 3100. In some implementations, the storage device 3106 may be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices including devices in a storage area network or other configurations. The instructions may be stored in an information carrier. The instructions, when executed by one or more processing devices (e.g., the processor 3102), perform one or more methods (e.g., the methods described above). The instructions may also be stored by one or more storage devices, such as a computer-readable or machine-readable medium (e.g., memory 3104, storage device 3106, or memory on processor 3102).
The high-speed interface 3108 manages bandwidth-intensive operations for the computing device 3100, while the low-speed interface 3112 manages lower bandwidth-intensive operations. This allocation of functions is merely an example. In some implementations, the high-speed interface 3108 is coupled to the memory 3104, to the display 3116 (e.g., through a graphics processor or accelerator), and to the high-speed expansion ports 3110, which may accept various expansion cards (not shown). In the implementation, the low-speed interface 3112 is coupled to the storage device 3106 and the low-speed expansion port 3114. The low-speed expansion port 3114, which may include various communication ports (e.g., USB, Bluetooth®, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices (e.g., a keyboard, a pointing device, a scanner), or, for example through a network adapter, to a networking device (e.g., a switch or router).
The computing device 3100 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 3120, or multiple times in a group of such servers. In addition, it may be implemented in a personal computer such as a laptop computer 3122. It may also be implemented as part of a rack server system 3124. Alternatively, components from the computing device 3100 may be combined with other components in a mobile device (not shown), such as the mobile computing device 3150. Each of such devices may contain one or more of the computing device 3100 and the mobile computing device 3150, and an entire system may be made up of multiple computing devices communicating with each other.
The mobile computing device 3150 includes a processor 3152, memory 3164, input/output devices (e.g., display 3154), a communication interface 3166, and transceiver 3168, among other components. The mobile computing device 3150 may also have a storage device (e.g., a micro drive or other device) to provide additional storage. Each of the processor 3152, memory 3164, display 3154, communication interface 3166, and transceiver 3168 are interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate.
The processor 3152 may execute instructions within the mobile computing device 3150, including instructions stored in the memory 3164. Processor 3152 may be implemented as a chipset of chips that include separate and multiple analog and digital processors. The processor 3152 may provide, for example, for coordination of the other components of the mobile computing device 3150, such as control of user interfaces, applications run by the mobile computing device 3150, and wireless communication by the mobile computing device 3150.
The processor 3152 may communicate with a user through a control interface 3158 and a display interface 3156 coupled to the display 3154. The display 3154 may be, for example, a TFT (thin film transistor liquid crystal display) display or an OLED (organic light emitting diode) display or other suitable display technology. Display interface 3156 may include appropriate circuitry for driving display 3154 to present graphical and other information to a user. The control interface 3158 may receive commands from a user and convert the commands for submission to the processor 3152. In addition, an external interface 3162 may provide communication with the processor 3152 in order to enable near area communication of the mobile computing device 3150 with other devices. External interface 3162 may provide, for example, wired communication in some implementations, or wireless communication in other implementations, and multiple interfaces may also be used.
The memory 3164 stores information within the mobile computing device 3150. The memory 3164 may be implemented as one or more of one or more computer-readable media, one or more volatile memory units, or one or more non-volatile memory units. Expansion memory 3174 may also be provided and connected to mobile computing device 3150 through expansion interface 3172, which expansion interface 3172 may include, for example, a SIMM (Single in line memory Module) card interface. Expansion memory 3174 may provide additional storage for mobile computing device 3150 or may also store applications or other information for mobile computing device 3150. In particular, expansion memory 3174 may include instructions to perform or supplement the processes described above, and may also include secure information. Thus, for example, expansion memory 3174 may be provided as a security module for mobile computing device 3150, and may be programmed with instructions that allow for secure use of mobile computing device 3150. In addition, secure applications may be provided via the SIMM card along with additional information (e.g., placing identifying information on the SIMM card in a non-hackable manner).
The memory may include, for example, flash memory and/or NVRAM memory (non-volatile random access memory), as discussed below. In some implementations, instructions are stored in an information carrier that, when executed by one or more processing devices (e.g., processor 3152), perform one or more methods (e.g., the methods described above). The instructions may also be stored by one or more storage devices, such as one or more computer-readable or machine-readable media (e.g., memory 3164, expansion memory 3174, or memory on processor 3152). In some implementations, the instructions may be received in a propagated signal, for example, via transceiver 3168 or external interface 3162.
The mobile computing device 3150 may communicate wirelessly through the communication interface 3166, which may include digital signal processing circuitry where necessary. The communication interface 3166 may provide for communications under various modes or protocols, such as GSM (Global System for Mobile communications) voice calls, SMS (Short Message Service), EMS (Enhanced Messaging Service) or MMS (Multimedia Messaging Service) messaging, CDMA (Code Division Multiple Access), TDMA (Time Division Multiple Access), PDC (Personal Digital Cellular), WCDMA (Wideband Code Division Multiple Access), CDMA2000, or GPRS (General Packet Radio Service), among others. Such communication may occur, for example, through the transceiver 3168 using a radio frequency. In addition, short-range communication may occur, such as using a Bluetooth®, Wi-Fi™, or other such transceiver (not shown). In addition, a GPS (Global Positioning System) receiver module 3170 may provide additional navigation-related and location-related wireless data to the mobile computing device 3150, which may be used as appropriate by applications running on the mobile computing device 3150.
The mobile computing device 3150 may also communicate audibly using the audio codec 3160, which may receive spoken information from a user and convert it to usable digital information. The audio codec 3160 may likewise generate audible sound for a user, such as through a speaker, e.g., in an earpiece of the mobile computing device 3150. Such sound may include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, etc.), and may also include sound generated by applications operating on the mobile computing device 3150.
The mobile computing device 3150 may be implemented in a number of different forms, as shown in the figures. For example, it may be implemented as a cellular telephone 3180. It may also be implemented as part of a smart phone 3182, a personal digital assistant, or other similar mobile device.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These different implementations may include implementations in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be special purpose or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a client computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include Local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computing system may include a client and a server. Clients and servers are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
In some implementations, the various modules described herein may be separated, combined, or incorporated into a single or combined module. The modules depicted in the figures are not intended to limit the systems described herein to the software architecture shown therein.
Elements of different implementations described herein may be combined to form other implementations not specifically set forth above. Elements may be excluded from the processes, computer programs, databases, etc. described herein without adversely affecting their operation. Additionally, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. Various individual elements may be combined into one or more individual elements to perform the functions described herein.
Throughout the description, where devices and systems are described as having, containing, or comprising specific components, or where processes and methods are described as having, containing, or comprising specific steps, it is contemplated that, additionally, there are devices and systems of the present invention that consist essentially of, or consist of, the recited components, and that there are processes and methods according to the present invention that consist essentially of, or consist of, the recited processing steps.
Various described embodiments of the invention may be used in conjunction with one or more other embodiments unless technically incompatible. It should be understood that the order of steps or order for performing certain actions is not important so long as the invention remains operable. Furthermore, two or more steps or actions may be performed simultaneously.
Although the invention has been shown and described with particular reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (148)

1. A method for automatically processing a 3D image of a subject to identify and/or characterize cancerous lesions within the subject, the method comprising:
(a) Receiving, by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality;
(b) Automatically detecting, by the processor, one or more hotspots within the 3D functional image using a machine learning module, each hotspot corresponding to a localized region of increased intensity relative to its surroundings and representing a potentially cancerous lesion within the subject, thereby establishing one or both of (i) and (ii) as follows: (i) a list of hotspots identifying, for each hotspot, a location of the hotspot, and (ii) a 3D hotspot graph identifying, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image; and
(c) Storing and/or providing the list of hotspots and/or the 3D hotspot graph for display and/or further processing.
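By way of illustration only, the following Python sketch shows one possible shape of the claim-1 flow. It assumes a pre-trained model object exposing a predict() method that returns a per-voxel hotspot probability map; the model, the threshold, and all names are hypothetical assumptions, not the patent's disclosed implementation.

```python
# Hedged sketch: derive a hotspot list and a 3D hotspot map from a 3D
# functional image (e.g., PET), given a hypothetical voxel-wise model.
import numpy as np
from scipy import ndimage


def detect_hotspots(functional_image: np.ndarray, model, threshold: float = 0.5):
    prob = model.predict(functional_image)            # per-voxel probabilities (assumed API)
    hotspot_map, n = ndimage.label(prob > threshold)  # labeled 3D hotspot volumes
    centers = ndimage.center_of_mass(prob, hotspot_map, range(1, n + 1))
    hotspot_list = [{"id": i + 1, "location": tuple(c)} for i, c in enumerate(centers)]
    return hotspot_list, hotspot_map                  # store and/or provide downstream
```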
2. The method of claim 1, wherein the machine learning module receives at least a portion of the 3D functional image as input and automatically detects the one or more hotspots based at least in part on intensities of voxels of the received portion of the 3D functional image.
3. The method of claim 1 or 2, wherein the machine learning module receives as input a 3D segmentation map identifying one or more volumes of interest (VOIs) within the 3D functional image, each VOI corresponding to a particular target tissue region and/or a particular anatomical region within the subject.
4. The method of any one of the preceding claims,
comprising receiving, by the processor, a 3D anatomical image of the subject obtained using an anatomical imaging modality, wherein the 3D anatomical image comprises a graphical representation of tissue within the subject,
and wherein the machine learning module receives at least two input channels, the input channels including a first input channel corresponding to at least a portion of the 3D anatomical image and a second input channel corresponding to at least a portion of the 3D functional image.
5. The method of claim 4, wherein the machine learning module receives as input a 3D segmentation map that identifies one or more volumes of interest (VOIs) within the 3D functional image and/or the 3D anatomical image, each VOI corresponding to a particular target tissue region and/or a particular anatomical region.
6. The method of claim 5, comprising automatically segmenting the 3D anatomical image by the processor, thereby creating the 3D segmentation map.
7. The method of any one of the preceding claims, wherein the machine learning module is a region-specific machine learning module that receives as input specific portions of the 3D functional image corresponding to one or more specific tissue regions and/or anatomical regions of the subject.
8. The method of any of the preceding claims, wherein the machine learning module generates the list of hotspots as an output.
9. The method of any of the preceding claims, wherein the machine learning module generates the 3D hotspot graph as an output.
10. The method of any one of the preceding claims, comprising:
(d) Determining, by the processor, for each hotspot of at least a portion of the hotspots, a lesion likelihood classification corresponding to a likelihood that the hotspot represents a lesion within the subject.
11. The method of claim 10, wherein step (d) comprises using the machine learning module to determine the lesion likelihood classification for each hotspot of the portion.
12. The method of claim 10, wherein step (d) comprises using a second machine learning module to determine the lesion likelihood classification for each hotspot.
13. The method of claim 12, comprising determining, by the processor, for each hotspot, a set of one or more hotspot features, and using the set of one or more hotspot features as input to the second machine learning module.
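As a purely illustrative sketch of claims 12-13, one could compute a small feature vector per hotspot and pass it to a second, separately trained classifier; the features shown and the predict_proba-style interface are assumptions, not the disclosed method.

```python
# Hypothetical per-hotspot features fed to a second machine learning module.
import numpy as np


def hotspot_features(image: np.ndarray, hotspot_map: np.ndarray, label: int) -> np.ndarray:
    voxels = image[hotspot_map == label]
    return np.array([voxels.max(), voxels.mean(), voxels.size])  # peak, mean, volume


def lesion_likelihoods(image, hotspot_map, second_model):
    labels = [int(l) for l in np.unique(hotspot_map) if l != 0]
    if not labels:
        return {}
    feats = np.stack([hotspot_features(image, hotspot_map, l) for l in labels])
    probs = second_model.predict_proba(feats)[:, 1]  # assumed sklearn-style classifier
    return dict(zip(labels, probs))                  # hotspot id -> lesion likelihood
```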
14. The method of any one of claims 10 to 13, comprising:
(e) Selecting, by the processor, based at least in part on the lesion likelihood classifications of the hotspots, a subset of the one or more hotspots corresponding to hotspots having a high likelihood of corresponding to cancerous lesions.
15. The method of any one of the preceding claims, comprising:
(f) Adjusting, by the processor, intensities of voxels of the 3D functional image to correct intensity bleed from one or more high intensity volumes of the 3D functional image, each of the one or more high intensity volumes corresponding to a high uptake tissue region within the subject associated with normally high radiopharmaceutical uptake.
16. The method of claim 15, wherein step (f) comprises correcting intensity bleed from a plurality of high intensity volumes one at a time in a sequential manner.
17. The method of claim 15 or 16, wherein the one or more high intensity volumes correspond to one or more high uptake tissue regions selected from the group consisting of kidney, liver, and bladder.
18. The method of any one of the preceding claims, comprising:
(g) Determining, by the processor, for each of at least a portion of the one or more hotspots, a corresponding lesion index indicative of a level of radiopharmaceutical uptake within a potential lesion corresponding to the hotspot and/or a size of the potential lesion.
19. The method of claim 18, wherein step (g) comprises comparing intensities of one or more voxels associated with the hotspot to one or more reference values, each reference value associated with a particular reference tissue region within the subject and determined based on intensities of a reference volume corresponding to the reference tissue region.
20. The method of claim 19, wherein the one or more reference values comprise one or more members selected from the group consisting of an aortic reference value associated with an aortic portion of the subject and a liver reference value associated with a liver of the subject.
21. The method of claim 19 or 20, wherein, for at least one particular reference value associated with a particular reference tissue region, determining the particular reference value comprises fitting intensities of voxels within a particular reference volume corresponding to the particular reference tissue region to a multicomponent mixture model.
22. The method of any one of claims 18-21, comprising calculating, using the determined lesion index values, an overall risk index for the subject indicative of the subject's cancer status and/or risk.
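The patent does not fix an aggregation formula for the overall risk index; purely as an illustration, one plausible choice is a volume-weighted mean of the lesion index values:

```python
# Illustrative aggregation only; the weighting scheme is an assumption.
def overall_risk_index(lesion_indices, volumes):
    total = sum(volumes)
    if total == 0:
        return 0.0
    return sum(li * v for li, v in zip(lesion_indices, volumes)) / total


print(overall_risk_index([0.8, 0.3], [5.0, 2.0]))  # volume-weighted mean -> ~0.657
```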
23. The method of any one of the preceding claims, comprising determining, by the processor, for each hotspot, an anatomical classification corresponding to a particular anatomical region and/or group of anatomical regions within the subject in which the potential cancerous lesion represented by the hotspot is determined to be located.
24. The method of any one of the preceding claims, comprising:
(h) Causing, by the processor, a graphical representation of at least a portion of the one or more hotspots to be rendered for display within a Graphical User Interface (GUI) for viewing by a user.
25. The method of claim 24, comprising:
(i) Receiving, by the processor, via the GUI, a user selection of a subset of the one or more hotspots that the user, upon review, identifies as likely to represent potential cancerous lesions within the subject.
26. The method of any of the preceding claims, wherein the 3D functional image comprises a PET or SPECT image obtained after administration of an agent to the subject.
27. The method of claim 26, wherein the agent comprises a PSMA binding agent.
28. The method of claim 26 or 27, wherein the agent comprises 18F.
29. The method of claim 27 or 28, wherein the agent comprises [18F]DCFPyL.
30. The method of claim 27 or 28, wherein the agent comprises PSMA-11.
31. The method of claim 26 or 27, wherein the agent comprises one or more members selected from the group consisting of 99mTc, 68Ga, 177Lu, 225Ac, 111In, 123I, 124I, and 131I.
32. The method of any one of the preceding claims, wherein the machine learning module implements a neural network.
33. The method of any one of the preceding claims, wherein the processor is a processor of a cloud-based system.
34. A method for automatically processing a 3D image of a subject to identify and/or characterize cancerous lesions within the subject, the method comprising:
(a) Receiving, by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality;
(b) Receiving, by the processor, a 3D anatomical image of the subject obtained using an anatomical imaging modality, wherein the 3D anatomical image comprises a graphical representation of tissue within the subject;
(c) Automatically detecting, by the processor, one or more hotspots within the 3D functional image using a machine learning module, each hotspot corresponding to a localized region of increased intensity relative to its surroundings and representing a potentially cancerous lesion within the subject, thereby establishing one or both of (i) and (ii) as follows: (i) A list of hotspots identifying, for each hotspot, a location of the hotspot, and (ii) a 3D hotspot graph identifying, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image,
wherein the machine learning module receives at least two input channels, the input channels including a first input channel corresponding to at least a portion of the 3D anatomical image and a second input channel corresponding to at least a portion of the 3D functional image and/or anatomical information derived therefrom; and
(d) Storing and/or providing the list of hotspots and/or the 3D hotspot graph for display and/or further processing.
35. A method for automatically processing a 3D image of a subject to identify and/or characterize cancerous lesions within the subject, the method comprising:
(a) Receiving, by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality;
(b) Automatically detecting, by the processor, one or more hotspots within the 3D functional image using a first machine learning module, each hotspot corresponding to a localized region of increased intensity relative to its surroundings and representing a potential cancerous lesion within the subject, thereby establishing a list of hotspots identifying a location of the hotspot for each hotspot;
(c) Automatically determining, by the processor, for each of the one or more hotspots, a corresponding 3D hotspot volume within the 3D functional image using a second machine learning module and the hotspot list, thereby establishing a 3D hotspot graph; and
(d) Storing and/or providing the list of hotspots and/or the 3D hotspot graph for display and/or further processing.
36. The method of claim 35, comprising:
(e) Determining, by the processor, for each hotspot of at least a portion of the hotspots, a lesion likelihood classification corresponding to a likelihood that the hotspot represents a lesion within the subject.
37. The method of claim 36, wherein step (e) comprises using a third machine learning module to determine the lesion likelihood classification for each hotspot.
38. The method of any one of claims 35-37, comprising:
(f) Selecting, by the processor, based at least in part on the lesion likelihood classifications of the hotspots, a subset of the one or more hotspots corresponding to hotspots having a high likelihood of corresponding to cancerous lesions.
39. A method of measuring intensity values within a reference volume corresponding to a reference tissue region in order to avoid effects from tissue regions associated with low radiopharmaceutical uptake, the method comprising:
(a) Receiving, by a processor of a computing device, a 3D functional image of a subject, the 3D functional image being obtained using a functional imaging modality;
(b) Identifying, by the processor, the reference volume within the 3D functional image;
(c) Fitting, by the processor, a multicomponent mixture model to intensities of voxels within the reference volume;
(d) Identifying, by the processor, a dominant mode of the multicomponent mixture model;
(e) Determining, by the processor, a measure of intensity corresponding to the dominant mode, thereby determining a reference intensity value corresponding to a measure of intensity of voxels that are (i) within the reference volume and (ii) associated with the dominant mode;
(f) Detecting, by the processor, one or more hotspots within the 3D functional image corresponding to potentially cancerous lesions; and
(g) For each hotspot of at least a portion of the detected hotspots, determining, by the processor, a lesion index value using at least the reference intensity value.
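A minimal sketch of the claim-39 measurement follows, assuming a two-component Gaussian mixture as the multicomponent mixture model and taking the mean of the largest-weight component as the dominant-mode intensity measure; the component count and the use of scikit-learn are assumptions, not the disclosed method.

```python
# Fit a mixture model to reference-volume intensities and read off the
# dominant mode, so a low-uptake sub-population does not skew the reference.
import numpy as np
from sklearn.mixture import GaussianMixture


def reference_intensity(image: np.ndarray, reference_mask: np.ndarray) -> float:
    intensities = image[reference_mask].reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(intensities)
    dominant = int(np.argmax(gmm.weights_))  # dominant mode = largest-weight component
    return float(gmm.means_[dominant, 0])    # measure of intensity for that mode
```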
40. A method of correcting intensity bleed from a high uptake tissue region within a subject associated with normally high radiopharmaceutical uptake, the method comprising:
(a) Receiving, by a processor of a computing device, a 3D functional image of the subject, the 3D functional image being obtained using a functional imaging modality;
(b) Identifying, by the processor, a high intensity volume within the 3D functional image, the high intensity volume corresponding to a particular high uptake tissue region in which high radiopharmaceutical uptake occurs normally;
(c) Identifying, by the processor, a suppression volume within the 3D functional image based on the identified high intensity volume, the suppression volume corresponding to a volume that is outside a boundary of the identified high intensity volume and within a predetermined attenuation distance from the boundary;
(d) Determining, by the processor, a background image corresponding to the 3D functional image, wherein intensities of voxels within the high intensity volume are replaced with an interpolated value determined based on intensities of voxels of the 3D functional image within the suppression volume;
(e) Determining, by the processor, an estimated image by subtracting intensities of voxels of the background image from intensities of voxels of the 3D functional image;
(f) Determining, by the processor, a suppression map by:
extrapolating intensities of voxels of the estimated image corresponding to the high intensity volume to locations of voxels within the suppression volume, to determine intensities of voxels of the suppression map corresponding to the suppression volume; and
setting intensities of voxels of the suppression map corresponding to locations outside the suppression volume to zero; and
(g) Adjusting, by the processor, intensities of voxels of the 3D functional image based on the suppression map, thereby correcting intensity bleed from the high intensity volume.
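A simplified sketch of steps (b) through (g) follows, assuming binary masks and using a mean fill and a Gaussian blur as stand-ins for the interpolation and extrapolation operators, which the claim does not pin down.

```python
# Hedged 3D sketch of suppression-map-based intensity bleed correction.
import numpy as np
from scipy import ndimage


def correct_intensity_bleed(image, high_mask, decay_dist_vox=5):
    # (c) suppression volume: shell outside the boundary, within a decay distance
    dist = ndimage.distance_transform_edt(~high_mask)
    supp_mask = (dist > 0) & (dist <= decay_dist_vox)

    # (d) background image: replace the high intensity volume with a value
    # interpolated (here: crudely averaged) from the suppression volume
    background = image.astype(float).copy()
    background[high_mask] = image[supp_mask].mean()

    # (e) estimated image = functional image minus background
    estimate = image - background

    # (f) suppression map: extrapolate the estimate outward, then zero it
    # everywhere outside the suppression volume
    suppression = ndimage.gaussian_filter(np.where(high_mask, estimate, 0.0), sigma=2.0)
    suppression[~supp_mask] = 0.0

    # (g) adjust voxel intensities to remove the estimated bleed
    return image - suppression
```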
41. The method of claim 40, comprising performing steps (b) through (g) in a sequential manner for each of a plurality of high intensity volumes, thereby correcting intensity bleed from each of the plurality of high intensity volumes.
42. The method of claim 41, wherein the plurality of high intensity volumes comprises one or more members selected from the group consisting of kidney, liver and bladder.
43. A method for automatically processing a 3D image of a subject to identify and/or characterize cancerous lesions within the subject, the method comprising:
(a) Receiving, by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality;
(b) Automatically detecting, by the processor, one or more hotspots within the 3D functional image, each hotspot corresponding to a localized region of increased intensity relative to its surroundings and representing a potential cancerous lesion within the subject;
(c) Causing, by the processor, a graphical representation of the one or more hotspots to be presented for display within an interactive Graphical User Interface (GUI);
(d) Receiving, by the processor, via the interactive GUI, a user selection of a final set of hotspots including at least a portion of the one or more automatically detected hotspots; and
(e) Storing and/or providing the final set of hotspots for display and/or further processing.
44. The method of claim 43, comprising:
(f) Receiving, by the processor, via the GUI, a user selection of one or more additional, user-identified hotspots for inclusion in the final set of hotspots; and
(g) Updating, by the processor, the final set of hotspots to include the one or more additional, user-identified hotspots.
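As a hypothetical illustration of claims 43-44 only, the curation of a final hotspot set could be tracked as simple selection state; the class and method names below are invented for the example.

```python
# Track a user-curated final set of hotspots, seeded from automatic detections.
class HotspotSelection:
    def __init__(self, detected_ids):
        self.final_set = set(detected_ids)  # start from automatically detected hotspots

    def deselect(self, hotspot_id):
        self.final_set.discard(hotspot_id)  # user rejects a detection

    def add_user_hotspot(self, hotspot_id):
        self.final_set.add(hotspot_id)      # user adds a manually identified hotspot


sel = HotspotSelection([1, 2, 3])
sel.deselect(2)
sel.add_user_hotspot(10)
print(sorted(sel.final_set))  # -> [1, 3, 10]
```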
45. The method of claim 43 or 44, wherein step (b) comprises using one or more machine learning modules.
46. A method for automatically processing a 3D image of a subject to identify and/or characterize cancerous lesions within the subject, the method comprising:
(a) Receiving, by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality;
(b) Automatically detecting, by the processor, one or more hotspots within the 3D functional image, each hotspot corresponding to a localized region of increased intensity relative to its surroundings and representing a potential cancerous lesion within the subject;
(c) Automatically determining, by the processor, for each of at least a portion of the one or more hotspots, an anatomical classification corresponding to a particular anatomical region and/or group of anatomical regions within the subject in which the potential cancerous lesion represented by the hotspot is determined to be located; and
(d) Storing and/or providing, for display and/or further processing, an identification of the one or more hotspots along with, for each hotspot, the corresponding anatomical classification.
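As one hedged illustration of claims 46-47, a hotspot's anatomical classification could be read from a co-registered 3D segmentation map by majority vote over the hotspot's voxels; the region codes and names below are invented for the example.

```python
# Majority-vote anatomical labeling of hotspots from a segmentation map.
import numpy as np

REGIONS = {1: "prostate", 2: "pelvic lymph node", 3: "bone"}  # hypothetical codes


def classify_anatomically(hotspot_map: np.ndarray, segmentation: np.ndarray) -> dict:
    classifications = {}
    for label in np.unique(hotspot_map):
        if label == 0:
            continue  # background
        codes = segmentation[hotspot_map == label].astype(int)
        classifications[int(label)] = REGIONS.get(int(np.bincount(codes).argmax()), "unknown")
    return classifications  # hotspot id -> anatomical region name
```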
47. The method of claim 46, wherein step (b) comprises using one or more machine learning modules.
48. A system for automatically processing a 3D image of a subject to identify and/or characterize cancerous lesions within the subject, the system comprising:
a processor of a computing device; and
a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to:
(a) Receiving a 3D functional image of the subject obtained using a functional imaging modality;
(b) Automatically detecting, using a machine learning module, one or more hotspots within the 3D functional image, each hotspot corresponding to a localized region of increased intensity relative to its surroundings and representing a potential cancerous lesion within the subject, thereby establishing one or both of (i) and (ii) as follows: (i) a list of hotspots identifying, for each hotspot, a location of the hotspot, and (ii) a 3D hotspot graph identifying, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image; and
(c) Storing and/or providing the list of hotspots and/or the 3D hotspot graph for display and/or further processing.
49. The system of claim 48, wherein the machine learning module receives at least a portion of the 3D functional image as input and automatically detects the one or more hotspots based at least in part on intensities of voxels of the received portion of the 3D functional image.
50. The system of claim 48 or 49, wherein the machine learning module receives as input a 3D segmentation map that identifies one or more volumes of interest (VOIs) within the 3D functional image, each VOI corresponding to a particular target tissue region and/or a particular anatomical region within the subject.
51. The system of any one of claims 48 to 50, wherein the instructions cause the processor to:
receiving a 3D anatomical image of the subject obtained using an anatomical imaging modality, wherein the 3D anatomical image comprises a graphical representation of tissue within the subject,
and wherein the machine learning module receives at least two input channels, the input channels including a first input channel corresponding to at least a portion of the 3D anatomical image and a second input channel corresponding to at least a portion of the 3D functional image.
52. The system of claim 51, wherein the machine learning module receives as input a 3D segmentation map that identifies one or more volumes of interest (VOIs) within the 3D functional image and/or the 3D anatomical image, each VOI corresponding to a particular target tissue region and/or a particular anatomical region.
53. The system of claim 52, wherein the instructions cause the processor to automatically segment the 3D anatomical image, thereby establishing the 3D segmentation map.
54. The system of any of claims 48-53, wherein the machine learning module is a region-specific machine learning module that receives as input specific portions of the 3D functional image corresponding to one or more specific tissue regions and/or anatomical regions of the subject.
55. The system of any of claims 48-54, wherein the machine learning module generates the list of hotspots as an output.
56. The system of any of claims 48-55, wherein the machine learning module generates the 3D hotspot graph as an output.
57. The system of any one of claims 48 to 56, wherein the instructions cause the processor to:
(d) Determining, for each hotspot of at least a portion of the hotspots, a lesion likelihood classification corresponding to a likelihood that the hotspot represents a lesion within the subject.
58. The system of claim 57, wherein in step (d), the instructions cause the processor to determine the lesion likelihood classification for each hotspot of the portion using the machine learning module.
59. The system of claim 57, wherein in step (d) the instructions cause the processor to use a second machine learning module to determine the lesion likelihood classification for each hotspot.
60. The system of claim 59, wherein the instructions cause the processor to determine a set of one or more hotspot features for each hotspot and use the set of one or more hotspot features as input to the second machine learning module.
61. The system of any one of claims 57-60, wherein the instructions cause the processor to:
(e) Selecting, based at least in part on the lesion likelihood classifications of the hotspots, a subset of the one or more hotspots corresponding to hotspots having a high likelihood of corresponding to cancerous lesions.
62. The system of any one of claims 48 to 61, wherein the instructions cause the processor to:
(f) Adjusting intensities of voxels of the 3D functional image to correct intensity bleed from one or more high intensity volumes of the 3D functional image, each of the one or more high intensity volumes corresponding to a high uptake tissue region within the subject associated with normally high radiopharmaceutical uptake.
63. The system of claim 62, wherein in step (f), the instructions cause the processor to correct intensity bleed from a plurality of high intensity volumes one at a time in a sequential manner.
64. The system of claim 62 or 63, wherein the one or more high intensity volumes correspond to one or more high uptake tissue regions selected from the group consisting of kidney, liver, and bladder.
65. The system of any one of claims 48 to 64, wherein the instructions cause the processor to:
(g) Determining, for each of at least a portion of the one or more hotspots, a corresponding lesion index indicative of a level of radiopharmaceutical uptake within a potential lesion corresponding to the hotspot and/or a size of the potential lesion.
66. The system of claim 65, wherein in step (g), the instructions cause the processor to compare intensities of one or more voxels associated with the hotspot to one or more reference values, each reference value associated with a particular reference tissue region within the subject and determined based on intensities of a reference volume corresponding to the reference tissue region.
67. The system of claim 66, wherein the one or more reference values comprise one or more members selected from the group consisting of an aortic reference value associated with an aortic portion of the subject and a liver reference value associated with a liver of the subject.
68. The system of claim 66 or 67, wherein, for at least one particular reference value associated with a particular reference tissue region, the instructions cause the processor to determine the particular reference value by fitting intensities of voxels within a particular reference volume corresponding to the particular reference tissue region to a multicomponent mixture model.
69. The system of any one of claims 65-68, wherein the instructions cause the processor to calculate, using the determined lesion index values, an overall risk index for the subject indicative of a cancer status and/or risk for the subject.
70. The system of any of claims 48-69, wherein the instructions cause the processor to determine, for each hotspot, an anatomical classification corresponding to a particular anatomical region and/or group of anatomical regions within the subject in which the potential cancerous lesion represented by the hotspot is determined to be located.
71. The system of any one of claims 48-70, wherein the instructions cause the processor to:
(h) Causing a graphical representation of at least a portion of the one or more hotspots to be rendered for display within a Graphical User Interface (GUI) for viewing by a user.
72. The system of claim 71, wherein the instructions cause the processor to:
(i) Receiving, via the GUI, a user selection of a subset of the one or more hotspots that the user, upon viewing, identifies as likely to represent potential cancerous lesions within the subject.
73. The system of any one of claims 48-72, wherein the 3D functional image comprises a PET or SPECT image obtained after administration of an agent to the subject.
74. The system of claim 73, wherein the agent comprises a PSMA binding agent.
75. The system of claim 73 or 74, wherein the agent comprises 18F.
76. The system of claim 74, wherein the agent comprises [18F]DCFPyL.
77. The system of claim 74 or 75, wherein the agent comprises PSMA-11.
78. The system of claim 73 or 74, wherein the agent comprises one or more members selected from the group consisting of 99mTc, 68Ga, 177Lu, 225Ac, 111In, 123I, 124I, and 131I.
79. The system of any one of claims 48-78, wherein the machine learning module implements a neural network.
80. The system of any one of claims 48-79, wherein the processor is a processor of a cloud-based system.
81. A system for automatically processing a 3D image of a subject to identify and/or characterize cancerous lesions within the subject, the system comprising:
a processor of a computing device; and
a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to:
(a) Receiving a 3D functional image of the subject obtained using a functional imaging modality;
(b) Receiving a 3D anatomical image of the subject obtained using an anatomical imaging modality, wherein the 3D anatomical image comprises a graphical representation of tissue within the subject;
(c) Automatically detecting, using a machine learning module, one or more hotspots within the 3D functional image, each hotspot corresponding to a localized region of increased intensity relative to its surroundings and representing a potential cancerous lesion within the subject, thereby establishing one or both of (i) and (ii) as follows: (i) A list of hotspots identifying, for each hotspot, a location of the hotspot, and (ii) a 3D hotspot graph identifying, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image,
wherein the machine learning module receives at least two input channels, the input channels including a first input channel corresponding to at least a portion of the 3D anatomical image and a second input channel corresponding to at least a portion of the 3D functional image and/or anatomical information derived therefrom; and
(d) Storing and/or providing the list of hotspots and/or the 3D hotspot graph for display and/or further processing.
82. A system for automatically processing a 3D image of a subject to identify and/or characterize cancerous lesions within the subject, the system comprising:
a processor of a computing device; and
a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to:
(a) Receiving a 3D functional image of the subject obtained using a functional imaging modality;
(b) Automatically detecting, using a first machine learning module, one or more hotspots within the 3D functional image, each hotspot corresponding to a localized region of increased intensity relative to its surroundings and representing a potential cancerous lesion within the subject, thereby establishing a list of hotspots identifying a location of the hotspot for each hotspot;
(c) Automatically determining, for each of the one or more hotspots, a corresponding 3D hotspot volume within the 3D functional image using a second machine learning module and the hotspot list, thereby establishing a 3D hotspot graph; and
(d) Storing and/or providing the list of hotspots and/or the 3D hotspot graph for display and/or further processing.
83. The system of claim 82, wherein the instructions cause the processor to:
(e) Determining, for each hotspot of at least a portion of the hotspots, a lesion likelihood classification corresponding to a likelihood that the hotspot represents a lesion within the subject.
84. The system of claim 83, wherein in step (e) the instructions cause the processor to use a third machine learning module to determine the lesion likelihood classification for each hotspot.
85. The system of any one of claims 82-84, wherein the instructions cause the processor to:
(f) Selecting, based at least in part on the lesion likelihood classifications of the hotspots, a subset of the one or more hotspots corresponding to hotspots having a high likelihood of corresponding to cancerous lesions.
86. A system for measuring intensity values within a reference volume corresponding to a reference tissue region in order to avoid effects from tissue regions associated with low radiopharmaceutical uptake, the system comprising:
a processor of a computing device; and
a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to:
(a) Receiving a 3D functional image of a subject, the 3D functional image being obtained using a functional imaging modality;
(b) Identifying the reference volume within the 3D functional image;
(c) Fitting a multicomponent mixture model to intensities of voxels within the reference volume;
(d) Identifying a dominant mode of the multicomponent mixture model;
(e) Determining a measure of intensity corresponding to the dominant mode, thereby determining a reference intensity value corresponding to a measure of intensity of voxels that are (i) within the reference volume and (ii) associated with the dominant mode;
(f) Detecting one or more hotspots corresponding to potentially cancerous lesions within the 3D functional image; and
(g) For each hotspot of at least a portion of the detected hotspots, a lesion index value is determined using at least the reference intensity value.
87. A system for correcting intensity bleed from a high uptake tissue region within a subject associated with normally high radiopharmaceutical uptake, the system comprising:
a processor of a computing device; and
a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to:
(a) Receiving a 3D functional image of the subject, the 3D functional image being obtained using a functional imaging modality;
(b) Identifying a high intensity volume within the 3D functional image, the high intensity volume corresponding to a particular high uptake tissue region in which high radiopharmaceutical uptake occurs under normal conditions;
(c) Identifying a suppression volume within the 3D functional image based on the identified high intensity volume, the suppression volume corresponding to a volume that is outside a boundary of the identified high intensity volume and within a predetermined decay distance from the boundary;
(d) Determining a background image corresponding to the 3D functional image, wherein intensities of voxels within the high intensity volume are replaced with an interpolation value determined based on intensities of voxels of the 3D functional image within the suppression volume;
(e) Determining an estimated image by subtracting intensities of voxels of the background image from intensities of voxels of the 3D functional image;
(f) Determining a suppression map by:
extrapolating intensities of voxels of the estimated image corresponding to the high intensity volume to locations of voxels within the suppression volume, to determine intensities of voxels of the suppression map corresponding to the suppression volume; and
setting intensities of voxels of the suppression map corresponding to locations outside the suppression volume to zero; and
(g) Adjusting intensities of voxels of the 3D functional image based on the suppression map, thereby correcting intensity bleed from the high intensity volume.
88. The system of claim 87, wherein the instructions cause the processor to perform steps (b) through (g) for each of a plurality of high intensity volumes in a sequential manner, thereby correcting intensity bleed from each of the plurality of high intensity volumes.
89. The system of claim 88, wherein the plurality of high intensity volumes comprises one or more members selected from the group consisting of kidneys, liver and bladder.
90. A system for automatically processing a 3D image of a subject to identify and/or characterize cancerous lesions within the subject, the system comprising:
a processor of a computing device; and
a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to:
(a) Receiving a 3D functional image of the subject obtained using a functional imaging modality;
(b) Automatically detecting one or more hotspots within the 3D functional image, each hotspot corresponding to a localized region of increased intensity relative to its surroundings and representing a potential cancerous lesion within the subject;
(c) Causing a graphical representation of the one or more hotspots to be rendered for display within an interactive Graphical User Interface (GUI);
(d) Receiving, via the interactive GUI, a user selection of a final set of hotspots including at least a portion of the one or more automatically detected hotspots; and
(e) Storing and/or providing the final set of hotspots for display and/or further processing.
91. The system of claim 90, wherein the instructions cause the processor to:
(f) Receiving, via the GUI, a user selection of one or more additional, user-identified hotspots for inclusion in the final set of hotspots; and
(g) Updating the final set of hotspots to include the one or more additional, user-identified hotspots.
92. The system of claim 90 or 91, wherein in step (b), the instructions cause the processor to use one or more machine learning modules.
93. A system for automatically processing a 3D image of a subject to identify and/or characterize cancerous lesions within the subject, the system comprising:
a processor of a computing device; and
a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to:
(a) Receiving a 3D functional image of the subject obtained using a functional imaging modality;
(b) Automatically detecting one or more hotspots within the 3D functional image, each hotspot corresponding to a localized region of increased intensity relative to its surroundings and representing a potential cancerous lesion within the subject;
(c) For each of at least a portion of the one or more hotspots, automatically determining an anatomical classification corresponding to a particular anatomical region and/or group of anatomical regions within the subject in which the potential cancerous lesion represented by the hotspot is determined to be located; and
(d) Storing and/or providing, for display and/or further processing, an identification of the one or more hotspots along with, for each hotspot, its corresponding anatomical classification.
94. The system of claim 93, wherein the instructions cause the processor to perform step (b) using one or more machine learning modules.
95. A method for automatically processing a 3D image of a subject to identify and/or characterize cancerous lesions within the subject, the method comprising:
(a) Receiving, by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality;
(b) Receiving, by the processor, a 3D anatomical image of the subject obtained using an anatomical imaging modality;
(c) Receiving, by the processor, a 3D segmentation map, the 3D segmentation map identifying one or more specific tissue regions or groups of tissue regions within the 3D functional image and/or within the 3D anatomical image;
(d) Automatically detecting and/or segmenting, by the processor, using one or more machine learning modules, a set of one or more hotspots within the 3D functional image, each hotspot corresponding to a localized region of increased intensity relative to its surroundings and representing a potentially cancerous lesion within the subject, thereby establishing one or both of: (i) a hotspot list identifying, for each hotspot, a location of the hotspot, and (ii) a 3D hotspot map identifying, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image,
wherein at least one of the one or more machine learning modules receives as input (i) the 3D functional image, (ii) the 3D anatomical image, and (iii) the 3D segmentation map; and
(e) Storing and/or providing the hotspot list and/or the 3D hotspot map for display and/or further processing.
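Where claim 95(d) requires a machine learning module to receive the 3D functional image, the 3D anatomical image, and the 3D segmentation map together, one common realization is to stack the three co-registered volumes as input channels; the channels-first layout and function name below are assumptions of this sketch.

    import numpy as np

    def build_network_input(functional_img, anatomical_img, seg_map):
        # All three volumes must be co-registered and equally sized.
        assert functional_img.shape == anatomical_img.shape == seg_map.shape
        # Stack as (channel, z, y, x) for a 3D segmentation network.
        return np.stack(
            [functional_img, anatomical_img, seg_map.astype(functional_img.dtype)],
            axis=0)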
96. The method of claim 95, comprising:
receiving, by the processor, an initial 3D segmentation map identifying one or more specific tissue regions within the 3D anatomical image and/or the 3D functional image;
identifying, by the processor, at least a portion of the one or more specific tissue regions as belonging to a particular tissue group of one or more tissue groups, and updating, by the processor, the 3D segmentation map to indicate that the identified regions belong to the particular tissue group; and
using, by the processor, the updated 3D segmentation map as input to at least one of the one or more machine learning modules.
97. The method of claim 96, wherein the one or more tissue groups comprise a soft tissue group such that a particular tissue region representing soft tissue is identified as belonging to the soft tissue group.
98. The method of claim 96 or 97, wherein the one or more tissue groups comprise a bone tissue group such that a specific tissue region representing bone is identified as belonging to the bone tissue group.
99. The method of any of claims 96-98, wherein the one or more tissue groups comprise a high-uptake organ group such that one or more organs associated with high radiopharmaceutical uptake are identified as belonging to the high-uptake group.
100. The method of any of claims 95-99, comprising, for each detected and/or segmented hotspot, determining, by the processor, a classification of the hotspot.
101. The method of claim 100, comprising using at least one of the one or more machine learning modules to determine the classification of each detected and/or segmented hotspot.
102. The method of any one of claims 95-101, wherein the one or more machine learning modules comprise:
(A) A whole body lesion detection module that detects and/or segments hotspots throughout the body; and
(B) A prostate lesion module that detects and/or segments hotspots within the prostate.
103. The method of claim 102, comprising generating a hotspot list and/or map using each of (A) and (B) and merging the results.
104. The method of any one of claims 95-103, wherein:
step (d) comprises:
segmenting and classifying the set of one or more hotspots to create a labeled 3D hotspot map that identifies, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image, wherein each hotspot volume is labeled as belonging to a particular hotspot category of a plurality of hotspot categories, by:
using a first machine learning module to segment a first initial set of one or more hotspots within the 3D functional image, thereby establishing a first initial 3D hotspot map identifying a first initial set of hotspot volumes, wherein the first machine learning module segments hotspots of the 3D functional image according to a single hotspot category;
using a second machine learning module to segment a second initial set of one or more hotspots within the 3D functional image, thereby creating a second initial 3D hotspot map identifying a second initial set of hotspot volumes, wherein the second machine learning module segments the 3D functional image according to the plurality of different hotspot categories such that the second initial 3D hotspot map is a multi-category 3D hotspot map in which each hotspot volume is labeled as belonging to a particular one of the plurality of different hotspot categories; and
merging, by the processor, the first initial 3D hotspot map with the second initial 3D hotspot map by, for each particular hotspot volume of at least a portion of the hotspot volumes identified by the first initial 3D hotspot map:
identifying a matching hotspot volume of the second initial 3D hotspot map that has been labeled as belonging to a particular hotspot category of the plurality of different hotspot categories; and
labeling the particular hotspot volume of the first initial 3D hotspot map as belonging to that particular hotspot category, thereby creating a merged 3D hotspot map comprising the segmented hotspot volumes of the first 3D hotspot map, labeled according to the categories of their matching hotspot volumes in the second 3D hotspot map; and
step (e) comprises storing and/or providing the merged 3D hotspot map for display and/or further processing.
105. The method of claim 104, wherein the plurality of different hotspot categories comprises one or more members selected from the group consisting of:
(i) A bone hotspot determined to represent a lesion located in a bone;
(ii) A lymphatic hotspot determined to represent a lesion located in a lymph node; and
(iii) A prostate hotspot determined to represent a lesion located in the prostate.
106. The method of any one of claims 95-105, further comprising:
(f) Receiving and/or accessing the hotspot list; and
(g) For each hotspot in the hotspot list, segmenting the hotspot using an analytical model.
107. The method of any one of claims 95-105, further comprising:
(h) Receiving and/or accessing the 3D hotspot map; and
(i) For each hotspot in the 3D hotspot map, segmenting the hotspot using an analytical model.
108. The method of claim 107, wherein the analytical model is an adaptive thresholding method, and step (i) comprises:
determining one or more reference values, each reference value being based on a measure of intensities of voxels of the 3D functional image that lie within a particular reference volume corresponding to a particular reference tissue region; and
For each particular hotspot volume of the 3D hotspot map:
determining, by the processor, a corresponding hotspot intensity based on intensities of voxels within the particular hotspot volume; and
determining, by the processor, for the particular hotspot, a hotspot-specific threshold based on at least one of (i) the corresponding hotspot intensity and (ii) the one or more reference values.
109. The method of claim 108, wherein the hotspot-specific threshold is determined using a specific threshold function selected from a plurality of threshold functions, the specific threshold function selected based on a comparison of the corresponding hotspot intensity to the at least one reference value.
110. The method of claim 108 or 109, wherein the hotspot-specific threshold is determined as a variable percentage of the corresponding hotspot intensity, wherein the variable percentage decreases as hotspot intensity increases.
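A minimal sketch of the variable-percentage rule of claims 108 through 110 follows; the breakpoints and percentages are invented for illustration, since the claims specify only that the percentage decreases as hotspot intensity increases.

    def hotspot_specific_threshold(hotspot_intensity, reference_value):
        # Select a threshold rule by comparing the hotspot intensity to a
        # reference value (claim 109); the applied percentage decreases
        # as the hotspot intensity increases (claim 110).
        if hotspot_intensity < reference_value:
            percentage = 0.90   # dim hotspot: keep most of its extent
        elif hotspot_intensity < 2.0 * reference_value:
            percentage = 0.65   # moderate uptake
        else:
            percentage = 0.40   # very bright hotspot: tighter core
        return percentage * hotspot_intensity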
111. A method for automatically processing a 3D image of a subject to identify and/or characterize cancerous lesions within the subject, the method comprising:
(a) Receiving, by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality;
(b) Automatically segmenting, by the processor, using a first machine learning module, a first initial set of one or more hotspots within the 3D functional image, thereby establishing a first initial 3D hotspot map identifying a first initial set of hotspot volumes, wherein the first machine learning module segments hotspots of the 3D functional image according to a single hotspot category;
(c) Automatically segmenting, by the processor, using a second machine learning module, a second initial set of one or more hotspots within the 3D functional image, thereby creating a second initial 3D hotspot map identifying a second initial set of hotspot volumes, wherein the second machine learning module segments the 3D functional image according to a plurality of different hotspot categories such that the second initial 3D hotspot map is a multi-category 3D hotspot map in which each hotspot volume is labeled as belonging to a particular one of the plurality of different hotspot categories;
(d) Merging, by the processor, the first initial 3D hotspot map with the second initial 3D hotspot map by, for each particular hotspot volume of at least a portion of the first initial set of hotspot volumes identified by the first initial 3D hotspot map:
identifying a matching hotspot volume of the second initial 3D hotspot map that has been labeled as belonging to a particular hotspot category of the plurality of different hotspot categories; and
labeling the particular hotspot volume of the first initial 3D hotspot map as belonging to that particular hotspot category, thereby creating a merged 3D hotspot map comprising the segmented hotspot volumes of the first 3D hotspot map, labeled according to the categories of their matching hotspot volumes in the second 3D hotspot map; and
(e) Storing and/or providing the merged 3D hotspot map for display and/or further processing.
112. The method of claim 111, wherein the plurality of different hotspot categories comprises one or more members selected from the group consisting of:
(i) A bone hotspot determined to represent a lesion located in a bone;
(ii) A lymphatic hotspot determined to represent a lesion located in a lymph node; and
(iii) A prostate hotspot determined to represent a lesion located in the prostate.
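The merge of steps (b) through (d) can be pictured with label volumes; in this sketch the single-category map assigns each hotspot volume a unique positive id, the multi-category map encodes categories as voxel values (e.g., 1 = bone, 2 = lymph, 3 = prostate), and the majority-overlap matching rule is an assumption, as the claims do not define how a matching volume is found.

    import numpy as np

    def merge_hotspot_maps(single_map, multi_map):
        # Carry category labels from the multi-category map onto the
        # hotspot volumes segmented by the single-category module.
        merged = np.zeros_like(single_map)
        for hotspot_id in np.unique(single_map):
            if hotspot_id == 0:          # 0 = background
                continue
            voxels = single_map == hotspot_id
            overlap = multi_map[voxels]
            overlap = overlap[overlap != 0]
            if overlap.size:             # matching volume found
                merged[voxels] = np.bincount(overlap).argmax()
        return merged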
113. A method for automatically processing a 3D image of a subject via an adaptive thresholding method to identify and/or characterize cancerous lesions within the subject, the method comprising:
(a) Receiving, by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality;
(b) Receiving, by the processor, a preliminary 3D hotspot map identifying one or more preliminary hotspot volumes within the 3D functional image;
(c) Determining, by the processor, one or more reference values, each reference value being based on a measure of intensities of voxels of the 3D functional image that lie within a particular reference volume corresponding to a particular reference tissue region;
(d) Establishing, by the processor, a refined 3D hotspot map by, for each particular preliminary hotspot volume of at least a portion of the one or more preliminary hotspot volumes identified by the preliminary 3D hotspot map, performing adaptive-threshold-based segmentation as follows:
determining a corresponding hotspot intensity based on intensities of voxels within the particular preliminary hotspot volume;
determining, for the particular preliminary hotspot volume, a hotspot-specific threshold based on at least one of (i) the corresponding hotspot intensity and (ii) the one or more reference values;
segmenting at least a portion of the 3D functional image using a threshold-based segmentation algorithm that performs image segmentation using the hotspot-specific threshold determined for the particular preliminary hotspot volume, thereby determining a refined, analytically segmented hotspot volume corresponding to the particular preliminary hotspot volume; and including the refined hotspot volume in the refined 3D hotspot map; and
(e) Storing and/or providing the refined 3D hotspot map for display and/or further processing.
114. The method of claim 113, wherein the hotspot-specific threshold is determined using a specific threshold function selected from a plurality of threshold functions, the specific threshold function selected based on a comparison of the corresponding hotspot intensity to the at least one reference value.
115. The method of claim 113 or 114, wherein the hotspot-specific threshold is determined as a variable percentage of the corresponding hotspot intensity, wherein the variable percentage decreases as hotspot intensity increases.
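One way to realize the threshold-based re-segmentation of claim 113(d) is shown below; restricting the search to a small dilation of the preliminary volume and keeping only components connected to it are assumptions of this sketch.

    import numpy as np
    from scipy import ndimage

    def refine_hotspot(img, prelim_mask, threshold, dilate_iter=3):
        # Threshold only in the vicinity of the preliminary hotspot volume.
        neighborhood = ndimage.binary_dilation(prelim_mask, iterations=dilate_iter)
        above = (img >= threshold) & neighborhood
        # Keep the connected components that touch the preliminary volume.
        labels, _ = ndimage.label(above)
        touching = np.unique(labels[prelim_mask & above])
        refined = np.isin(labels, touching[touching != 0])
        return refined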
116. A method for automatically processing a 3D image of a subject to identify and/or characterize cancerous lesions within the subject, the method comprising:
(a) Receiving, by a processor of a computing device, a 3D anatomical image of the subject obtained using an anatomical imaging modality, wherein the 3D anatomical image comprises a graphical representation of tissue within the subject;
(b) Automatically segmenting, by the processor, the 3D anatomical image to create a 3D segmentation map identifying a plurality of volumes of interest (VOIs) in the 3D anatomical image, including a liver volume corresponding to a liver of the subject and an aortic volume corresponding to a portion of an aorta of the subject;
(c) Receiving, by the processor, a 3D functional image of the subject obtained using a functional imaging modality;
(d) Automatically segmenting, by the processor, one or more hotspots within the 3D functional image, each segmented hotspot corresponding to a localized region of increased intensity relative to its surroundings and representing a potential cancerous lesion within the subject, thereby identifying one or more automatically segmented hotspot volumes;
(e) Causing, by the processor, a graphical representation of the one or more automatically segmented hotspot volumes to be rendered for display within an interactive Graphical User Interface (GUI);
(f) Receiving, by the processor, via the interactive GUI, a user selection of a final set of hotspots comprising at least a portion of the one or more automatically segmented hotspot volumes;
(g) Determining, by the processor, for each hotspot volume of the final set, a lesion index value based on (i) intensities of voxels of the functional image corresponding to the hotspot volume and (ii) one or more reference values determined using intensities of voxels of the functional image corresponding to the liver volume and the aortic volume; and
(h) Storing and/or providing the final set of hotspots and/or the lesion index values for display and/or further processing.
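As one plausible reading of step (g), a lesion index can grade each hotspot's intensity against the blood-pool (aorta) and liver reference values; the 0-to-3 scale and the linear interpolation below are assumptions of this sketch, not values recited in the claims.

    def lesion_index(hotspot_intensity, aorta_ref, liver_ref):
        # Index 1 at blood-pool uptake, index 2 at liver uptake,
        # capped at 3 for uptake well above the liver reference.
        if hotspot_intensity <= aorta_ref:
            return hotspot_intensity / aorta_ref          # 0..1
        if hotspot_intensity <= liver_ref:
            return 1.0 + (hotspot_intensity - aorta_ref) / (liver_ref - aorta_ref)
        return min(3.0, 2.0 + (hotspot_intensity - liver_ref) / liver_ref)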
117. The method of claim 116, wherein:
step (b) includes segmenting the anatomical image such that the 3D segmentation map identifies one or more bone volumes corresponding to one or more bones of the subject, and
step (d) includes identifying a bone volume within the functional image using the one or more bone volumes and segmenting one or more bone hotspot volumes located within the bone volume.
118. The method of claim 116 or 117, wherein:
step (b) includes segmenting the anatomical image such that the 3D segmentation map identifies one or more organ volumes corresponding to a soft tissue organ of the subject, and
step (d) includes identifying one or more soft tissue volumes within the functional image using the one or more segmented organ volumes and segmenting one or more lymph and/or prostate hotspot volumes located within the soft tissue volumes.
119. The method of claim 118, wherein step (d) further comprises adjusting the intensity of the functional image to suppress intensity from one or more high uptake tissue regions prior to segmenting the one or more lymph and/or prostate hotspot volumes.
120. The method of any one of claims 116-119, wherein step (g) comprises determining a liver reference value using intensities of voxels of the functional image corresponding to the liver volume.
121. The method of claim 120, comprising: fitting a two-component Gaussian mixture model to a histogram of intensities of functional image voxels corresponding to the liver volume; using the Gaussian mixture model fit to identify, and exclude from the liver volume, voxels having intensities associated with regions of abnormally low uptake; and determining the liver reference value using intensities of the remaining voxels.
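A minimal sketch of the two-component Gaussian mixture fit of claim 121 follows, using scikit-learn; taking the reference from the mean of the higher-uptake component is an assumption of this sketch.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def liver_reference(liver_voxel_intensities):
        x = np.asarray(liver_voxel_intensities, dtype=float).reshape(-1, 1)
        gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
        # The lower-mean component models abnormally low uptake (excluded).
        high = int(np.argmax(gmm.means_.ravel()))
        keep = gmm.predict(x) == high
        # Liver reference from the remaining (normal-uptake) voxels.
        return float(x[keep].mean())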
122. A system for automatically processing a 3D image of a subject to identify and/or characterize cancerous lesions within the subject, the system comprising:
a processor of a computing device; and
a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to:
(a) Receiving a 3D functional image of the subject obtained using a functional imaging modality;
(b) Receiving a 3D anatomical image of the subject obtained using an anatomical imaging modality;
(c) Receiving a 3D segmentation map, the 3D segmentation map identifying one or more specific tissue regions or groups of tissue regions within the 3D functional image and/or within the 3D anatomical image;
(d) Automatically detecting and/or segmenting, using one or more machine learning modules, a set of one or more hotspots within the 3D functional image, each hotspot corresponding to a localized region of increased intensity relative to its surroundings and representing a potentially cancerous lesion within the subject, thereby establishing one or both of: (i) a hotspot list identifying, for each hotspot, a location of the hotspot, and (ii) a 3D hotspot map identifying, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image,
wherein at least one of the one or more machine learning modules receives as input (i) the 3D functional image, (ii) the 3D anatomical image, and (iii) the 3D segmentation map; and
(e) Storing and/or providing the hotspot list and/or the 3D hotspot map for display and/or further processing.
123. The system of claim 122, wherein the instructions cause the processor to:
receiving an initial 3D segmentation map identifying one or more specific tissue regions within the 3D anatomical image and/or the 3D functional image;
identifying at least a portion of the one or more specific tissue regions as belonging to a particular tissue group of one or more tissue groups, and updating the 3D segmentation map to indicate that the identified regions belong to the particular tissue group; and
using the updated 3D segmentation map as input to at least one of the one or more machine learning modules.
124. The system of claim 123, wherein the one or more tissue groups comprise a soft tissue group such that a particular tissue region representing soft tissue is identified as belonging to the soft tissue group.
125. The system of claim 123 or 124, wherein the one or more tissue groups comprise a bone tissue group such that a particular tissue region representing bone is identified as belonging to the bone tissue group.
126. The system of any one of claims 123-125, wherein the one or more tissue groups comprise a high-uptake organ group such that one or more organs associated with high radiopharmaceutical uptake are identified as belonging to the high-uptake group.
127. The system of any of claims 122-126, wherein the instructions cause the processor to determine, for each detected and/or segmented hotspot, a classification of the hotspot.
128. The system of claim 127, wherein the instructions cause the processor to use at least one of the one or more machine learning modules to determine the classification of the hotspot for each detected and/or segmented hotspot.
129. The system of any one of claims 122-128, wherein the one or more machine learning modules comprise:
(A) A whole body lesion detection module that detects and/or segments hotspots throughout the body; and
(B) A prostate lesion module that detects and/or segments hotspots within the prostate.
130. The system of claim 129, wherein the instructions cause the processor to generate a hotspot list and/or map using each of (A) and (B) and merge the results.
131. The system of any one of claims 122-130, wherein:
in step (d), the instructions cause the processor to segment and classify the set of one or more hotspots to create a labeled 3D hotspot map that identifies, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image, wherein each hotspot volume is labeled as belonging to a particular hotspot category of a plurality of hotspot categories, by:
using a first machine learning module to segment a first initial set of one or more hotspots within the 3D functional image, thereby establishing a first initial 3D hotspot map identifying a first initial set of hotspot volumes, wherein the first machine learning module segments hotspots of the 3D functional image according to a single hotspot category;
using a second machine learning module to segment a second initial set of one or more hotspots within the 3D functional image, thereby creating a second initial 3D hotspot map identifying a second initial set of hotspot volumes, wherein the second machine learning module segments the 3D functional image according to the plurality of different hotspot categories such that the second initial 3D hotspot map is a multi-category 3D hotspot map in which each hotspot volume is labeled as belonging to a particular one of the plurality of different hotspot categories; and
merging the first initial 3D hotspot map with the second initial 3D hotspot map by, for each particular hotspot volume of at least a portion of the hotspot volumes identified by the first initial 3D hotspot map:
identifying a matching hotspot volume of the second initial 3D hotspot map that has been labeled as belonging to a particular hotspot category of the plurality of different hotspot categories; and
labeling the particular hotspot volume of the first initial 3D hotspot map as belonging to that particular hotspot category, thereby creating a merged 3D hotspot map comprising the segmented hotspot volumes of the first 3D hotspot map, labeled according to the categories of their matching hotspot volumes in the second 3D hotspot map; and
in step (e), the instructions cause the processor to store and/or provide the merged 3D hotspot map for display and/or further processing.
132. The system of claim 131, wherein the plurality of different hotspot categories comprises one or more members selected from the group consisting of:
(i) A bone hotspot determined to represent a lesion located in a bone;
(ii) A lymphatic hotspot determined to represent a lesion located in a lymph node; and
(iii) A prostate hotspot determined to represent a lesion located in the prostate.
133. The system of any one of claims 122-132, wherein the instructions further cause the processor to:
(f) Receiving and/or accessing the hotspot list; and
(g) For each hotspot in the hotspot list, segmenting the hotspot using an analytical model.
134. The system of any one of claims 122-133, wherein the instructions further cause the processor to:
(h) Receiving and/or accessing the 3D hotspot map; and
(i) For each hotspot in the 3D hotspot map, segmenting the hotspot using an analytical model.
135. The system of claim 134, wherein the analytical model is an adaptive thresholding method, and in step (i), the instructions cause the processor to:
determining one or more reference values, each reference value being based on a measure of intensities of voxels of the 3D functional image that lie within a particular reference volume corresponding to a particular reference tissue region; and
For each particular hotspot volume of the 3D hotspot map:
determining a corresponding hotspot intensity based on intensities of voxels within the particular hotspot volume; and
determining, for the particular hotspot, a hotspot-specific threshold based on at least one of (i) the corresponding hotspot intensity and (ii) the one or more reference values.
136. The system of claim 135, wherein the hotspot-specific threshold is determined using a specific threshold function selected from a plurality of threshold functions, the specific threshold function selected based on a comparison of the corresponding hotspot intensity to the at least one reference value.
137. The system of claim 135 or 136, wherein the hotspot-specific threshold is determined as a variable percentage of the corresponding hotspot intensity, wherein the variable percentage decreases as hotspot intensity increases.
138. A system for automatically processing a 3D image of a subject to identify and/or characterize cancerous lesions within the subject, the system comprising:
a processor of a computing device; and
a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to:
(a) Receiving a 3D functional image of the subject obtained using a functional imaging modality;
(b) Automatically segmenting, using a first machine learning module, a first initial set of one or more hotspots within the 3D functional image, thereby establishing a first initial 3D hotspot map identifying a first initial set of hotspot volumes within the 3D functional image, wherein the first machine learning module segments hotspots of the 3D functional image according to a single hotspot category;
(c) Automatically segmenting, using a second machine learning module, a second initial set of one or more hotspots within the 3D functional image, thereby creating a second initial 3D hotspot map identifying a second initial set of hotspot volumes, wherein the second machine learning module segments the 3D functional image according to a plurality of different hotspot categories such that the second initial 3D hotspot map is a multi-category 3D hotspot map in which each hotspot volume is labeled as belonging to a particular one of the plurality of different hotspot categories;
(d) Merging the first initial 3D hotspot map with the second initial 3D hotspot map by, for each particular hotspot volume of at least a portion of the first initial set of hotspot volumes identified by the first initial 3D hotspot map:
identifying a matching hotspot volume of the second initial 3D hotspot map that has been labeled as belonging to a particular hotspot category of the plurality of different hotspot categories; and
labeling the particular hotspot volume of the first initial 3D hotspot map as belonging to that particular hotspot category, thereby creating a merged 3D hotspot map comprising the segmented hotspot volumes of the first 3D hotspot map, labeled according to the categories of their matching hotspot volumes in the second 3D hotspot map; and
(e) Storing and/or providing the merged 3D hotspot map for display and/or further processing.
139. The system of claim 138, wherein the plurality of different hotspot categories comprises one or more members selected from the group consisting of:
(i) A bone hotspot determined to represent a lesion located in a bone;
(ii) A lymphatic hotspot determined to represent a lesion located in a lymph node; and
(iii) A prostate hotspot determined to represent a lesion located in the prostate.
140. A system for automatically processing a 3D image of a subject via an adaptive thresholding method to identify and/or characterize cancerous lesions within the subject, the system comprising:
a processor of a computing device; and
a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to:
(a) Receiving a 3D functional image of the subject obtained using a functional imaging modality;
(b) Receiving a preliminary 3D hotspot map identifying one or more preliminary hotspot volumes within the 3D functional image;
(c) Determining one or more reference values, each reference value being based on a measure of intensities of voxels of the 3D functional image that lie within a particular reference volume corresponding to a particular reference tissue region;
(d) Establishing a refined 3D hotspot map by, for each particular preliminary hotspot volume of at least a portion of the one or more preliminary hotspot volumes identified by the preliminary 3D hotspot map, performing adaptive-threshold-based segmentation as follows:
determining a corresponding hotspot intensity based on intensities of voxels within the particular preliminary hotspot volume;
determining, for the particular preliminary hotspot volume, a hotspot-specific threshold based on at least one of (i) the corresponding hotspot intensity and (ii) the one or more reference values;
segmenting at least a portion of the 3D functional image using a threshold-based segmentation algorithm that performs image segmentation using the hotspot-specific threshold determined for the particular preliminary hotspot volume, thereby determining a refined, analytically segmented hotspot volume corresponding to the particular preliminary hotspot volume; and
including the refined hotspot volume in the refined 3D hotspot map; and
(e) Storing and/or providing the refined 3D hotspot map for display and/or further processing.
141. The system of claim 140, wherein the hotspot-specific threshold is determined using a specific threshold function selected from a plurality of threshold functions, the specific threshold function selected based on a comparison of the corresponding hotspot intensity to the at least one reference value.
142. The system of claim 140 or 141, wherein the hotspot-specific threshold is determined as a variable percentage of the corresponding hotspot intensity, wherein the variable percentage decreases as hotspot intensity increases.
143. A system for automatically processing a 3D image of a subject to identify and/or characterize cancerous lesions within the subject, the system comprising:
a processor of a computing device; and
a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to:
(a) Receiving a 3D anatomical image of the subject obtained using an anatomical imaging modality, wherein the 3D anatomical image comprises a graphical representation of tissue within the subject;
(b) Automatically segmenting the 3D anatomical image to create a 3D segmentation map identifying a plurality of volumes of interest (VOIs) in the 3D anatomical image, including a liver volume corresponding to a liver of the subject and an aortic volume corresponding to a portion of an aorta of the subject;
(c) Receiving a 3D functional image of the subject obtained using a functional imaging modality;
(d) Automatically segmenting one or more hotspots within the 3D functional image, each segmented hotspot corresponding to a localized region of increased intensity relative to its surroundings and representing a potential cancerous lesion within the subject, thereby identifying one or more automatically segmented hotspot volumes;
(e) Causing a graphical representation of the one or more automatically segmented hotspot volumes to be rendered for display within an interactive Graphical User Interface (GUI);
(f) Receiving, via the interactive GUI, a user selection of a final set of hotspots comprising at least a portion of the one or more automatically segmented hotspot volumes;
(g) For each hotspot volume of the final set, determining a lesion index value based on (i) intensities of voxels of the functional image corresponding to the hotspot volume and (ii) one or more reference values determined using intensities of voxels of the functional image corresponding to the liver volume and the aortic volume; and
(h) Storing and/or providing the final set of hotspots and/or the lesion index values for display and/or further processing.
144. The system of claim 143, wherein:
in step (b), the instructions cause the processor to segment the anatomical image such that the 3D segmentation map identifies one or more bone volumes corresponding to one or more bones of the subject, and
in step (d), the instructions cause the processor to identify a bone volume within the functional image using the one or more bone volumes and segment one or more bone hotspot volumes located within the bone volume.
145. The system of claim 143 or 144, wherein:
in step (b), the instructions cause the processor to segment the anatomical image such that the 3D segmentation map identifies one or more organ volumes corresponding to a soft tissue organ of the subject, and
in step (d), the instructions cause the processor to identify one or more soft tissue volumes within the functional image using the one or more segmented organ volumes and segment one or more lymph and/or prostate hotspot volumes located within the soft tissue volumes.
146. The system of claim 145, wherein in step (d), the instructions cause the processor to adjust the intensity of the functional image to suppress intensity from one or more high uptake tissue regions prior to segmenting the one or more lymph and/or prostate hotspot volumes.
147. The system of any one of claims 143 to 146, wherein in step (g), the instructions cause the processor to determine a liver reference value using intensities of voxels of the functional image corresponding to the liver volume.
148. The system of claim 147, wherein the instructions cause the processor to:
fitting a two-component Gaussian mixture model to a histogram of intensities of functional image voxels corresponding to the liver volume;
using the two-component Gaussian mixture model fit to identify, and exclude from the liver volume, voxels having intensities associated with regions of abnormally low uptake; and
determining the liver reference value using intensities of the remaining voxels.
CN202180050119.8A 2020-07-06 2021-07-02 Artificial intelligence-based image analysis system and method for detecting and characterizing lesions Pending CN116134479A (en)

Applications Claiming Priority (9)

Application Number Priority Date Filing Date Title
US202063048436P 2020-07-06 2020-07-06
US63/048,436 2020-07-06
US17/008,411 2020-08-31
US17/008,411 US11721428B2 (en) 2020-07-06 2020-08-31 Systems and methods for artificial intelligence-based image analysis for detection and characterization of lesions
US202063127666P 2020-12-18 2020-12-18
US63/127,666 2020-12-18
US202163209317P 2021-06-10 2021-06-10
US63/209,317 2021-06-10
PCT/EP2021/068337 WO2022008374A1 (en) 2020-07-06 2021-07-02 Systems and methods for artificial intelligence-based image analysis for detection and characterization of lesions

Publications (1)

Publication Number Publication Date
CN116134479A true CN116134479A (en) 2023-05-16

Family

ID=79552821

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180050119.8A Pending CN116134479A (en) 2020-07-06 2021-07-02 Artificial intelligence-based image analysis system and method for detecting and characterizing lesions

Country Status (10)

Country Link
EP (1) EP4176377A1 (en)
JP (1) JP2023532761A (en)
KR (1) KR20230050319A (en)
CN (1) CN116134479A (en)
AU (1) AU2021305935A1 (en)
BR (1) BR112022026642A2 (en)
CA (1) CA3163190A1 (en)
MX (1) MX2022016373A (en)
TW (1) TW202207241A (en)
WO (1) WO2022008374A1 (en)


Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018081354A1 (en) 2016-10-27 2018-05-03 Progenics Pharmaceuticals, Inc. Network for medical image analysis, decision support system, and related graphical user interface (gui) applications
EP3646240A4 (en) 2017-06-26 2021-03-17 The Research Foundation for The State University of New York System, method, and computer-accessible medium for virtual pancreatography
JP2022516316A (en) 2019-01-07 2022-02-25 エクシーニ ディアグノスティクス アーべー Systems and methods for platform-independent whole-body image segmentation
US11948283B2 (en) 2019-04-24 2024-04-02 Progenics Pharmaceuticals, Inc. Systems and methods for interactive adjustment of intensity windowing in nuclear medicine images
JP2022530039A (en) 2019-04-24 2022-06-27 プロジェニクス ファーマシューティカルズ, インコーポレイテッド Systems and methods for automated interactive analysis of bone scintigraphy images to detect metastases
US11900597B2 (en) 2019-09-27 2024-02-13 Progenics Pharmaceuticals, Inc. Systems and methods for artificial intelligence-based image analysis for cancer assessment
US11721428B2 (en) 2020-07-06 2023-08-08 Exini Diagnostics Ab Systems and methods for artificial intelligence-based image analysis for detection and characterization of lesions
AU2022361659A1 (en) 2021-10-08 2024-03-21 Exini Diagnostics Ab Systems and methods for automated identification and classification of lesions in local lymph and distant metastases
CN114767268B (en) * 2022-03-31 2023-09-22 复旦大学附属眼耳鼻喉科医院 Anatomical structure tracking method and device suitable for endoscope navigation system
WO2023232067A1 (en) * 2022-05-31 2023-12-07 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for lesion region identification
US20230410985A1 (en) 2022-06-08 2023-12-21 Exini Diagnostics Ab Systems and methods for assessing disease burden and progression
CN116309585B (en) * 2023-05-22 2023-08-22 山东大学 Method and system for identifying breast ultrasound image target area based on multitask learning

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7876938B2 (en) * 2005-10-06 2011-01-25 Siemens Medical Solutions Usa, Inc. System and method for whole body landmark detection, segmentation and change quantification in digital images
JP5588441B2 (en) 2008-08-01 2014-09-10 ザ ジョンズ ホプキンズ ユニヴァーシティー PSMA binder and use thereof
WO2010065899A2 (en) 2008-12-05 2010-06-10 Molecular Insight Pharmaceuticals, Inc. Technetium-and rhenium-bis(heteroaryl)complexes and methods of use thereof
AU2009322164B2 (en) 2008-12-05 2014-12-18 Molecular Insight Pharmaceuticals, Inc. Technetium-and rhenium-bis(heteroaryl) complexes and methods of use thereof
WO2018081354A1 (en) 2016-10-27 2018-05-03 Progenics Pharmaceuticals, Inc. Network for medical image analysis, decision support system, and related graphical user interface (gui) applications
WO2019136349A2 (en) 2018-01-08 2019-07-11 Progenics Pharmaceuticals, Inc. Systems and methods for rapid neural network-based image segmentation and radiopharmaceutical uptake determination
JP2022516316A (en) 2019-01-07 2022-02-25 エクシーニ ディアグノスティクス アーべー Systems and methods for platform-independent whole-body image segmentation

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117274244A (en) * 2023-11-17 2023-12-22 艾迪普科技股份有限公司 Medical imaging inspection method, system and medium based on three-dimensional image recognition processing
CN117274244B (en) * 2023-11-17 2024-02-20 艾迪普科技股份有限公司 Medical imaging inspection method, system and medium based on three-dimensional image recognition processing

Also Published As

Publication number Publication date
AU2021305935A1 (en) 2023-02-02
EP4176377A1 (en) 2023-05-10
CA3163190A1 (en) 2022-01-13
WO2022008374A1 (en) 2022-01-13
KR20230050319A (en) 2023-04-14
JP2023532761A (en) 2023-07-31
TW202207241A (en) 2022-02-16
MX2022016373A (en) 2023-03-06
BR112022026642A2 (en) 2023-01-24

Similar Documents

Publication Publication Date Title
CN116134479A (en) Artificial intelligence-based image analysis system and method for detecting and characterizing lesions
US11941817B2 (en) Systems and methods for platform agnostic whole body image segmentation
US10973486B2 (en) Systems and methods for rapid neural network-based image segmentation and radiopharmaceutical uptake determination
US11937962B2 (en) Systems and methods for automated and interactive analysis of bone scan images for detection of metastases
US11721428B2 (en) Systems and methods for artificial intelligence-based image analysis for detection and characterization of lesions
US11321844B2 (en) Systems and methods for deep-learning-based segmentation of composite images
CN111602174A (en) System and method for rapidly segmenting images and determining radiopharmaceutical uptake based on neural network
US20210335480A1 (en) Systems and methods for deep-learning-based segmentation of composite images
US20230115732A1 (en) Systems and methods for automated identification and classification of lesions in local lymph and distant metastases
US20230351586A1 (en) Systems and methods for artificial intelligence-based image analysis for detection and characterization of lesions
TWI835768B (en) Systems and methods for rapid neural network-based image segmentation and radiopharmaceutical uptake determination
US20230410985A1 (en) Systems and methods for assessing disease burden and progression

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination