TW202207241A - Systems and methods for artificial intelligence-based image analysis for detection and characterization of lesions - Google Patents


Info

Publication number: TW202207241A
Authority: TW (Taiwan)
Prior art keywords: hotspot, processor, volume, intensity, subject
Application number: TW110124481A
Other languages: Chinese (zh)
Inventors: 裘漢 馬汀 布利諾夫森; 克斯汀 艾爾莎 馬里亞 強森; 漢尼卡 瑪麗亞 艾雷歐諾拉 薩爾斯泰特; 真思 菲利浦 安德魯斯 理屈特
Original Assignee: 瑞典商艾西尼診斷公司
Priority claimed from US 17/008,411 (US11721428B2)
Application filed by 瑞典商艾西尼診斷公司
Publication of TW202207241A


Classifications

    • G06V 20/69: Microscopic objects, e.g. biological cells or cellular parts
    • G06T 7/0012: Biomedical image inspection
    • G06V 20/653: Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces
    • G06F 18/251: Fusion techniques of input or preprocessed data
    • G06F 18/253: Fusion techniques of extracted features
    • G06N 20/00: Machine learning
    • G06T 7/10: Segmentation; Edge detection
    • G06V 10/803: Fusion of input or preprocessed data
    • G06V 10/806: Fusion of extracted features
    • G06V 10/82: Recognition using pattern recognition or machine learning using neural networks
    • G06V 20/698: Matching; Classification
    • G06T 2207/10104: Positron emission tomography [PET]
    • G06T 2207/10108: Single photon emission computed tomography [SPECT]
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06V 2201/03: Recognition of patterns in medical or anatomical images
    • G06V 2201/032: Recognition of patterns in medical or anatomical images of protuberances, polyps, nodules, etc.

Abstract

Presented herein are systems and methods that provide for improved detection and characterization of lesions within a subject via automated analysis of nuclear medicine images, such as positron emission tomography (PET) and single photon emission computed tomography (SPECT) images. In particular, in certain embodiments, the approaches described herein leverage artificial intelligence (AI) to detect regions of 3D nuclear medicine images corresponding to hotspots that represent potential cancerous lesions in the subject. The machine learning modules may be used not only to detect presence and locations of such regions within an image, but also to segment the region corresponding to the lesion and/or classify such hotspots based on the likelihood that they are indicative of a true, underlying cancerous lesion. This AI-based lesion detection, segmentation, and classification can provide a basis for further characterization of lesions, overall tumor burden, and estimation of disease severity and risk.

Description

Artificial intelligence-based image analysis systems and methods for detection and characterization of lesions

The present invention relates generally to systems and methods for generating, analyzing, and/or presenting medical image data. More particularly, in certain embodiments, the invention relates to systems and methods for automated analysis of medical images to identify and/or characterize cancerous lesions.

Nuclear medicine imaging involves the use of radiolabeled compounds, referred to as radiopharmaceuticals. Radiopharmaceuticals are administered to a patient and accumulate in various regions of the body in a manner that depends on, and is therefore indicative of, the biophysical and/or biochemical properties of the tissue in those regions, such as properties affected by the presence and/or state of a disease such as cancer. For example, certain radiopharmaceuticals, following administration to a patient, accumulate in regions of abnormal osteogenesis associated with malignant bone lesions, which are indicative of metastasis. Other radiopharmaceuticals may bind to specific receptors, enzymes, and proteins in the body that are altered during the evolution of disease. After administration to a patient, these molecules circulate in the blood until they find their intended target. The bound radiopharmaceutical remains at the site of disease, while the rest of the agent clears from the body.

Nuclear medicine imaging techniques acquire images by detecting radiation emitted from the radioactive portion of the radiopharmaceutical. The accumulated radiopharmaceutical serves as a beacon, so that images depicting disease location and concentration can be obtained using common nuclear medicine modalities. Examples of nuclear medicine imaging modalities include bone scan imaging (also called scintigraphy), single photon emission computed tomography (SPECT), and positron emission tomography (PET). Bone scan, SPECT, and PET imaging systems are found in most hospitals throughout the world. The choice of a particular imaging modality depends on, and/or dictates, the particular radiopharmaceutical used. For example, technetium-99m (99mTc)-labeled compounds are compatible with bone scan imaging and SPECT imaging, while PET imaging often uses fluorinated compounds labeled with 18F. The compound 99mTc methylenediphosphonate (99mTc MDP) is a popular radiopharmaceutical used in bone scan imaging to detect metastatic cancer. Radiolabeled prostate-specific membrane antigen (PSMA) targeting compounds, such as 99mTc-labeled 1404 and PyL™ (also referred to as [18F]DCFPyL), can be used with SPECT and PET imaging, respectively, and offer the potential for highly specific detection of prostate cancer.

Accordingly, nuclear medicine imaging is a valuable technique for providing physicians with information that can be used to determine the presence and extent of disease in a patient. A physician can use this information to provide a recommended course of treatment to the patient and to track the progression of disease.

For example, an oncologist may use nuclear medicine images from a study of a patient as input in her assessment of whether the patient has a particular disease (e.g., prostate cancer), what stage of the disease is evident, what the recommended course of treatment (if any) would be, whether surgical intervention is indicated, and the likely prognosis. The oncologist may use a radiologist report in this assessment. A radiologist report is a technical evaluation of the nuclear medicine images, prepared by a radiologist for a physician who requested an imaging study, and includes, for example, the type of study performed, the clinical history, a comparison between images, the technique used to perform the study, the radiologist's observations and findings, as well as the overall impressions and recommendations the radiologist may have based on the imaging study results. The signed radiologist report is sent to the physician ordering the study for the physician's review, followed by a discussion between the physician and patient about the results and recommendations for treatment.

Thus, the process involves having a radiologist perform an imaging study on the patient, analyzing the images obtained, creating a radiologist report, forwarding the report to the requesting physician, having the physician formulate an assessment and treatment recommendation, and having the physician communicate the results, recommendations, and risks to the patient. The process may also involve repeating the imaging study due to inconclusive results, or ordering further tests based on initial results. If an imaging study shows that the patient has a particular disease or condition (e.g., cancer), the physician discusses various treatment options, including surgery, as well as doing nothing or adopting a watchful-waiting or active-surveillance approach rather than having surgery, with its attendant risks.

Accordingly, the process of reviewing and analyzing multiple patient images over time plays a critical role in the diagnosis and treatment of cancer. There is a significant need for improved tools that facilitate and improve the accuracy of image review and analysis for cancer diagnosis and treatment. Improving the toolkit utilized by physicians, radiologists, and other healthcare professionals in this manner provides for significant improvements in the standard of care and in the patient experience.

Presented herein are systems and methods that provide for improved detection and characterization of lesions within a subject via automated analysis of nuclear medicine images, such as positron emission tomography (PET) and single photon emission computed tomography (SPECT) images. In particular, in certain embodiments, the approaches described herein leverage artificial intelligence (AI) techniques to detect regions of 3D nuclear medicine images corresponding to hotspots that represent potential cancerous lesions in the subject. In certain embodiments, these regions correspond to localized regions of elevated intensity relative to their surroundings (hotspots), attributable to increased uptake of the radiopharmaceutical within a lesion. The systems and methods described herein may use one or more machine learning modules, not only to detect the presence and location of such hotspots within an image, but also to segment the region corresponding to the lesion and/or to classify such hotspots based on the likelihood that they correspond to a true, underlying cancerous lesion. These AI-based lesion detection, segmentation, and classification approaches can provide a basis for further characterization of lesions, overall tumor burden, and estimation of disease severity and risk.

For example, once image hotspots representing lesions have been detected, segmented, and classified, lesion index values can be computed to provide a measure of radiopharmaceutical uptake within, and/or the size (e.g., volume) of, the underlying lesion. The computed lesion index values can, in turn, be aggregated to provide an overall estimate of tumor burden, disease severity, risk of metastasis, and the like, for the subject. In certain embodiments, lesion index values are computed by comparing intensities within a segmented hotspot volume with measures of intensity in particular reference organs, such as the liver and a portion of the aorta. Using reference organs in this manner allows lesion index values to be measured on a normalized scale that can be compared between images of different subjects. In certain embodiments, the approaches described herein include techniques for suppressing intensity bleed from image regions corresponding to organs and tissue regions in which radiopharmaceutical normally accumulates at high levels, such as the kidneys, liver, and bladder (e.g., urinary bladder). Intensities in the regions of nuclear medicine images corresponding to these organs are typically high, even for normal, healthy subjects, and are not necessarily indicative of cancer. Moreover, high accumulation of radiopharmaceutical in these organs results in high levels of emitted radiation. The increased emitted radiation can scatter, producing high intensities not only within the region of the nuclear medicine image corresponding to the organ itself, but also at nearby voxels outside it. This bleed of intensity into regions of the image outside of, and surrounding, organs associated with high uptake can hinder detection of nearby lesions and cause inaccuracies in measuring uptake within them. Correcting for these intensity bleed effects therefore improves the accuracy of lesion detection and quantification.
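
As a rough illustration of the reference-organ normalization described above, a lesion index might map a hotspot's peak uptake onto a small ordinal scale relative to blood-pool (aorta) and liver reference uptake. The function and cut-points below are hypothetical, for illustration only; the publication does not prescribe this particular scale:

```python
def lesion_index(hotspot_suv_max, aorta_suv_mean, liver_suv_mean):
    """Map a hotspot's peak SUV onto a normalized 0-3 scale using
    blood-pool (aorta) and liver reference levels.
    The cut-points below are illustrative assumptions."""
    if hotspot_suv_max < aorta_suv_mean:
        return 0  # below blood-pool uptake: lowest grade
    if hotspot_suv_max < liver_suv_mean:
        return 1  # between blood-pool and liver uptake
    if hotspot_suv_max < 2.0 * liver_suv_mean:
        return 2  # up to twice liver uptake
    return 3      # more than twice liver uptake
```

Because the comparison is against the subject's own reference organs, the resulting index is on a normalized scale that can be compared across subjects, as described above.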

In certain embodiments, the AI-based lesion detection techniques described herein augment the functional information obtained from nuclear medicine images with anatomical information obtained from anatomical images, such as x-ray computed tomography (CT) images. For example, a machine learning module utilized in the approaches described herein may receive multiple input channels, including a first channel corresponding to a portion of a functional, nuclear medicine image (e.g., a PET image; e.g., a SPECT image), and additional channels corresponding to portions of a co-aligned anatomical (e.g., CT) image and/or anatomical information derived therefrom. Adding anatomical context in this manner can improve the accuracy of the lesion detection approach. Anatomical information can also be incorporated into lesion classification approaches applied following detection. For example, in addition to computing lesion index values based on intensities of detected hotspots, hotspots may also be assigned anatomical labels based on their locations. For example, a detected hotspot may automatically be assigned a label (e.g., an alphanumeric label) based on whether its location corresponds to the prostate, pelvic lymph nodes, non-pelvic lymph nodes, bone, or a location within a soft-tissue region outside the prostate and lymph nodes.
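
The multi-channel input described above can be sketched as stacking co-registered PET and CT patches into a channels-first volume, with simple per-channel normalization. The SUV cap and Hounsfield-unit window used here are illustrative preprocessing choices, not values from this publication:

```python
def make_two_channel_input(pet_suv, ct_hu, suv_cap=30.0, hu_window=(-200.0, 1200.0)):
    """Stack co-registered PET and CT patches (nested lists indexed [z][y][x])
    into a channels-first (2, D, H, W) network input:
      channel 0: PET SUVs capped at suv_cap and scaled to [0, 1]
      channel 1: CT Hounsfield units clipped to hu_window and scaled to [0, 1]
    Assumes both patches cover the same physical volume on the same grid."""
    lo, hi = hu_window
    pet_ch = [[[min(v, suv_cap) / suv_cap for v in row] for row in plane]
              for plane in pet_suv]
    ct_ch = [[[(min(max(v, lo), hi) - lo) / (hi - lo) for v in row] for row in plane]
             for plane in ct_hu]
    return [pet_ch, ct_ch]
```

The arrangement is analogous to the color channels of an RGB image: each channel shares the same spatial grid but carries a different signal (functional uptake versus anatomy).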

In certain embodiments, detected hotspots and associated information, such as computed lesion index values and anatomical labels, are displayed with an interactive graphical user interface (GUI), to allow for review by a medical professional, such as a physician, radiologist, technician, and the like. The medical professional can thus use the GUI to review and confirm the accuracy of the detected hotspots, as well as of the corresponding index values and/or anatomical labels. In certain embodiments, the GUI may also allow the user to identify and segment (e.g., manually) additional hotspots within the medical image, thereby allowing the medical professional to identify additional potential lesions that he/she believes the automated detection procedure may have missed. Once identified, lesion index values and/or anatomical labels can also be determined for these manually identified and segmented lesions. Once the user is satisfied with the collection of detected hotspots and the information computed from them, they may confirm their approval and generate a final, signed report, which may, for example, be reviewed and used to discuss results and diagnosis with the patient, and to assess prognosis and treatment options.

In this manner, the approaches described herein provide AI-based tools for lesion detection and analysis that can improve the accuracy of, and streamline, assessment of disease (e.g., cancer) state and progression in a subject. This facilitates diagnosis, prognosis, and evaluation of response to treatment, thereby improving patient outcomes.

In one aspect, the invention is directed to a method for automatically processing 3D images of a subject to identify and/or characterize (e.g., grade) cancerous lesions within the subject, the method comprising: (a) receiving (e.g., and/or accessing), by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality [e.g., positron emission tomography (PET); e.g., single photon emission computed tomography (SPECT)] [e.g., wherein the 3D functional image comprises a plurality of voxels, each representing a particular physical volume within the subject and having an intensity value that represents detected radiation emitted from the particular physical volume (e.g., a standardized uptake value (SUV)), wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within a target tissue region]; (b) automatically detecting, by the processor, using a machine learning module [e.g., a pre-trained machine learning module (e.g., having predetermined (e.g., and fixed) parameters that have been determined by a training procedure)], one or more hotspots within the 3D functional image, each hotspot corresponding to a local region of elevated intensity relative to its surroundings and representing (e.g., indicative of) a potential cancerous lesion within the subject, thereby creating one or both of (i) and (ii) as follows: (i) a hotspot list [e.g., a list of coordinates (e.g., image coordinates; e.g., physical space coordinates); e.g., a mask identifying voxels of the 3D functional image corresponding to locations (e.g., centroids) of detected hotspots] identifying, for each hotspot, a location of the hotspot, and (ii) a 3D hotspot map identifying, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image {e.g., wherein the 3D hotspot map is a segmentation map (e.g., comprising one or more segmentation masks) that identifies, for each hotspot, the voxels within the 3D functional image corresponding to the 3D hotspot volume of that hotspot [e.g., wherein the 3D hotspot map is obtained via AI-based segmentation of the functional image (e.g., using a machine learning module that receives at least the 3D functional image as input and produces the 3D hotspot map as output, thereby segmenting hotspots)]; e.g., wherein the 3D hotspot map delineates, for each hotspot, a 3D boundary of the hotspot (e.g., an irregular boundary) (e.g., the 3D boundary enclosing the 3D hotspot volume, e.g., and distinguishing the voxels of the 3D functional image that make up the 3D hotspot volume from other voxels of the 3D functional image)}; and (c) storing and/or providing the hotspot list and/or the 3D hotspot map for display and/or further processing.
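
Voxel intensities in the 3D functional image may be expressed as standardized uptake values (SUV), as in step (a). The sketch below uses the conventional body-weight SUV definition (activity concentration normalized by injected dose per unit body weight); it is a general formula, not text from this publication:

```python
def suv(activity_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """Body-weight standardized uptake value: tissue activity concentration
    divided by injected dose per gram of body weight (tissue density is
    assumed to be ~1 g/mL, so kBq/mL and kBq/g are interchangeable)."""
    dose_kbq = injected_dose_mbq * 1000.0  # MBq -> kBq
    weight_g = body_weight_kg * 1000.0     # kg -> g
    return activity_kbq_per_ml / (dose_kbq / weight_g)

# A voxel at 5 kBq/mL in a 70 kg patient injected with 350 MBq:
# 350 MBq over 70 kg is 5 kBq/g, so the SUV is 5 / 5 = 1.0
```

Expressing voxel intensities as SUVs makes uptake comparable across patients with different injected doses and body weights, which is what allows a fixed machine learning module to operate on images from different subjects.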

In certain embodiments, the machine learning module receives at least a portion of the 3D functional image as input and automatically detects the one or more hotspots based at least in part on intensities of voxels of the received portion of the 3D functional image. In certain embodiments, the machine learning module receives, as input, a 3D segmentation map that identifies one or more volumes of interest (VOIs) within the 3D functional image, each VOI corresponding to a particular target tissue region and/or particular anatomical region within the subject [e.g., a soft-tissue region (e.g., prostate, lymph nodes, lung, breast); e.g., one or more particular bones; e.g., an overall skeletal region].

In certain embodiments, the method comprises receiving (e.g., and/or accessing), by the processor, a 3D anatomical image of the subject obtained using an anatomical imaging modality [e.g., x-ray computed tomography (CT); e.g., magnetic resonance imaging (MRI); e.g., ultrasound], wherein the 3D anatomical image comprises a graphical representation of tissue (e.g., soft tissue and/or bone) within the subject, and the machine learning module receives at least two input channels, the input channels comprising a first input channel corresponding to at least a portion of the 3D anatomical image and a second input channel corresponding to at least a portion of the 3D functional image [e.g., wherein the machine learning module receives the PET image and the CT image as separate channels (e.g., separate channels representing the same volume) (e.g., analogous to receipt, by a machine learning module, of two color channels (RGB) of a photographic color image)].

In certain embodiments, the machine learning module receives, as input, a 3D segmentation map that identifies, within the 3D functional image and/or the 3D anatomical image, one or more volumes of interest (VOIs), each VOI corresponding to a particular target tissue region and/or particular anatomical region. In certain embodiments, the method comprises automatically segmenting, by the processor, the 3D anatomical image, thereby creating the 3D segmentation map.

In certain embodiments, the machine learning module is a region-specific machine learning module that receives, as input, a specific portion of the 3D functional image corresponding to one or more specific tissue regions and/or anatomical regions of the subject.

In certain embodiments, the machine learning module produces the hotspot list as output [e.g., wherein the machine learning module implements a machine learning algorithm (e.g., an artificial neural network (ANN)) trained to determine, based on intensities of at least a portion of the voxels of the 3D functional image, one or more locations (e.g., 3D coordinates), each corresponding to a location of one of the one or more hotspots].

In certain embodiments, the machine learning module produces the 3D hotspot map as output [e.g., wherein the machine learning module implements a machine learning algorithm (e.g., an artificial neural network (ANN)) trained to segment the 3D functional image (e.g., based at least in part on intensities of voxels of the 3D functional image) to identify the 3D hotspot volumes of the 3D hotspot map (e.g., the 3D hotspot map delineating, for each hotspot, a 3D boundary of the hotspot (e.g., an irregular boundary), thereby identifying the 3D hotspot volumes (e.g., as enclosed by the 3D hotspot boundaries)); e.g., wherein the machine learning module implements a machine learning algorithm trained to determine, for each voxel of at least a portion of the 3D functional image, a hotspot likelihood value representing a likelihood that the voxel corresponds to a hotspot (e.g., and step (b) comprises performing one or more subsequent post-processing steps, such as thresholding, to identify the 3D hotspot volumes of the 3D hotspot map using the hotspot likelihood values (e.g., the 3D hotspot map delineating, for each hotspot, a 3D boundary of the hotspot (e.g., an irregular boundary), thereby identifying the 3D hotspot volumes (e.g., as enclosed by the 3D hotspot boundaries)))].
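
The voxel-wise likelihood approach above, with thresholding as a post-processing step, can be sketched as follows: threshold the 3D likelihood map, then group supra-threshold voxels into connected components (6-connectivity), each component forming one 3D hotspot volume. This pure-Python sketch assumes the likelihood map is a nested list indexed `[z][y][x]`; the threshold value is an illustrative choice:

```python
from collections import deque

def segment_hotspots(likelihood, threshold=0.5):
    """Threshold a 3D hotspot-likelihood map and group supra-threshold
    voxels into 6-connected components, one set of (z, y, x) indices
    per hotspot volume."""
    nz, ny, nx = len(likelihood), len(likelihood[0]), len(likelihood[0][0])
    above = {(z, y, x)
             for z in range(nz) for y in range(ny) for x in range(nx)
             if likelihood[z][y][x] >= threshold}
    hotspots, seen = [], set()
    for seed in above:
        if seed in seen:
            continue
        comp, queue = set(), deque([seed])  # breadth-first flood fill
        seen.add(seed)
        while queue:
            z, y, x = queue.popleft()
            comp.add((z, y, x))
            for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                nb = (z + dz, y + dy, x + dx)
                if nb in above and nb not in seen:
                    seen.add(nb)
                    queue.append(nb)
        hotspots.append(comp)
    return hotspots
```

Each returned component plays the role of a 3D hotspot volume: the set of voxels enclosed by one (possibly irregular) hotspot boundary.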

在某些實施例中,方法包括:(d)藉由該處理器針對至少一部分該等熱點之各熱點判定對應於該熱點表示該受試者內之病變之可能性之病變可能性分類[例如,指示熱點是否係真實病變之二元分類;例如,在表示熱點表示真實病變之可能性之尺度(例如,在零至壹之範圍內之浮點值)上之可能性值]。In certain embodiments, the method comprises: (d) determining, by the processor, for each hotspot of at least a portion of the hotspots, a lesion likelihood classification corresponding to a likelihood that the hotspot represents a lesion within the subject [e.g., a binary classification indicating whether the hotspot is a true lesion; e.g., a likelihood value on a scale (e.g., a floating point value in the range zero to one) representing a likelihood that the hotspot represents a true lesion].

在某些實施例中,步驟(d)包括使用第二機器學習模組以針對該部分之各熱點判定該病變可能性分類[例如,其中機器學習模組實施經訓練以偵測熱點(例如,產生熱點清單及/或3D熱點圖作為輸出)及針對各熱點判定該熱點之病變可能性分類之機器學習演算法]。在某些實施例中,步驟(d)包括使用第二機器學習模組(例如,熱點分類模組)以針對各熱點判定病變可能性分類[例如,至少部分基於選自由3D功能影像之強度、熱點清單、3D熱點圖、3D解剖影像之強度及3D分割圖組成之群組之一或多個成員;例如,其中第二機器學習模組接收對應於選自由3D功能影像之強度、熱點清單、3D熱點圖、3D解剖影像之強度及3D分割圖組成之群組之一或多個成員之一或多個輸入通道]。In certain embodiments, step (d) comprises using a second machine learning module to determine the lesion likelihood classification for each hotspot of the portion [e.g., wherein a machine learning module implements a machine learning algorithm trained to detect hotspots (e.g., generating the hotspot list and/or 3D heatmap as output) and to determine, for each hotspot, the lesion likelihood classification of that hotspot]. In certain embodiments, step (d) comprises using a second machine learning module (e.g., a hotspot classification module) to determine the lesion likelihood classification for each hotspot [e.g., based at least in part on one or more members selected from the group consisting of intensities of the 3D functional image, the hotspot list, the 3D heatmap, intensities of the 3D anatomical image, and the 3D segmentation map; e.g., wherein the second machine learning module receives one or more input channels corresponding to one or more members of that group].

在某些實施例中,方法包括藉由該處理器針對各熱點判定一或多個熱點特徵之集合及使用該一或多個熱點特徵之該集合作為送至該第二機器學習模組之輸入。In certain embodiments, the method comprises determining, by the processor, for each hotspot, a set of one or more hotspot features, and using the set of one or more hotspot features as input to the second machine learning module.
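A per-hotspot feature set of the kind fed to such a classifier might be computed as below. The specific features (peak SUV, mean SUV, voxel count) are plausible examples chosen for illustration; the patent text does not fix a particular feature list here.

```python
import numpy as np

def hotspot_features(suv: np.ndarray, labels: np.ndarray, hotspot_id: int) -> dict:
    """Example per-hotspot feature set for a downstream classifier.

    `suv` is the PET volume (SUV per voxel) and `labels` is a 3D
    heatmap in which voxels of hotspot `hotspot_id` carry that
    integer label.
    """
    voxels = suv[labels == hotspot_id]
    return {
        "suv_max": float(voxels.max()),    # peak uptake within the hotspot
        "suv_mean": float(voxels.mean()),  # average uptake
        "volume_vox": int(voxels.size),    # hotspot size in voxels
    }
```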

在某些實施例中,方法包括:(e)藉由該處理器至少部分基於該等熱點之該等病變可能性分類選擇對應於具有對應於癌性病變之高可能性的熱點之該一或多個熱點之子集(例如,用於包含於報告中;例如,用於運算受試者之一或多個風險指數值)。In certain embodiments, the method comprises: (e) selecting, by the processor, based at least in part on the lesion likelihood classifications of the hotspots, a subset of the one or more hotspots corresponding to hotspots having a high likelihood of corresponding to cancerous lesions (e.g., for inclusion in a report; e.g., for computing one or more risk index values for the subject).

在某些實施例中,方法包括:(f) [例如,在步驟(b)之前]藉由該處理器調整該3D功能影像之體素之強度以校正來自該3D功能影像之一或多個高強度體積之強度滲出(例如,串擾),該一或多個高強度體積之各者對應於與正常情況下(例如,不一定指示癌症)之高放射性藥物攝取相關聯的在該受試者內之高攝取組織區域。在某些實施例中,步驟(f)包括以循序方式一次一個地校正來自複數個高強度體積之強度滲出[例如,首先調整3D功能影像之體素之強度以校正來自第一高強度體積之強度滲出以產生第一經校正影像,接著調整該第一經校正影像之體素之強度以校正來自第二高強度體積之強度滲出等等]。在某些實施例中,該一或多個高強度體積對應於選自由腎臟、肝臟及膀胱(例如,尿膀胱)組成之群組之一或多個高攝取組織區域。In certain embodiments, the method comprises: (f) [e.g., prior to step (b)] adjusting, by the processor, intensities of voxels of the 3D functional image to correct for intensity bleed (e.g., crosstalk) from one or more high-intensity volumes of the 3D functional image, each of the one or more high-intensity volumes corresponding to a high-uptake tissue region within the subject associated with high radiopharmaceutical uptake under normal circumstances (e.g., not necessarily indicative of cancer). In certain embodiments, step (f) comprises correcting for intensity bleed from a plurality of high-intensity volumes one at a time, in a sequential fashion [e.g., first adjusting intensities of voxels of the 3D functional image to correct for intensity bleed from a first high-intensity volume to produce a first corrected image, then adjusting intensities of voxels of the first corrected image to correct for intensity bleed from a second high-intensity volume, and so on]. In certain embodiments, the one or more high-intensity volumes correspond to one or more high-uptake tissue regions selected from the group consisting of a kidney, a liver, and a bladder (e.g., a urinary bladder).

在某些實施例中,方法包括:(g)藉由該處理器針對至少一部分該一或多個熱點之各者判定指示該熱點所對應之潛在性病變內之放射性藥物攝取之位準及/或該潛在性病變之大小(例如,體積)的對應病變指數。在某些實施例中,步驟(g)包括比較與該熱點相關聯(例如,在該熱點之位置處及/或附近;例如,在該熱點之體積內)之一或多個體素之(若干)強度(例如,對應於標準攝取值(SUV))與一或多個參考值,各參考值與受試者內之特定參考組織區域(例如,肝臟;例如,主動脈部分)相關聯且基於對應於該參考組織區域之參考體積之強度(例如,SUV值)判定[例如,作為平均值(例如,穩健平均值,諸如四分位數間距內之值之平均數)]。在某些實施例中,該一或多個參考值包括選自由與該受試者之主動脈部分相關聯的主動脈參考值及與該受試者之肝臟相關聯的肝臟參考值組成之群組之一或多個成員。In certain embodiments, the method comprises: (g) determining, by the processor, for each of at least a portion of the one or more hotspots, a corresponding lesion index indicative of a level of radiopharmaceutical uptake within, and/or a size (e.g., volume) of, the underlying lesion to which the hotspot corresponds. In certain embodiments, step (g) comprises comparing the intensity(ies) (e.g., corresponding to standardized uptake values (SUVs)) of one or more voxels associated with the hotspot (e.g., at and/or near a location of the hotspot; e.g., within a volume of the hotspot) with one or more reference values, each reference value associated with a particular reference tissue region (e.g., a liver; e.g., an aorta portion) within the subject and determined based on intensities (e.g., SUV values) of a reference volume corresponding to that reference tissue region [e.g., as an average (e.g., a robust average, such as a mean of values within an interquartile range)]. In certain embodiments, the one or more reference values comprise one or more members selected from the group consisting of an aorta reference value associated with an aorta portion of the subject and a liver reference value associated with a liver of the subject.
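The robust average and the comparison against a reference value can be sketched as follows. Treating the lesion index as a simple ratio of hotspot SUV to reference SUV is an assumption for illustration; the document describes a comparison but does not commit to this exact form here.

```python
import numpy as np

def robust_mean(values: np.ndarray) -> float:
    """Mean of the values falling within the interquartile range,
    one robust average of reference-volume intensities."""
    q1, q3 = np.percentile(values, [25, 75])
    inside = values[(values >= q1) & (values <= q3)]
    return float(inside.mean())

def lesion_index(hotspot_suv: float, reference_suv: float) -> float:
    """Illustrative lesion index: hotspot uptake relative to a
    reference-organ uptake (e.g., liver or aorta blood pool)."""
    return hotspot_suv / reference_suv
```

The interquartile filtering makes the reference value insensitive to a handful of outlier voxels at either extreme of the reference volume.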

在某些實施例中,針對與特定參考組織區域相關聯的至少一個特定參考值,判定該特定參考值包括將對應於該特定參考組織區域之特定參考體積內之體素之強度擬合[例如,擬合體素之強度之分佈(例如,擬合體素強度之直方圖)]至多組分混合模型(例如,雙組分高斯模型) [例如,及識別體素強度之分佈中之一或多個次要峰值,該等次要峰值對應於與異常攝取相關聯的體素,且自參考值判定排除彼等體素(例如,從而考量到諸如肝臟之部分之參考組織區域之特定部分中之異常低放射性藥物攝取之效應)]。In certain embodiments, for at least one particular reference value associated with a particular reference tissue region, determining the particular reference value comprises fitting intensities of voxels within a particular reference volume corresponding to the particular reference tissue region [e.g., fitting a distribution of the voxel intensities (e.g., fitting a histogram of the voxel intensities)] to a multi-component mixture model (e.g., a two-component Gaussian model) [e.g., and identifying, in the distribution of voxel intensities, one or more minor peaks corresponding to voxels associated with anomalous uptake, and excluding those voxels from the reference value determination (e.g., thereby accounting for effects of abnormally low radiopharmaceutical uptake in particular portions of a reference tissue region, such as portions of the liver)].

在某些實施例中,方法包括使用該等經判定病變指數值運算(例如,藉由該處理器自動地)指示受試者之癌症狀態及/或風險之受試者之整體風險指數。In certain embodiments, the method comprises computing (e.g., automatically, by the processor), using the determined lesion index values, an overall risk index for the subject indicative of a cancer status and/or risk of the subject.

在某些實施例中,方法包括藉由該處理器針對各熱點(例如,自動地)判定對應於其中熱點表示之潛在癌性病變經判定[例如,藉由處理器(例如,基於經接收及/或判定之3D分割圖)]為定位[例如,於前列腺、骨盆淋巴結、非骨盆淋巴結、骨(例如,骨轉移性區域)及不位於前列腺或淋巴結中之軟組織區域內]之受試者內之特定解剖區域及/或解剖區域群組的解剖分類。In certain embodiments, the method comprises determining (e.g., automatically), by the processor, for each hotspot, an anatomical classification corresponding to a particular anatomical region and/or group of anatomical regions within the subject in which the potential cancerous lesion represented by the hotspot is determined [e.g., by the processor (e.g., based on a received and/or determined 3D segmentation map)] to be located [e.g., within a prostate, pelvic lymph nodes, non-pelvic lymph nodes, bone (e.g., a region of bone metastasis), or a soft-tissue region not located in the prostate or lymph nodes].
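One way such a segmentation-map-based classification could work is a majority vote over the region labels of the hotspot's voxels. The integer label codes below are hypothetical, introduced only for this sketch; any real segmentation map would define its own coding scheme.

```python
import numpy as np

# Hypothetical label codes for the 3D segmentation map (assumption
# for this example; the actual coding scheme is implementation-specific).
REGION_NAMES = {1: "prostate", 2: "pelvic lymph node",
                3: "non-pelvic lymph node", 4: "bone"}

def classify_hotspot(seg_map: np.ndarray, hotspot_mask: np.ndarray) -> str:
    """Assign a hotspot the anatomical region in which most of its
    voxels fall, according to the 3D segmentation map."""
    region_codes = seg_map[hotspot_mask]
    codes, counts = np.unique(region_codes, return_counts=True)
    majority = int(codes[np.argmax(counts)])
    return REGION_NAMES.get(majority, "soft tissue (other)")
```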

在某些實施例中,方法包括:(h)藉由該處理器引起至少一部分該一或多個熱點之圖形表示在圖形使用者介面(GUI)內顯示以供使用者查看。在某些實施例中,方法包括:(i)藉由該處理器經由該GUI接收經由使用者查看確認為有可能表示該受試者內之潛在性癌性病變的該一或多個熱點之子集的使用者選擇。In certain embodiments, the method comprises: (h) causing, by the processor, a graphical representation of at least a portion of the one or more hotspots to be displayed within a graphical user interface (GUI) for review by a user. In certain embodiments, the method comprises: (i) receiving, by the processor, via the GUI, a user selection of a subset of the one or more hotspots confirmed, via user review, as likely to represent underlying cancerous lesions within the subject.

在某些實施例中,該3D功能影像包括在向該受試者投予試劑(例如,放射性藥物;例如,成像劑)之後獲得的PET或SPECT影像。在某些實施例中,該試劑包括PSMA結合劑。在某些實施例中,試劑包括18F。在某些實施例中,試劑包括[18F]DCFPyL。在某些實施例中,試劑包括PSMA-11 (例如,68Ga-PSMA-11)。在某些實施例中,試劑包括選自由99mTc、68Ga、177Lu、225Ac、111In、123I、124I及131I組成之群組之一或多個成員。In certain embodiments, the 3D functional image comprises a PET or SPECT image obtained following administration of an agent (e.g., a radiopharmaceutical; e.g., an imaging agent) to the subject. In certain embodiments, the agent comprises a PSMA binding agent. In certain embodiments, the agent comprises 18F. In certain embodiments, the agent comprises [18F]DCFPyL. In certain embodiments, the agent comprises PSMA-11 (e.g., 68Ga-PSMA-11). In certain embodiments, the agent comprises one or more members selected from the group consisting of 99mTc, 68Ga, 177Lu, 225Ac, 111In, 123I, 124I, and 131I.

在某些實施例中,機器學習模組實施神經網路[例如,人工神經網路(ANN);例如,廻旋神經網路(CNN)]。In certain embodiments, the machine learning module implements a neural network [eg, artificial neural network (ANN); eg, convolutional neural network (CNN)].

在某些實施例中,該處理器係基於雲端之系統之處理器。In some embodiments, the processor is a processor of a cloud-based system.

在另一態樣中,本發明係關於一種用於自動化處理受試者之3D影像以識別及/或表徵化(例如,分級)該受試者內之癌性病變之方法,該方法包括:(a)藉由運算裝置之處理器接收(例如,及/或存取)使用功能成像模態[例如,正電子發射斷層掃描攝影術(PET);例如,單光子發射電腦斷層掃描攝影術(SPECT)]獲得的該受試者之3D功能影像[例如,其中該3D功能影像包括複數個體素,各體素表示該受試者內之特定實體體積且具有表示自該特定實體體積發射之經偵測輻射之強度值,其中該3D功能影像之該複數個體素之至少部分表示目標組織區域內之實體體積];(b)藉由該處理器接收(例如,及/或存取)使用解剖成像模態[例如,x射線電腦斷層掃描攝影術(CT);例如,磁共振成像(MRI);例如,超聲波]獲得的該受試者之3D解剖影像,其中該3D解剖影像包括該受試者內之組織(例如,軟組織及/或骨)之圖形表示;(c)藉由該處理器使用機器學習模組自動偵測該3D功能影像內之一或多個熱點,各熱點對應於相對於其周圍提高強度之局部區域且表示(例如,指示)該受試者內之潛在癌性病變,從而建立如下(i)及(ii)之一或兩者:(i)熱點清單,其針對各熱點識別該熱點之位置,及(ii) 3D熱點圖,其針對各熱點識別該3D功能影像內之對應3D熱點體積{例如,其中該3D熱點圖係分割圖(例如,包括一或多個分割遮罩),其針對各熱點識別對應於各熱點之該3D熱點體積之該3D功能影像內之體素[例如,其中該3D熱點圖係經由對該功能影像的基於人工智慧之分割而獲得(例如,使用接收至少該3D功能影像作為輸入且產生該3D熱點圖作為輸出從而分割熱點之機器學習模組)];例如,其中該3D熱點圖針對各熱點描繪該熱點之3D邊界(例如,不規則邊界) (例如,該3D邊界圍封該3D熱點體積,例如,及將該3D功能影像之構成該3D熱點體積之體素與該3D功能影像之其他體素進行區分)},其中該機器學習模組接收至少兩個輸入通道,該等輸入通道包括對應於至少一部分該3D解剖影像之第一輸入通道及對應於至少一部分該3D功能影像之第二輸入通道[例如,其中該機器學習模組接收PET影像及CT影像作為分離通道(例如,表示相同體積之分離通道) (例如,類似於藉由機器學習模組接收攝影彩色影像之兩個色彩通道(RGB))]及/或自其等導出之解剖資訊[例如,3D分割圖,其識別該3D功能影像內之一或多個所關注體積(VOI),各VOI對應於特定目標組織區域及/或特定解剖區域];及(d)儲存及/或提供該熱點清單及/或該3D熱點圖以供顯示及/或進一步處理。In another aspect, the invention is directed to a method for automated processing of 3D images of a subject to identify and/or characterize (e.g., grade) cancerous lesions within the subject, the method comprising: (a) receiving (e.g., and/or accessing), by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality [e.g., positron emission tomography (PET); e.g., single-photon emission computed tomography (SPECT)] [e.g., wherein the 3D functional image comprises a plurality of voxels, each representing a particular physical volume within the subject and having an intensity value representing detected radiation emitted from the particular physical volume, wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within a target tissue region]; (b) receiving (e.g., and/or accessing), by the processor, a 3D anatomical image of the subject obtained using an anatomical imaging modality [e.g., x-ray computed tomography (CT); e.g., magnetic resonance imaging (MRI); e.g., ultrasound], wherein the 3D anatomical image comprises a graphical representation of tissue (e.g., soft tissue and/or bone) within the subject; (c) automatically detecting, by the processor, using a machine learning module, one or more hotspots within the 3D functional image, each hotspot corresponding to a local region of elevated intensity relative to its surroundings and representing (e.g., indicative of) a potential cancerous lesion within the subject, thereby creating one or both of (i) and (ii) as follows: (i) a hotspot list identifying, for each hotspot, a location of the hotspot, and (ii) a 3D heatmap identifying, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image {e.g., wherein the 3D heatmap is a segmentation map (e.g., comprising one or more segmentation masks) that identifies, for each hotspot, the voxels of the 3D functional image corresponding to the 3D hotspot volume of that hotspot [e.g., wherein the 3D heatmap is obtained via artificial-intelligence-based segmentation of the functional image (e.g., using a machine learning module that receives at least the 3D functional image as input and produces the 3D heatmap as output, thereby segmenting hotspots)]; e.g., wherein the 3D heatmap delineates, for each hotspot, a 3D boundary of the hotspot (e.g., an irregular boundary) (e.g., the 3D boundary enclosing the 3D hotspot volume, e.g., and distinguishing the voxels of the 3D functional image that make up the 3D hotspot volume from other voxels of the 3D functional image)}, wherein the machine learning module receives at least two input channels, the input channels comprising a first input channel corresponding to at least a portion of the 3D anatomical image and a second input channel corresponding to at least a portion of the 3D functional image [e.g., wherein the machine learning module receives a PET image and a CT image as separate channels (e.g., separate channels representing the same volume) (e.g., analogous to a machine learning module receiving the two color channels (RGB) of a photographic color image)] and/or anatomical information derived therefrom [e.g., a 3D segmentation map identifying one or more volumes of interest (VOIs) within the 3D functional image, each VOI corresponding to a particular target tissue region and/or particular anatomical region]; and (d) storing and/or providing the hotspot list and/or the 3D heatmap for display and/or further processing.
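The two-channel input arrangement described above can be sketched as a simple tensor-stacking step. The min-max normalization used here is an assumption made for a self-contained example; a real pipeline would use modality-appropriate scaling (e.g., SUV scaling for PET, Hounsfield windowing for CT), and the downstream network architecture is not specified by this summary.

```python
import numpy as np

def stack_pet_ct(pet: np.ndarray, ct: np.ndarray) -> np.ndarray:
    """Stack co-registered PET and CT volumes as two input channels,
    analogous to the color channels of a photographic image.
    Returns an array of shape (2, D, H, W)."""
    assert pet.shape == ct.shape, "channels must represent the same volume"

    def norm(v: np.ndarray) -> np.ndarray:
        v = v.astype(float)
        return (v - v.min()) / (v.max() - v.min() + 1e-9)

    return np.stack([norm(pet), norm(ct)], axis=0)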

在另一態樣中,本發明係關於一種用於自動化處理受試者之3D影像以識別及/或表徵化(例如,分級)該受試者內之癌性病變之方法,該方法包括:(a)藉由運算裝置之處理器接收(例如,及/或存取)使用功能成像模態[例如,正電子發射斷層掃描攝影術(PET);例如,單光子發射電腦斷層掃描攝影術(SPECT)]獲得的該受試者之3D功能影像[例如,其中該3D功能影像包括複數個體素,各體素表示該受試者內之特定實體體積且具有表示自該特定實體體積發射之經偵測輻射之強度值,其中該3D功能影像之該複數個體素之至少部分表示目標組織區域內之實體體積];(b)藉由該處理器使用第一機器學習模組自動偵測該3D功能影像內之一或多個熱點,各熱點對應於相對於其周圍提高強度之局部區域且表示(例如,指示)該受試者內之潛在癌性病變,從而建立針對各熱點識別該熱點之位置之熱點清單[例如,其中該機器學習模組實施經訓練以基於至少一部分該3D功能影像之體素之強度判定一或多個位置(例如,3D座標)之機器學習演算法(例如,人工神經網路(ANN)),各位置對應於該一或多個熱點之一者之位置];(c)藉由該處理器使用第二機器學習模組及該熱點清單針對該一或多個熱點之各者自動判定該3D功能影像內之對應3D熱點體積,從而建立3D熱點圖[例如,其中該第二機器學習模組實施經訓練以至少部分基於該熱點清單以及該3D功能影像之體素之強度分割該3D功能影像以識別該3D熱點圖之該等3D熱點體積之機器學習演算法(例如,人工神經網路(ANN));例如,其中該機器學習模組實施經訓練以針對至少一部分該3D功能影像之各體素判定表示該體素對應於熱點之可能性之熱點可能性值的機器學習演算法(例如,且步驟(b)包括執行諸如定限之一或多個後續後處理步驟,以使用該等熱點可能性值識別該3D熱點圖之該等3D熱點體積] [例如,其中該3D熱點圖係使用(例如,基於及/或對應於來自)第二機器學習模組(之輸出)產生之分割圖(例如,包括一或多個分割遮罩),該3D熱點圖針對各熱點識別對應於各熱點之該3D熱點體積之該3D功能影像內之體素;例如,其中該3D熱點圖針對各熱點描繪該熱點之3D邊界(例如,不規則邊界) (例如,該3D邊界圍封該3D熱點體積,例如,及將該3D功能影像之構成該3D熱點體積之體素與該3D功能影像之其他體素進行區分)];及(d)儲存及/或提供該熱點清單及/或該3D熱點圖以供顯示及/或進一步處理。In another aspect, the invention is directed to a method for automated processing of 3D images of a subject to identify and/or characterize (e.g., grade) cancerous lesions within the subject, the method comprising: (a) receiving (e.g., and/or accessing), by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality [e.g., positron emission tomography (PET); e.g., single-photon emission computed tomography (SPECT)] [e.g., wherein the 3D functional image comprises a plurality of voxels, each representing a particular physical volume within the subject and having an intensity value representing detected radiation emitted from the particular physical volume, wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within a target tissue region]; (b) automatically detecting, by the processor, using a first machine learning module, one or more hotspots within the 3D functional image, each hotspot corresponding to a local region of elevated intensity relative to its surroundings and representing (e.g., indicative of) a potential cancerous lesion within the subject, thereby creating a hotspot list identifying, for each hotspot, a location of the hotspot [e.g., wherein the machine learning module implements a machine learning algorithm (e.g., an artificial neural network (ANN)) trained to determine one or more locations (e.g., 3D coordinates) based on intensities of at least a portion of the voxels of the 3D functional image, each location corresponding to the location of one of the one or more hotspots]; (c) automatically determining, by the processor, using a second machine learning module and the hotspot list, for each of the one or more hotspots, a corresponding 3D hotspot volume within the 3D functional image, thereby creating a 3D heatmap [e.g., wherein the second machine learning module implements a machine learning algorithm (e.g., an artificial neural network (ANN)) trained to segment the 3D functional image, based at least in part on the hotspot list and intensities of voxels of the 3D functional image, to identify the 3D hotspot volumes of the 3D heatmap; e.g., wherein the machine learning module implements a machine learning algorithm trained to determine, for each voxel of at least a portion of the 3D functional image, a hotspot likelihood value representing a likelihood that the voxel corresponds to a hotspot (e.g., and step (b) comprises performing one or more subsequent post-processing steps, such as thresholding, to identify the 3D hotspot volumes of the 3D heatmap using the hotspot likelihood values)] [e.g., wherein the 3D heatmap is a segmentation map (e.g., comprising one or more segmentation masks) generated using (e.g., based on and/or corresponding to output from) the second machine learning module, the 3D heatmap identifying, for each hotspot, the voxels of the 3D functional image corresponding to the 3D hotspot volume of that hotspot; e.g., wherein the 3D heatmap delineates, for each hotspot, a 3D boundary of the hotspot (e.g., an irregular boundary) (e.g., the 3D boundary enclosing the 3D hotspot volume, e.g., and distinguishing the voxels of the 3D functional image that make up the 3D hotspot volume from other voxels of the 3D functional image)]; and (d) storing and/or providing the hotspot list and/or the 3D heatmap for display and/or further processing.

在某些實施例中,方法包括:(e)藉由該處理器針對至少一部分該等熱點之各熱點判定對應於該熱點表示該受試者內之病變之可能性之病變可能性分類。在某些實施例中,步驟(e)包括使用第三機器學習模組(例如,熱點分類模組)以針對各熱點判定該病變可能性分類[例如,至少部分基於選自由3D功能影像之強度、熱點清單、3D熱點圖、3D解剖影像之強度及3D分割圖組成之群組之一或多個成員;例如,其中該第三機器學習模組接收對應於選自由3D功能影像之強度、熱點清單、3D熱點圖、3D解剖影像之強度及3D分割圖組成之群組之一或多個成員之一或多個輸入通道]。In certain embodiments, the method comprises: (e) determining, by the processor, for each hotspot of at least a portion of the hotspots, a lesion likelihood classification corresponding to a likelihood that the hotspot represents a lesion within the subject. In certain embodiments, step (e) comprises using a third machine learning module (e.g., a hotspot classification module) to determine the lesion likelihood classification for each hotspot [e.g., based at least in part on one or more members selected from the group consisting of intensities of the 3D functional image, the hotspot list, the 3D heatmap, intensities of the 3D anatomical image, and the 3D segmentation map; e.g., wherein the third machine learning module receives one or more input channels corresponding to one or more members of that group].

在某些實施例中,方法包括:(f)藉由該處理器至少部分基於該等熱點之該等病變可能性分類選擇對應於具有對應於癌性病變之高可能性的熱點之該一或多個熱點之子集(例如,用於包含於報告中;例如,用於運算受試者之一或多個風險指數值)。In certain embodiments, the method comprises: (f) selecting, by the processor, based at least in part on the lesion likelihood classifications of the hotspots, a subset of the one or more hotspots corresponding to hotspots having a high likelihood of corresponding to cancerous lesions (e.g., for inclusion in a report; e.g., for computing one or more risk index values for the subject).

在另一態樣中,本發明係關於一種量測對應於參考組織區域之參考體積(例如,與受試者之肝臟相關聯的肝臟體積)內之強度值以便避免來自與低(例如,異常低)放射性藥物攝取(例如,歸因於無示蹤劑攝取之腫瘤)相關聯的組織區域之影響的方法,該方法包括:(a)藉由運算裝置之處理器接收(例如,及/或存取)受試者之3D功能影像,該3D功能影像使用功能成像模態[例如,正電子發射斷層掃描攝影術(PET);例如,單光子發射電腦斷層掃描攝影術(SPECT)]獲得[例如,其中該3D功能影像包括複數個體素,各體素表示該受試者內之特定實體體積且具有表示自該特定實體體積發射之經偵測輻射之強度值,其中該3D功能影像之該複數個體素之至少部分表示目標組織區域內之實體體積];(b)藉由該處理器識別該3D功能影像內之該參考體積;(c)藉由該處理器將多組分混合模型(例如,雙組分高斯混合模型)擬合至該參考體積內之體素之強度[例如,將該多組分混合模型擬合至該參考體積內之體素之強度之分佈(例如,直方圖)];(d)藉由該處理器識別該多組分模型之主要模式;(e)藉由該處理器判定對應於該主要模式之(例如,平均數、最大值、眾數(mode)、中值等)強度之量度,從而判定對應於體素之強度之量度之參考強度值,該等體素(i)在該參考組織體積內且(ii)與該主要模式相關聯(例如,及自參考值計算排除具有與次要模式相關聯的強度之體素) (例如,從而避免來自與低放射性藥物攝取相關聯的組織區域之影響);(f)藉由該處理器在該功能影像內偵測對應於潛在癌性病變之一或多個熱點;及(g)藉由該處理器針對至少一部分該等經偵測熱點之各熱點使用至少該參考強度值判定病變指數值[例如,該病變指數值基於(i)對應於該經偵測熱點之體素之強度之量度及(ii)該參考強度值]。In another aspect, the invention is directed to a method of measuring intensity values within a reference volume corresponding to a reference tissue region (e.g., a liver volume associated with a liver of the subject) so as to avoid influence from tissue regions associated with low (e.g., abnormally low) radiopharmaceutical uptake (e.g., due to tumors without tracer uptake), the method comprising: (a) receiving (e.g., and/or accessing), by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality [e.g., positron emission tomography (PET); e.g., single-photon emission computed tomography (SPECT)] [e.g., wherein the 3D functional image comprises a plurality of voxels, each representing a particular physical volume within the subject and having an intensity value representing detected radiation emitted from the particular physical volume, wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within a target tissue region]; (b) identifying, by the processor, the reference volume within the 3D functional image; (c) fitting, by the processor, a multi-component mixture model (e.g., a two-component Gaussian mixture model) to intensities of voxels within the reference volume [e.g., fitting the multi-component mixture model to a distribution (e.g., a histogram) of the intensities of voxels within the reference volume]; (d) identifying, by the processor, a major mode of the multi-component model; (e) determining, by the processor, a measure of intensity (e.g., a mean, a maximum, a mode, a median, etc.) corresponding to the major mode, thereby determining a reference intensity value corresponding to a measure of intensity of voxels that are (i) within the reference tissue volume and (ii) associated with the major mode (e.g., and excluding, from the reference value calculation, voxels having intensities associated with minor modes) (e.g., thereby avoiding influence from tissue regions associated with low radiopharmaceutical uptake); (f) detecting, by the processor, within the functional image, one or more hotspots corresponding to potential cancerous lesions; and (g) determining, by the processor, for each hotspot of at least a portion of the detected hotspots, a lesion index value using at least the reference intensity value [e.g., the lesion index value based on (i) a measure of intensity of voxels corresponding to the detected hotspot and (ii) the reference intensity value].
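Steps (c)-(e) can be illustrated with a minimal numpy-only EM sketch: fit a two-component Gaussian mixture to the reference-volume SUVs and take the mean of the dominant (largest-weight) component as the reference value, so that a low-uptake sub-population (e.g., a tumor without tracer uptake) does not drag the reference down. The initialization and iteration count are illustrative choices, not the patented procedure.

```python
import numpy as np

def fit_two_gaussians(x: np.ndarray, n_iter: int = 100):
    """Plain EM for a 1-D two-component Gaussian mixture."""
    mu = np.array([x.min(), x.max()], dtype=float)  # spread initial means
    sigma = np.full(2, x.std() + 1e-6)
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        pdf = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
              / (sigma * np.sqrt(2 * np.pi))
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, standard deviations
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return w, mu, sigma

def reference_intensity(suv_values) -> float:
    """Mean SUV of the dominant mixture mode (steps (d)-(e))."""
    w, mu, _ = fit_two_gaussians(np.asarray(suv_values, dtype=float))
    return float(mu[np.argmax(w)])  # largest weight = major mode
```

With a liver volume that is 90% normal uptake and 10% low-uptake tumor, the major-mode mean stays near the normal-tissue level, whereas a naive mean over all voxels is pulled downward.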

在另一態樣中,本發明係關於一種校正來自歸因於與正常情況下(例如,且不一定指示癌症)之高放射性藥物攝取相關聯的在受試者內之高攝取組織區域之強度滲出(例如,串擾)的方法,該方法包括:(a)藉由運算裝置之處理器接收(例如,及/或存取)受試者之3D功能影像,該3D功能影像使用功能成像模態[例如,正電子發射斷層掃描攝影術(PET);例如,單光子發射電腦斷層掃描攝影術(SPECT)]獲得[例如,其中該3D功能影像包括複數個體素,各體素表示該受試者內之特定實體體積且具有表示自該特定實體體積發射之經偵測輻射之強度值,其中該3D功能影像之該複數個體素之至少部分表示目標組織區域內之實體體積];(b)藉由該處理器識別該3D功能影像內之高強度體積,該高強度體積對應於其中在正常情況下發生高放射性藥物攝取之特定高攝取組織區域(例如,腎臟;例如,肝臟;例如,膀胱);(c)藉由該處理器基於該經識別之高強度體積識別該3D功能影像內之抑制體積,該抑制體積對應於位於該經識別之高強度體積之邊界之外且在距該經識別之高強度體積之該邊界之預定衰減距離內的體積;(d)藉由該處理器判定對應於該3D功能影像之背景影像,其中用基於該抑制體積內之該3D功能影像之體素之強度判定之內插值來取代該高強度體積內之體素之強度;(e)藉由該處理器藉由自來自該3D功能影像之體素之強度減去該背景影像之體素之強度(例如,執行逐體素減法)來判定估計影像;(f)藉由該處理器藉由以下來判定抑制圖:將對應於該高強度體積之該估計影像之體素之強度外推至該抑制體積內之體素之位置以判定對應於該抑制體積之該抑制圖之體素之強度;及將對應於該抑制體積之外之位置的該抑制圖之體素之強度設定為零;及(g)藉由該處理器基於該抑制圖來調整該3D功能影像之體素之強度(例如,藉由自該3D功能影像之體素之強度減去該抑制圖之體素之強度),從而校正來自該高強度體積之強度滲出。In another aspect, the invention is directed to a method of correcting for intensity bleed (e.g., crosstalk) from high-uptake tissue regions within a subject associated with high radiopharmaceutical uptake under normal circumstances (e.g., and not necessarily indicative of cancer), the method comprising: (a) receiving (e.g., and/or accessing), by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality [e.g., positron emission tomography (PET); e.g., single-photon emission computed tomography (SPECT)] [e.g., wherein the 3D functional image comprises a plurality of voxels, each representing a particular physical volume within the subject and having an intensity value representing detected radiation emitted from the particular physical volume, wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within a target tissue region]; (b) identifying, by the processor, a high-intensity volume within the 3D functional image, the high-intensity volume corresponding to a particular high-uptake tissue region in which high radiopharmaceutical uptake occurs under normal circumstances (e.g., a kidney; e.g., a liver; e.g., a bladder); (c) identifying, by the processor, based on the identified high-intensity volume, a suppression volume within the 3D functional image, the suppression volume corresponding to a volume lying outside a boundary of the identified high-intensity volume and within a predetermined decay distance from the boundary of the identified high-intensity volume; (d) determining, by the processor, a background image corresponding to the 3D functional image, wherein intensities of voxels within the high-intensity volume are replaced with interpolated values determined based on intensities of voxels of the 3D functional image within the suppression volume; (e) determining, by the processor, an estimation image by subtracting intensities of voxels of the background image from intensities of voxels of the 3D functional image (e.g., performing a voxel-wise subtraction); (f) determining, by the processor, a suppression map by: extrapolating intensities of voxels of the estimation image corresponding to the high-intensity volume to locations of voxels within the suppression volume, to determine intensities of voxels of the suppression map corresponding to the suppression volume; and setting intensities of voxels of the suppression map corresponding to locations outside the suppression volume to zero; and (g) adjusting, by the processor, intensities of voxels of the 3D functional image based on the suppression map (e.g., by subtracting intensities of voxels of the suppression map from intensities of voxels of the 3D functional image), thereby correcting for intensity bleed from the high-intensity volume.
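The suppression-map procedure of steps (b)-(g) can be illustrated with a simplified 1-D analogue. Everything below is a sketch under assumed choices (linear background interpolation, linear falloff over the decay distance, clamping at zero); the actual 3-D interpolation and extrapolation details are not specified by this summary.

```python
import numpy as np

def correct_bleed_1d(signal, hi_lo, hi_hi, decay=3):
    """1-D analogue of the suppression-map correction.

    (b) [hi_lo, hi_hi] marks the high-intensity (organ) region;
    (c) the suppression region is the `decay`-sample band just outside;
    (d) a background is formed by interpolating across the organ;
    (e) the estimation signal = original - background;
    (f) the organ's edge estimate is extrapolated into the suppression
        band with a linear falloff (zero elsewhere);
    (g) the extrapolated bleed is subtracted outside the organ.
    """
    x = np.asarray(signal, dtype=float).copy()
    n = len(x)
    # (d) linear background across the organ, anchored outside the band
    a = x[max(0, hi_lo - decay - 1)]
    b = x[min(n - 1, hi_hi + decay + 1)]
    bg = x.copy()
    for i in range(hi_lo, hi_hi + 1):
        t = (i - hi_lo + 1) / (hi_hi - hi_lo + 2)
        bg[i] = a + t * (b - a)
    est = x - bg                       # (e) organ signal above background
    # (f) suppression map: linear falloff over the decay distance
    supp = np.zeros(n)
    for k in range(1, decay + 1):
        falloff = 1.0 - k / (decay + 1)
        if hi_lo - k >= 0:
            supp[hi_lo - k] = est[hi_lo] * falloff
        if hi_hi + k < n:
            supp[hi_hi + k] = est[hi_hi] * falloff
    # (g) subtract the estimated bleed (clamped at zero for this sketch)
    return np.maximum(x - supp, 0.0)
```

Applied once per organ, one at a time, this mirrors the sequential correction for multiple high-intensity volumes described elsewhere in this summary.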

在某些實施例中,方法包括以循序方式針對複數個高強度體積之各者執行步驟(b)至步驟(g),從而校正來自該複數個高強度體積之各者之強度滲出。In certain embodiments, the method includes performing steps (b) through (g) for each of the plurality of high-intensity volumes in a sequential manner, thereby correcting for intensity bleed from each of the plurality of high-intensity volumes.

在某些實施例中,該複數個高強度體積包括選自由腎臟、肝臟及膀胱(例如,尿膀胱)組成之群組之一或多個成員。In certain embodiments, the plurality of high-intensity volumes comprise one or more members selected from the group consisting of kidney, liver, and bladder (eg, urinary bladder).

在另一態樣中,本發明係關於一種用於自動化處理受試者之3D影像以識別及/或表徵化(例如,分級)該受試者內之癌性病變之方法,該方法包括:(a)藉由運算裝置之處理器接收(例如,及/或存取)使用功能成像模態[例如,正電子發射斷層掃描攝影術(PET);例如,單光子發射電腦斷層掃描攝影術(SPECT)]獲得的該受試者之3D功能影像[例如,其中該3D功能影像包括複數個體素,各體素表示該受試者內之特定實體體積且具有表示自該特定實體體積發射之經偵測輻射之強度值,其中該3D功能影像之該複數個體素之至少部分表示目標組織區域內之實體體積];(b)藉由該處理器自動偵測該3D功能影像內之一或多個熱點,各熱點對應於相對於其周圍提高強度之局部區域且表示(例如,指示)該受試者內之潛在癌性病變;(c)藉由該處理器引起轉列該一或多個熱點之圖形表示以用於在互動式圖形使用者介面(GUI) (例如,品質控制及報告GUI)內顯示;(d)藉由該處理器經由該互動式GUI接收包括至少一部分(至多為全部)該一或多個經自動偵測之熱點之最終熱點集合的使用者選擇(例如,以用於包含於報告中);及(e)儲存及/或提供該最終熱點集合以供顯示及/或進一步處理。In another aspect, the invention is directed to a method for automated processing of 3D images of a subject to identify and/or characterize (e.g., grade) cancerous lesions within the subject, the method comprising: (a) receiving (e.g., and/or accessing), by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality [e.g., positron emission tomography (PET); e.g., single-photon emission computed tomography (SPECT)] [e.g., wherein the 3D functional image comprises a plurality of voxels, each representing a particular physical volume within the subject and having an intensity value representing detected radiation emitted from the particular physical volume, wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within a target tissue region]; (b) automatically detecting, by the processor, one or more hotspots within the 3D functional image, each hotspot corresponding to a local region of elevated intensity relative to its surroundings and representing (e.g., indicative of) a potential cancerous lesion within the subject; (c) causing, by the processor, a graphical representation of the one or more hotspots to be rendered for display within an interactive graphical user interface (GUI) (e.g., a quality control and reporting GUI); (d) receiving, by the processor, via the interactive GUI, a user selection of a final hotspot set comprising at least a portion (up to all) of the one or more automatically detected hotspots (e.g., for inclusion in a report); and (e) storing and/or providing the final hotspot set for display and/or further processing.

在某些實施例中,方法包括:(f)藉由該處理器經由該GUI接收一或多個額外、使用者識別之熱點之使用者選擇以用於包含於該最終熱點集合中;及(g)藉由該處理器更新該最終熱點集合以包含該一或多個額外使用者識別之熱點。In certain embodiments, the method comprises: (f) receiving, by the processor, via the GUI, a user selection of one or more additional, user-identified hotspots for inclusion in the final hotspot set; and (g) updating, by the processor, the final hotspot set to include the one or more additional user-identified hotspots.

在某些實施例中,步驟(b)包括使用一或多個機器學習模組。In certain embodiments, step (b) includes using one or more machine learning modules.

在另一態樣中,本發明係關於一種用於自動化處理受試者之3D影像以識別及/或表徵化(例如,分級)該受試者內之癌性病變之方法,該方法包括:(a)藉由運算裝置之處理器接收(例如,及/或存取)使用功能成像模態[例如,正電子發射斷層掃描攝影術(PET);例如,單光子發射電腦斷層掃描攝影術(SPECT)]獲得的該受試者之3D功能影像[例如,其中該3D功能影像包括複數個體素,各體素表示該受試者內之特定實體體積且具有表示自該特定實體體積發射之經偵測輻射之強度值,其中該3D功能影像之該複數個體素之至少部分表示目標組織區域內之實體體積];(b)藉由該處理器自動偵測該3D功能影像內之一或多個熱點,各熱點對應於相對於其周圍提高強度之局部區域且表示(例如,指示)該受試者內之潛在癌性病變;(c)藉由該處理器針對至少一部分該一或多個熱點之各者自動判定對應於其中熱點表示之潛在癌性病變經判定[例如,藉由處理器(例如,基於經接收及/或判定之3D分割圖)]為定位[例如,於前列腺、骨盆淋巴結、非骨盆淋巴結、骨(例如,骨轉移性區域)及不位於前列腺或淋巴結中之軟組織區域內]之受試者內之特定解剖區域及/或解剖區域群組的解剖分類;及(d)儲存及/或提供該一或多個熱點之識別以及針對各熱點之對應於該熱點之解剖分類以供顯示及/或進一步處理。In another aspect, the invention is directed to a method for automated processing of 3D images of a subject to identify and/or characterize (e.g., grade) cancerous lesions within the subject, the method comprising: (a) receiving (e.g., and/or accessing), by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality [e.g., positron emission tomography (PET); e.g., single-photon emission computed tomography (SPECT)] [e.g., wherein the 3D functional image comprises a plurality of voxels, each representing a particular physical volume within the subject and having an intensity value representing detected radiation emitted from the particular physical volume, wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within a target tissue region]; (b) automatically detecting, by the processor, one or more hotspots within the 3D functional image, each hotspot corresponding to a local region of elevated intensity relative to its surroundings and representing (e.g., indicative of) a potential cancerous lesion within the subject; (c) automatically determining, by the processor, for each of at least a portion of the one or more hotspots, an anatomical classification corresponding to a particular anatomical region and/or group of anatomical regions within the subject in which the potential cancerous lesion represented by the hotspot is determined [e.g., by the processor (e.g., based on a received and/or determined 3D segmentation map)] to be located [e.g., within a prostate, pelvic lymph nodes, non-pelvic lymph nodes, bone (e.g., a region of bone metastasis), or a soft-tissue region not located in the prostate or lymph nodes]; and (d) storing and/or providing an identification of the one or more hotspots along with, for each hotspot, the anatomical classification corresponding to that hotspot, for display and/or further processing.

In certain embodiments, step (b) comprises using one or more machine learning modules.

In another aspect, the invention relates to a system for automated processing of 3D images of a subject to identify and/or characterize (e.g., grade) cancerous lesions within the subject, the system comprising: a processor of a computing device; and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to: (a) receive (e.g., and/or access) a 3D functional image of the subject obtained using a functional imaging modality [e.g., positron emission tomography (PET); e.g., single-photon emission computed tomography (SPECT)] [e.g., wherein the 3D functional image comprises a plurality of voxels, each voxel representing a particular physical volume within the subject and having an intensity value (e.g., a standardized uptake value (SUV)) representing detected radiation emitted from that particular physical volume, wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within a target tissue region]; (b) automatically detect, using a machine learning module [e.g., a pre-trained machine learning module (e.g., having predetermined (e.g., and fixed) parameters determined via a training procedure)], one or more hotspots within the 3D functional image, each hotspot corresponding to a local region of increased intensity relative to its surroundings and representing (e.g., indicative of) a potential cancerous lesion within the subject, thereby creating one or both of (i) and (ii) as follows: (i) a hotspot list [e.g., a list of coordinates (e.g., image coordinates; e.g., physical-space coordinates); e.g., a mask identifying voxels of the 3D functional image, each voxel corresponding to a location (e.g., a centroid) of a detected hotspot] that identifies, for each hotspot, a location of the hotspot, and (ii) a 3D hotspot map that identifies, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image {e.g., wherein the 3D hotspot map is a segmentation map (e.g., comprising one or more segmentation masks) that identifies, for each hotspot, the voxels of the 3D functional image corresponding to the 3D hotspot volume of that hotspot [e.g., wherein the 3D hotspot map is obtained via artificial-intelligence-based segmentation of the functional image (e.g., using a machine learning module that receives at least the 3D functional image as input and produces the 3D hotspot map as output, thereby segmenting hotspots)]; e.g., wherein the 3D hotspot map delineates, for each hotspot, a 3D boundary (e.g., an irregular boundary) of the hotspot (e.g., the 3D boundary enclosing the 3D hotspot volume, e.g., and distinguishing the voxels of the 3D functional image that make up the 3D hotspot volume from other voxels of the 3D functional image)}; and (c) store and/or provide the hotspot list and/or the 3D hotspot map for display and/or further processing.
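For orientation only, the detection step (b) can be pictured with a simple rule-based stand-in — not the machine learning module the system actually uses — that flags voxels exceeding a local background estimate and groups them into connected 3D hotspot volumes. All function names and parameter values below are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def detect_hotspots(volume, background_size=9, min_contrast=2.0):
    """Detect local regions of increased intensity relative to their surroundings.

    Returns (centroids, heatmap): a hotspot list of centroid coordinates, and a
    labeled 3D map assigning each voxel to a hotspot volume (0 = background).
    """
    # Estimate the local background with a mean filter; a voxel is a hotspot
    # candidate when it exceeds that background by at least `min_contrast`.
    background = ndimage.uniform_filter(volume.astype(float), size=background_size)
    candidate = volume > background + min_contrast
    # Group candidate voxels into connected 3D hotspot volumes.
    heatmap, n_hotspots = ndimage.label(candidate)
    centroids = ndimage.center_of_mass(volume, heatmap, list(range(1, n_hotspots + 1)))
    return centroids, heatmap
```

A trained segmentation network would replace the thresholding heuristic here, but the two outputs — a per-hotspot location list and a labeled volume map — correspond to items (i) and (ii) above.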

In certain embodiments, the machine learning module receives at least a portion of the 3D functional image as input and automatically detects the one or more hotspots based at least in part on intensities of voxels of the received portion of the 3D functional image.

In certain embodiments, the machine learning module receives as input a 3D segmentation map that identifies one or more volumes of interest (VOIs) within the 3D functional image, each VOI corresponding to a particular target tissue region and/or particular anatomical region within the subject [e.g., a soft-tissue region (e.g., prostate, lymph node, lung, breast); e.g., one or more particular bones; e.g., an overall skeletal region].

In certain embodiments, the instructions cause the processor to: receive (e.g., and/or access) a 3D anatomical image of the subject obtained using an anatomical imaging modality [e.g., x-ray computed tomography (CT); e.g., magnetic resonance imaging (MRI); e.g., ultrasound], wherein the 3D anatomical image comprises a graphical representation of tissue (e.g., soft tissue and/or bone) within the subject, and the machine learning module receives at least two input channels, the input channels comprising a first input channel corresponding to at least a portion of the 3D anatomical image and a second input channel corresponding to at least a portion of the 3D functional image [e.g., wherein the machine learning module receives a PET image and a CT image as separate channels (e.g., separate channels representing the same volume) (e.g., analogous to a machine learning module receiving two color channels of a photographic color image (RGB))].
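As an illustrative aside (not part of the claimed system), a two-channel input of this kind can be assembled by stacking co-registered anatomical and functional volumes along a channel axis, much as color channels are stacked for a photographic image. The array shapes below are hypothetical:

```python
import numpy as np

# Hypothetical co-registered volumes resampled to a common grid (depth, height, width).
ct = np.random.rand(64, 128, 128).astype(np.float32)   # anatomical channel (CT)
pet = np.random.rand(64, 128, 128).astype(np.float32)  # functional channel (PET)

# Channels-first tensor with one channel per modality, analogous to RGB channels.
x = np.stack([ct, pet], axis=0)
assert x.shape == (2, 64, 128, 128)
```

A network consuming this tensor sees aligned anatomical context and functional intensity at every voxel position.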

In certain embodiments, the machine learning module receives as input a 3D segmentation map that identifies one or more volumes of interest (VOIs) within the 3D functional image and/or the 3D anatomical image, each VOI corresponding to a particular target tissue region and/or particular anatomical region.

In certain embodiments, the instructions cause the processor to automatically segment the 3D anatomical image, thereby creating the 3D segmentation map.

In certain embodiments, the machine learning module is a region-specific machine learning module that receives as input a particular portion of the 3D functional image corresponding to one or more particular tissue regions and/or anatomical regions of the subject.

In certain embodiments, the machine learning module produces the hotspot list as output [e.g., wherein the machine learning module implements a machine learning algorithm (e.g., an artificial neural network (ANN)) trained to determine one or more locations (e.g., 3D coordinates) based on intensities of at least a portion of the voxels of the 3D functional image, each location corresponding to a location of one of the one or more hotspots].

In certain embodiments, the machine learning module produces the 3D hotspot map as output [e.g., wherein the machine learning module implements a machine learning algorithm (e.g., an artificial neural network (ANN)) trained to segment the 3D functional image (e.g., based at least in part on intensities of voxels of the 3D functional image) to identify the 3D hotspot volumes of the 3D hotspot map (e.g., the 3D hotspot map delineating, for each hotspot, a 3D boundary (e.g., an irregular boundary) of the hotspot, thereby identifying the 3D hotspot volumes (e.g., as enclosed by the 3D hotspot boundaries)); e.g., wherein the machine learning module implements a machine learning algorithm trained to determine, for each voxel of at least a portion of the 3D functional image, a hotspot likelihood value representing a likelihood that the voxel corresponds to a hotspot (e.g., and step (b) comprises performing one or more subsequent post-processing steps, such as thresholding, to identify the 3D hotspot volumes of the 3D hotspot map using the hotspot likelihood values (e.g., the 3D hotspot map delineating, for each hotspot, a 3D boundary (e.g., an irregular boundary) of the hotspot, thereby identifying the 3D hotspot volumes (e.g., as enclosed by the 3D hotspot boundaries))))].
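The likelihood-map variant described above can be illustrated with a minimal post-processing sketch (a generic numpy/scipy stand-in, not the trained module itself): per-voxel likelihood values are thresholded, and connected components of the resulting mask become labeled 3D hotspot volumes. The threshold and minimum-size values are assumptions:

```python
import numpy as np
from scipy import ndimage

def likelihoods_to_hotspot_volumes(likelihood, threshold=0.5, min_voxels=3):
    """Post-process per-voxel hotspot likelihoods into labeled 3D hotspot volumes."""
    mask = likelihood >= threshold      # fixed-threshold step
    labeled, n = ndimage.label(mask)    # connected components = candidate volumes
    # Drop components smaller than `min_voxels` (assumed to be noise).
    for lab in range(1, n + 1):
        if (labeled == lab).sum() < min_voxels:
            labeled[labeled == lab] = 0
    # Relabel so surviving hotspot ids are consecutive starting at 1.
    labeled, n = ndimage.label(labeled > 0)
    return labeled, n
```

The resulting labeled array plays the role of the 3D hotspot map: each non-zero label delineates one hotspot's (possibly irregular) 3D boundary.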

In certain embodiments, the instructions cause the processor to: (d) determine, for each hotspot of at least a portion of the hotspots, a lesion likelihood classification corresponding to a likelihood that the hotspot represents a lesion within the subject [e.g., a binary classification indicating whether or not the hotspot is a true lesion; e.g., a likelihood value on a scale (e.g., a floating-point value ranging from zero to one) representing the likelihood that the hotspot represents a true lesion].

In certain embodiments, at step (d), the instructions cause the processor to use the machine learning module to determine, for each hotspot of the portion, the lesion likelihood classification [e.g., wherein the machine learning module implements a machine learning algorithm trained to detect hotspots (e.g., producing the hotspot list and/or the 3D hotspot map as output) and to determine, for each hotspot, the lesion likelihood classification for that hotspot].

In certain embodiments, at step (d), the instructions cause the processor to use a second machine learning module (e.g., a hotspot classification module) to determine, for each hotspot, the lesion likelihood classification [e.g., based at least in part on one or more members selected from the group consisting of intensities of the 3D functional image, the hotspot list, the 3D hotspot map, intensities of the 3D anatomical image, and the 3D segmentation map; e.g., wherein the second machine learning module receives one or more input channels corresponding to one or more members selected from the group consisting of intensities of the 3D functional image, the hotspot list, the 3D hotspot map, intensities of the 3D anatomical image, and the 3D segmentation map].

In certain embodiments, the instructions cause the processor to determine, for each hotspot, a set of one or more hotspot features and to use the set of one or more hotspot features as input to the second machine learning module.

In certain of embodiments 55 to 58, the instructions cause the processor to: (e) determine, for each hotspot of at least a portion of the hotspots, a lesion likelihood classification corresponding to a likelihood that the hotspot represents a lesion within the subject.

In certain embodiments, the instructions cause the processor to: (f) [e.g., prior to step (b)] adjust intensities of voxels of the 3D functional image to correct for intensity bleed (e.g., cross-talk) from one or more high-intensity volumes of the 3D functional image, each of the one or more high-intensity volumes corresponding to a high-uptake tissue region within the subject associated with high radiopharmaceutical uptake under normal circumstances (e.g., not necessarily indicative of cancer).

In certain embodiments, at step (f), the instructions cause the processor to correct for intensity bleed from a plurality of high-intensity volumes one at a time, in a sequential fashion [e.g., first adjusting intensities of voxels of the 3D functional image to correct for intensity bleed from a first high-intensity volume to produce a first corrected image, then adjusting intensities of voxels of the first corrected image to correct for intensity bleed from a second high-intensity volume, and so on].

In certain embodiments, the one or more high-intensity volumes correspond to one or more high-uptake tissue regions selected from the group consisting of a kidney, a liver, and a bladder (e.g., a urinary bladder).

In certain embodiments, the instructions cause the processor to: (g) determine, for each of at least a portion of the one or more hotspots, a corresponding lesion index indicative of a level of radiopharmaceutical uptake within, and/or a size (e.g., volume) of, the potential lesion to which the hotspot corresponds.

In certain embodiments, at step (g), the instructions cause the processor to compare one or more intensities (e.g., corresponding to standardized uptake values (SUVs)) of one or more voxels associated with the hotspot (e.g., at and/or about a location of the hotspot; e.g., within a volume of the hotspot) with one or more reference values, each reference value associated with a particular reference tissue region within the subject (e.g., a liver; e.g., an aorta portion) and determined based on intensities (e.g., SUVs) of a reference volume corresponding to the reference tissue region [e.g., determined as an average (e.g., a robust average, such as a mean of values within an interquartile range)].
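The "robust average" mentioned parenthetically above — a mean of the values lying within the interquartile range — can be written compactly. This is a generic statistical sketch, not code from the claimed system:

```python
import numpy as np

def interquartile_mean(values):
    """Mean of the values lying within the interquartile range [Q1, Q3].

    Trimming values outside [Q1, Q3] makes the reference estimate robust to
    outlier voxels (e.g., a few anomalously bright or dark voxels).
    """
    values = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(values, [25, 75])
    return values[(values >= q1) & (values <= q3)].mean()
```

For example, `interquartile_mean([1, 2, 3, 4, 100])` ignores the outlier 100 and returns 3.0, whereas the plain mean would be 22.0.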

In certain embodiments, the one or more reference values comprise one or more members selected from the group consisting of an aorta reference value associated with an aorta portion of the subject and a liver reference value associated with the liver of the subject.

In certain embodiments, for at least one particular reference value associated with a particular reference tissue region, the instructions cause the processor to determine the particular reference value by fitting intensities of voxels within a particular reference volume corresponding to the particular reference tissue region [e.g., by fitting a distribution of the voxel intensities (e.g., a histogram of the voxel intensities)] to a multi-component mixture model (e.g., a two-component Gaussian model) [e.g., and identifying one or more minor peaks in the distribution of voxel intensities, the minor peaks corresponding to voxels associated with anomalous uptake, and excluding those voxels from the reference value determination (e.g., thereby accounting for effects such as anomalously low radiopharmaceutical uptake in particular portions of the reference tissue region, such as portions of the liver)].
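One way to realize a mixture-model fit of this kind — hedged here as a generic two-component EM fit in plain numpy, with the initialization choices being assumptions rather than the patented procedure — is to fit two Gaussians to the reference-volume intensities and keep only the dominant component when computing the reference value:

```python
import numpy as np

def dominant_mode_mean(intensities, n_iter=200):
    """Fit a two-component 1D Gaussian mixture by EM and return the mean of the
    dominant (highest-weight) component, ignoring a minor anomalous-uptake mode."""
    x = np.asarray(intensities, dtype=float)
    # Initialize one component low and one high so EM can separate the modes.
    mu = np.percentile(x, [5.0, 95.0])
    sigma = np.full(2, x.std() + 1e-6)
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each voxel intensity.
        pdf = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
        resp = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: update mixture weights, means, and standard deviations.
        nk = resp.sum(axis=0)
        w = nk / nk.sum()
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return mu[np.argmax(w)]  # reference value: mean intensity of the major mode
```

On liver-like data where, say, 10% of voxels sit in a low-uptake minor mode, the returned value tracks the major mode's mean rather than the pulled-down overall average.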

In certain embodiments, the instructions cause the processor to compute (e.g., automatically), using the determined lesion index values, an overall risk index for the subject indicative of a cancer status and/or cancer risk of the subject.

In certain embodiments, the instructions cause the processor to determine (e.g., automatically), for each hotspot, an anatomical classification corresponding to a particular anatomical region and/or group of anatomical regions within the subject in which the potential cancerous lesion represented by the hotspot is determined [e.g., by the processor (e.g., based on a received and/or determined 3D segmentation map)] to be located [e.g., within the prostate, pelvic lymph nodes, non-pelvic lymph nodes, bone (e.g., regions of bone metastasis), and soft-tissue regions not located in the prostate or lymph nodes].

In certain embodiments, the instructions cause the processor to: (h) cause rendering of a graphical representation of at least a portion of the one or more hotspots for display within a graphical user interface (GUI) for review by a user.

In certain embodiments, the instructions cause the processor to: (i) receive, via the GUI, a user selection of a subset of the one or more hotspots confirmed, via user review, as likely representing underlying cancerous lesions within the subject.

In certain embodiments, the 3D functional image comprises a PET or SPECT image obtained following administration of an agent (e.g., a radiopharmaceutical; e.g., an imaging agent) to the subject. In certain embodiments, the agent comprises a PSMA binding agent. In certain embodiments, the agent comprises 18F. In certain embodiments, the agent comprises [18F]DCFPyL. In certain embodiments, the agent comprises PSMA-11 (e.g., 68Ga-PSMA-11). In certain embodiments, the agent comprises one or more members selected from the group consisting of 99mTc, 68Ga, 177Lu, 225Ac, 111In, 123I, 124I, and 131I.

In certain embodiments, the machine learning module implements a neural network [e.g., an artificial neural network (ANN); e.g., a convolutional neural network (CNN)].

In certain embodiments, the processor is a processor of a cloud-based system.

In another aspect, the invention relates to a system for automated processing of 3D images of a subject to identify and/or characterize (e.g., grade) cancerous lesions within the subject, the system comprising: a processor of a computing device; and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to: (a) receive (e.g., and/or access) a 3D functional image of the subject obtained using a functional imaging modality [e.g., positron emission tomography (PET); e.g., single-photon emission computed tomography (SPECT)] [e.g., wherein the 3D functional image comprises a plurality of voxels, each voxel representing a particular physical volume within the subject and having an intensity value representing detected radiation emitted from that particular physical volume, wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within a target tissue region]; (b) receive (e.g., and/or access) a 3D anatomical image of the subject obtained using an anatomical imaging modality [e.g., x-ray computed tomography (CT); e.g., magnetic resonance imaging (MRI); e.g., ultrasound], wherein the 3D anatomical image comprises a graphical representation of tissue (e.g., soft tissue and/or bone) within the subject; (c) automatically detect, using a machine learning module, one or more hotspots within the 3D functional image, each hotspot corresponding to a local region of increased intensity relative to its surroundings and representing (e.g., indicative of) a potential cancerous lesion within the subject, thereby creating one or both of (i) and (ii) as follows: (i) a hotspot list that identifies, for each hotspot, a location of the hotspot, and (ii) a 3D hotspot map that identifies, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image {e.g., wherein the 3D hotspot map is a segmentation map (e.g., comprising one or more segmentation masks) that identifies, for each hotspot, the voxels of the 3D functional image corresponding to the 3D hotspot volume of that hotspot [e.g., wherein the 3D hotspot map is obtained via artificial-intelligence-based segmentation of the functional image (e.g., using a machine learning module that receives at least the 3D functional image as input and produces the 3D hotspot map as output, thereby segmenting hotspots)]; e.g., wherein the 3D hotspot map delineates, for each hotspot, a 3D boundary (e.g., an irregular boundary) of the hotspot (e.g., the 3D boundary enclosing the 3D hotspot volume, e.g., and distinguishing the voxels of the 3D functional image that make up the 3D hotspot volume from other voxels of the 3D functional image)}, wherein the machine learning module receives at least two input channels, the input channels comprising a first input channel corresponding to at least a portion of the 3D anatomical image and a second input channel corresponding to at least a portion of the 3D functional image [e.g., wherein the machine learning module receives a PET image and a CT image as separate channels (e.g., separate channels representing the same volume) (e.g., analogous to a machine learning module receiving two color channels of a photographic color image (RGB))] and/or anatomical information derived therefrom [e.g., a 3D segmentation map that identifies one or more volumes of interest (VOIs) within the 3D functional image, each VOI corresponding to a particular target tissue region and/or particular anatomical region]; and (d) store and/or provide the hotspot list and/or the 3D hotspot map for display and/or further processing.

In another aspect, the invention relates to a system for automated processing of 3D images of a subject to identify and/or characterize (e.g., grade) cancerous lesions within the subject, the system comprising: a processor of a computing device; and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to: (a) receive (e.g., and/or access) a 3D functional image of the subject obtained using a functional imaging modality [e.g., positron emission tomography (PET); e.g., single-photon emission computed tomography (SPECT)] [e.g., wherein the 3D functional image comprises a plurality of voxels, each voxel representing a particular physical volume within the subject and having an intensity value representing detected radiation emitted from that particular physical volume, wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within a target tissue region]; (b) automatically detect, using a first machine learning module, one or more hotspots within the 3D functional image, each hotspot corresponding to a local region of increased intensity relative to its surroundings and representing (e.g., indicative of) a potential cancerous lesion within the subject, thereby creating a hotspot list that identifies, for each hotspot, a location of the hotspot [e.g., wherein the first machine learning module implements a machine learning algorithm (e.g., an artificial neural network (ANN)) trained to determine one or more locations (e.g., 3D coordinates) based on intensities of at least a portion of the voxels of the 3D functional image, each location corresponding to a location of one of the one or more hotspots]; (c) automatically determine, using a second machine learning module and the hotspot list, for each of the one or more hotspots, a corresponding 3D hotspot volume within the 3D functional image, thereby creating a 3D hotspot map [e.g., wherein the second machine learning module implements a machine learning algorithm (e.g., an artificial neural network (ANN)) trained to segment the 3D functional image, based at least in part on the hotspot list along with intensities of voxels of the 3D functional image, to identify the 3D hotspot volumes of the 3D hotspot map; e.g., wherein the second machine learning module implements a machine learning algorithm trained to determine, for each voxel of at least a portion of the 3D functional image, a hotspot likelihood value representing a likelihood that the voxel corresponds to a hotspot (e.g., and step (c) comprises performing one or more subsequent post-processing steps, such as thresholding, to identify the 3D hotspot volumes of the 3D hotspot map using the hotspot likelihood values)] [e.g., wherein the 3D hotspot map is a segmentation map (e.g., comprising one or more segmentation masks) created using (e.g., based on and/or corresponding to output from) the second machine learning module, the 3D hotspot map identifying, for each hotspot, the voxels of the 3D functional image corresponding to the 3D hotspot volume of that hotspot; e.g., wherein the 3D hotspot map delineates, for each hotspot, a 3D boundary (e.g., an irregular boundary) of the hotspot (e.g., the 3D boundary enclosing the 3D hotspot volume, e.g., and distinguishing the voxels of the 3D functional image that make up the 3D hotspot volume from other voxels of the 3D functional image)]; and (d) store and/or provide the hotspot list and/or the 3D hotspot map for display and/or further processing.

In certain embodiments, the instructions cause the processor to: (e) determine, for each hotspot of at least a portion of the hotspots, a lesion likelihood classification corresponding to a likelihood that the hotspot represents a lesion within the subject.

In certain embodiments, at step (e), the instructions cause the processor to use a third machine learning module (e.g., a hotspot classification module) to determine, for each hotspot, the lesion likelihood classification [e.g., based at least in part on one or more members selected from the group consisting of intensities of the 3D functional image, the hotspot list, the 3D hotspot map, intensities of the 3D anatomical image, and the 3D segmentation map; e.g., wherein the third machine learning module receives one or more input channels corresponding to one or more members selected from the group consisting of intensities of the 3D functional image, the hotspot list, the 3D hotspot map, intensities of the 3D anatomical image, and the 3D segmentation map].

In certain embodiments, the instructions cause the processor to: (f) select, based at least in part on the lesion likelihood classifications of the hotspots, a subset of the one or more hotspots corresponding to hotspots having a high likelihood of corresponding to cancerous lesions (e.g., for inclusion in a report; e.g., for use in computing one or more risk index values for the subject).

In another aspect, the invention relates to a system for measuring intensity values within a reference volume corresponding to a reference tissue region (e.g., a liver volume associated with a subject's liver) in a manner that avoids influence from tissue regions associated with low (e.g., abnormally low) radiopharmaceutical uptake (e.g., due to tumors without tracer uptake), the system comprising: a processor of a computing device; and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to: (a) receive (e.g., and/or access) a 3D functional image of the subject, obtained using a functional imaging modality [e.g., positron emission tomography (PET); e.g., single-photon emission computed tomography (SPECT)] [e.g., wherein the 3D functional image comprises a plurality of voxels, each voxel representing a particular physical volume within the subject and having an intensity value representing detected radiation emitted from that particular physical volume, wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within a target tissue region]; (b) identify the reference volume within the 3D functional image; (c) fit a multi-component mixture model (e.g., a two-component Gaussian mixture model) to the intensities of voxels within the reference volume [e.g., fit the multi-component mixture model to a distribution (e.g., a histogram) of the intensities of voxels within the reference volume]; (d) identify a dominant mode of the multi-component model; (e) determine a measure (e.g., mean, maximum, mode, median, etc.) of intensity corresponding to the dominant mode, thereby determining a reference intensity value corresponding to a measure of the intensities of voxels that are (i) within the reference tissue volume and (ii) associated with the dominant mode (e.g., excluding, from the reference value computation, voxels with intensities associated with a minor mode) (e.g., thereby avoiding influence from tissue regions associated with low radiopharmaceutical uptake); (f) detect, within the 3D functional image, one or more hotspots corresponding to potential cancerous lesions; and (g) for each hotspot of at least a portion of the detected hotspots, determine a lesion index value using at least the reference intensity value [e.g., the lesion index value based on (i) a measure of intensity of voxels corresponding to the detected hotspot and (ii) the reference intensity value].
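Steps (c) through (e) above — fitting a two-component mixture to the reference-volume intensities and keeping only the dominant mode — can be sketched as follows. This is a minimal illustrative EM fit in plain NumPy, not the patented implementation; the percentile-based initialization and the 0.5 responsibility cut-off are assumptions.

```python
import numpy as np

def liver_reference_intensity(voxels, n_iter=200, tol=1e-8):
    """Fit a two-component 1D Gaussian mixture to voxel intensities via EM,
    then return the mean intensity of voxels assigned to the dominant
    (largest-weight) component, excluding the minor mode (e.g., tumor
    regions with little or no tracer uptake)."""
    x = np.asarray(voxels, dtype=float)
    # Initialization (assumed): component means at the 10th/90th percentiles
    mu = np.array([np.percentile(x, 10), np.percentile(x, 90)])
    var = np.array([x.var(), x.var()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: per-voxel responsibilities of each component
        pdf = w * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances
        nk = r.sum(axis=0)
        new_mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - new_mu) ** 2).sum(axis=0) / nk + 1e-6
        w = nk / len(x)
        converged = np.abs(new_mu - mu).max() < tol
        mu = new_mu
        if converged:
            break
    major = int(np.argmax(w))        # dominant mode = largest mixture weight
    keep = r[:, major] > 0.5         # voxels associated with the dominant mode
    return x[keep].mean()            # reference intensity value (here: mean)
```

The reference value is then the chosen intensity measure over the dominant-mode voxels only, so a large photopenic tumor in the liver does not drag the reference down.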

In another aspect, the invention relates to a system for correcting intensity bleed (e.g., cross-talk) originating from high-uptake tissue regions within a subject that are associated with high radiopharmaceutical uptake under normal circumstances (e.g., not necessarily indicative of cancer), the system comprising: a processor of a computing device; and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to: (a) receive (e.g., and/or access) a 3D functional image of the subject, obtained using a functional imaging modality [e.g., positron emission tomography (PET); e.g., single-photon emission computed tomography (SPECT)] [e.g., wherein the 3D functional image comprises a plurality of voxels, each voxel representing a particular physical volume within the subject and having an intensity value representing detected radiation emitted from that particular physical volume, wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within a target tissue region]; (b) identify a high-intensity volume within the 3D functional image, the high-intensity volume corresponding to a particular high-uptake tissue region in which high radiopharmaceutical uptake occurs under normal circumstances (e.g., a kidney; e.g., a liver; e.g., a bladder); (c) identify, based on the identified high-intensity volume, a suppression volume within the 3D functional image, the suppression volume corresponding to a volume located outside a boundary of the identified high-intensity volume and within a predetermined decay distance from that boundary; (d) determine a background image corresponding to the 3D functional image, wherein intensities of voxels within the high-intensity volume are replaced with interpolated values determined based on intensities of voxels of the 3D functional image within the suppression volume; (e) determine an estimation image by subtracting intensities of voxels of the background image from intensities of voxels of the 3D functional image (e.g., performing a voxel-wise subtraction); (f) determine a suppression map by: extrapolating intensities of voxels of the estimation image corresponding to the high-intensity volume to locations of voxels within the suppression volume, thereby determining intensities of voxels of the suppression map corresponding to the suppression volume; and setting intensities of voxels of the suppression map corresponding to locations outside the suppression volume to zero; and (g) adjust intensities of voxels of the 3D functional image based on the suppression map (e.g., by subtracting intensities of voxels of the suppression map from intensities of voxels of the 3D functional image), thereby correcting intensity bleed from the high-intensity volume.
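A one-dimensional toy of steps (b) through (g) may help fix ideas. The nearest-boundary extrapolation with linear falloff and the final clipping at zero are assumptions made for this sketch; the claim leaves the interpolation and extrapolation schemes open.

```python
import numpy as np

def correct_bleed_1d(img, organ_mask, decay_dist=3):
    """1D toy of suppression-map bleed correction (steps b-g)."""
    idx = np.arange(img.size)
    organ = np.flatnonzero(organ_mask)          # (b) high-intensity volume
    dist = np.min(np.abs(idx[:, None] - organ[None, :]), axis=1)
    # (c) suppression volume: outside the organ, within decay_dist of its boundary
    suppress = (~organ_mask) & (dist <= decay_dist)
    sup_idx = np.flatnonzero(suppress)
    # (d) background image: organ intensities replaced by values interpolated
    #     from the surrounding suppression-volume voxels
    background = img.astype(float).copy()
    background[organ] = np.interp(organ, sup_idx, img[sup_idx])
    # (e) estimation image: organ signal above background
    estimate = img - background
    # (f) suppression map: extrapolate the organ estimate into the suppression
    #     volume (nearest organ voxel, linear falloff -- an assumed scheme),
    #     zero everywhere else
    smap = np.zeros(img.size)
    for s in sup_idx:
        nearest = organ[np.argmin(np.abs(organ - s))]
        smap[s] = estimate[nearest] * (1.0 - dist[s] / (decay_dist + 1))
    # (g) subtract the suppression map (clipped at zero for this toy)
    return np.maximum(img - smap, 0.0)
```

Only voxels in the suppression volume are adjusted; the organ itself and distant background are left untouched, which matches the zeroing of the map outside the suppression volume.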

In certain embodiments, the instructions cause the processor to perform steps (b) through (g) for each of a plurality of high-intensity volumes in a sequential fashion, thereby correcting intensity bleed from each of the plurality of high-intensity volumes.

In certain embodiments, the plurality of high-intensity volumes comprises one or more members selected from the group consisting of a kidney, a liver, and a bladder (e.g., a urinary bladder).

In another aspect, the invention relates to a system for automated processing of 3D images of a subject to identify and/or characterize (e.g., grade) cancerous lesions within the subject, the system comprising: a processor of a computing device; and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to: (a) receive (e.g., and/or access) a 3D functional image of the subject obtained using a functional imaging modality [e.g., positron emission tomography (PET); e.g., single-photon emission computed tomography (SPECT)] [e.g., wherein the 3D functional image comprises a plurality of voxels, each voxel representing a particular physical volume within the subject and having an intensity value representing detected radiation emitted from that particular physical volume, wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within a target tissue region]; (b) automatically detect one or more hotspots within the 3D functional image, each hotspot corresponding to a local region of elevated intensity relative to its surroundings and representing (e.g., indicative of) a potential cancerous lesion within the subject; (c) cause rendering of a graphical representation of the one or more hotspots for display within an interactive graphical user interface (GUI) (e.g., a quality-control and reporting GUI); (d) receive, via the interactive GUI, a user selection of a final hotspot set comprising at least a portion (up to all) of the one or more automatically detected hotspots (e.g., for inclusion in a report); and (e) store and/or provide the final hotspot set for display and/or further processing.

In certain embodiments, the instructions cause the processor to: (f) receive, via the GUI, a user selection of one or more additional, user-identified hotspots for inclusion in the final hotspot set; and (g) update the final hotspot set to include the one or more additional user-identified hotspots.

In certain embodiments, at step (b), the instructions cause the processor to use one or more machine learning modules.

In another aspect, the invention relates to a method for automated processing of 3D images of a subject to identify and/or characterize (e.g., grade) cancerous lesions within the subject, the method comprising: (a) receiving (e.g., and/or accessing), by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality [e.g., positron emission tomography (PET); e.g., single-photon emission computed tomography (SPECT)]; (b) receiving (e.g., and/or accessing), by the processor, a 3D anatomical image of the subject obtained using an anatomical imaging modality [e.g., a computed tomography (CT) image; a magnetic resonance (MR) image]; (c) receiving (e.g., and/or accessing), by the processor, a 3D segmentation map identifying, within the 3D functional image and/or within the 3D anatomical image, one or more particular tissue regions or groups of tissue regions (e.g., a group of tissue regions corresponding to a particular anatomical region; e.g., a group of tissue regions comprising organs in which high or low radiopharmaceutical uptake occurs); (d) automatically detecting and/or segmenting, by the processor, using one or more machine learning modules, a set of one or more hotspots within the 3D functional image, each hotspot corresponding to a local region of elevated intensity relative to its surroundings and representing a potential cancerous lesion within the subject, thereby creating one or both of (i) and (ii) as follows: (i) a hotspot list identifying, for each hotspot, a location of the hotspot [e.g., as detected by the one or more machine learning modules], and (ii) a 3D hotspot map identifying, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image [e.g., as determined via segmentation performed by the one or more machine learning modules] [e.g., wherein the 3D hotspot map is a segmentation map that delineates, for each hotspot, a 3D hotspot boundary of the hotspot (e.g., an irregular boundary) (e.g., the 3D boundary enclosing the 3D hotspot volume)], wherein at least one (e.g., up to all) of the one or more machine learning modules receives (i) the 3D functional image, (ii) the 3D anatomical image, and (iii) the 3D segmentation map as input; and (e) storing and/or providing the hotspot list and/or the 3D hotspot map for display and/or further processing.

In certain embodiments, the method comprises: receiving, by the processor, an initial 3D segmentation map identifying one or more (e.g., a plurality of) particular tissue regions (e.g., organs and/or particular bones) within the 3D anatomical image and/or the 3D functional image; identifying, by the processor, at least a portion of the one or more particular tissue regions as belonging to particular ones of one or more tissue groups (e.g., predefined groups), and updating, by the processor, the 3D segmentation map to indicate the identified particular regions as belonging to the particular tissue groups; and using, by the processor, the updated 3D segmentation map as input to at least one of the one or more machine learning modules.

In certain embodiments, the one or more tissue groups comprise a soft-tissue group, such that particular tissue regions representing soft tissue are identified as belonging to the soft-tissue group. In certain embodiments, the one or more tissue groups comprise a bone-tissue group, such that particular tissue regions representing bone are identified as belonging to the bone-tissue group. In certain embodiments, the one or more tissue groups comprise a high-uptake organ group, such that one or more organs associated with high radiopharmaceutical uptake (e.g., under normal circumstances, and not necessarily due to the presence of lesions) are identified as belonging to the high-uptake group.
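The organ-to-tissue-group relabeling described above can be sketched as a lookup-table remapping of a per-organ segmentation map. The organ labels and group names below are hypothetical, chosen only to illustrate the update.

```python
import numpy as np

GROUP_OF = {  # hypothetical organ labels -> tissue groups
    1: "soft_tissue",  # e.g., a muscle region
    2: "bone",         # e.g., left femur
    3: "bone",         # e.g., sternum
    4: "high_uptake",  # e.g., liver
    5: "high_uptake",  # e.g., urinary bladder
}
GROUP_ID = {"background": 0, "soft_tissue": 1, "bone": 2, "high_uptake": 3}

def to_group_map(seg):
    """Collapse a per-organ 3D segmentation map into tissue-group labels,
    producing the updated map used as machine-learning-module input."""
    lut = np.zeros(max(GROUP_OF) + 1, dtype=np.int32)  # 0 stays background
    for organ, group in GROUP_OF.items():
        lut[organ] = GROUP_ID[group]
    return lut[seg]  # vectorized label lookup over the whole volume
```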

In certain embodiments, the method comprises, for each detected and/or segmented hotspot, determining, by the processor, a classification of the hotspot [e.g., according to anatomical location; e.g., classifying the hotspot as bone, lymph, or prostate; e.g., assigning an alphanumeric code, such as the labeling scheme in Table 1, based on the determined (e.g., by the processor) location of the hotspot within the subject].

In certain embodiments, the method comprises using at least one of the one or more machine learning modules to determine, for each detected and/or segmented lesion, the classification of the hotspot (e.g., wherein a single machine learning module performs detection, segmentation, and classification).

In certain embodiments, the one or more machine learning modules comprise: (A) a whole-body lesion detection module that detects and/or segments hotspots throughout the entire body; and (B) a prostate lesion module that detects and/or segments hotspots within the prostate. In certain embodiments, the method comprises generating a hotspot list and/or map using each of (A) and (B) and merging the results.

In certain embodiments, step (d) comprises segmenting and classifying the set of one or more hotspots to create a labeled 3D hotspot map that identifies, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image, wherein each hotspot volume is labeled as belonging to a particular hotspot class of a plurality of hotspot classes [e.g., each hotspot class identifying a particular anatomical and/or tissue region (e.g., lymph, bone, prostate) in which the lesion represented by the hotspot is determined to be located], by: segmenting a first initial set of one or more hotspots within the 3D functional image using a first machine learning module, thereby creating a first initial 3D hotspot map identifying a first initial set of hotspot volumes, wherein the first machine learning module segments hotspots of the 3D functional image according to a single hotspot class [e.g., identifying all hotspots as belonging to a single hotspot class, so as to distinguish background regions from hotspot volumes (e.g., but not to distinguish between different types of hotspots) (e.g., such that each hotspot volume identified by the first 3D hotspot map is labeled as belonging to a single hotspot class, as opposed to background)]; segmenting a second initial set of one or more hotspots within the 3D functional image using a second machine learning module, thereby creating a second initial 3D hotspot map identifying a second initial set of hotspot volumes, wherein the second machine learning module segments the 3D functional image according to a plurality of different hotspot classes, such that the second initial 3D hotspot map is a multi-class 3D hotspot map in which each hotspot volume is labeled as belonging to a particular one of the plurality of different hotspot classes (e.g., so as to distinguish between hotspot volumes corresponding to different hotspot classes, as well as between hotspot volumes and background regions); and merging, by the processor, the first initial 3D hotspot map with the second initial 3D hotspot map by, for at least a portion of the hotspot volumes identified by the first initial 3D hotspot map: identifying a matching hotspot volume of the second initial 3D hotspot map (e.g., by identifying substantially overlapping hotspot volumes of the first and second initial 3D hotspot maps), the matching hotspot volume of the second 3D hotspot map having been labeled as belonging to a particular hotspot class of the plurality of different hotspot classes; and labeling the particular hotspot volume of the first initial 3D hotspot map as belonging to that particular hotspot class (e.g., the class to which the matching hotspot volume is labeled as belonging), thereby creating a merged 3D hotspot map comprising the segmented hotspot volumes of the first 3D hotspot map labeled according to the classes to which the matching hotspot volumes of the second 3D hotspot map were identified as belonging; and step (e) comprises storing and/or providing the merged 3D hotspot map for display and/or further processing.
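The merging step can be illustrated on label arrays: each hotspot volume from the single-class map inherits the class of its matching (overlapping) volume in the multi-class map. The majority vote over overlapping voxels is an assumed tie-break; the claim only requires identifying a matching volume.

```python
import numpy as np

def merge_hotspot_maps(single_class, multi_class):
    """single_class: int array, 0 = background, k > 0 = hotspot-volume id
    (one class for all hotspots, from the first machine learning module).
    multi_class: int array, 0 = background, values 1..C = hotspot class
    labels (from the second machine learning module).
    Returns the merged map: the first map's segmented volumes, each labeled
    with the class of its overlapping volume in the second map."""
    merged = np.zeros_like(single_class)
    for hid in np.unique(single_class):
        if hid == 0:
            continue  # skip background
        vol = single_class == hid
        overlap = multi_class[vol]
        overlap = overlap[overlap > 0]  # overlapping, labeled voxels only
        if overlap.size:
            cls = np.bincount(overlap).argmax()  # majority class (assumed)
            merged[vol] = cls
    return merged
```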

In certain embodiments, the plurality of different hotspot classes comprises one or more members selected from the group consisting of: (i) bone hotspots, determined (e.g., by the second machine learning module) to represent lesions located in bone; (ii) lymph hotspots, determined (e.g., by the second machine learning module) to represent lesions located in lymph nodes; and (iii) prostate hotspots, determined (e.g., by the second machine learning module) to represent lesions located in the prostate.

In certain embodiments, the method further comprises: (f) receiving and/or accessing the hotspot list; and (g) for each hotspot in the hotspot list, segmenting the hotspot using an analytical model [e.g., thereby creating a 3D map of analytically segmented hotspots (e.g., the 3D map identifying, for each hotspot, a hotspot volume comprising voxels of the 3D anatomical image and/or functional image enclosed by the segmented hotspot region)].

In certain embodiments, the method further comprises: (h) receiving and/or accessing the hotspot map; and (i) for each hotspot in the hotspot map, segmenting the hotspot using an analytical model [e.g., thereby creating a 3D map of analytically segmented hotspots (e.g., the 3D map identifying, for each hotspot, a hotspot volume comprising voxels of the 3D anatomical image and/or functional image enclosed by the segmented hotspot region)].

In certain embodiments, the analytical model is an adaptive thresholding method, and step (i) comprises: determining one or more reference values, each based on a measure of intensity of voxels of the 3D functional image located within a particular reference volume corresponding to a particular reference tissue region (e.g., a blood-pool reference value determined based on intensities within an aorta volume corresponding to a portion of the subject's aorta; e.g., a liver reference value determined based on intensities within a liver volume corresponding to the subject's liver); and, for each particular hotspot volume of the 3D hotspot map: determining, by the processor, a corresponding hotspot intensity based on intensities of voxels within the particular hotspot volume [e.g., wherein the hotspot intensity is a maximum of the intensities (e.g., representing SUV) of voxels within the particular hotspot volume]; and determining, by the processor, for the particular hotspot, a hotspot-specific threshold value based on (i) the corresponding hotspot intensity and (ii) at least one of the one or more reference values.

In certain embodiments, the hotspot-specific threshold value is determined using a particular thresholding function selected from a plurality of thresholding functions, the particular thresholding function selected based on a comparison of the corresponding hotspot intensity with the at least one reference value [e.g., wherein each of the plurality of thresholding functions is associated with a particular range of intensity (e.g., SUV) values, and the particular thresholding function is selected according to the particular range into which the hotspot intensity, and/or a (e.g., predetermined) percentage thereof, falls (e.g., and wherein each particular range of intensity values is at least in part delimited by multiples of the at least one reference value)].

In certain embodiments, the hotspot-specific threshold value is determined (e.g., by the particular thresholding function) as a variable percentage of the corresponding hotspot intensity, wherein the variable percentage decreases with increasing hotspot intensity [e.g., wherein the variable percentage is itself a function (e.g., a decreasing function) of the corresponding hotspot intensity].

In another aspect, the invention relates to a method for automated processing of 3D images of a subject to identify and/or characterize (e.g., grade; e.g., classify; e.g., as representing a particular lesion type) cancerous lesions within the subject, the method comprising: (a) receiving (e.g., and/or accessing), by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality [e.g., positron emission tomography (PET); e.g., single-photon emission computed tomography (SPECT)]; (b) automatically segmenting, by the processor, using a first machine learning module, a first initial set of one or more hotspots within the 3D functional image, thereby creating a first initial 3D hotspot map identifying a first initial set of hotspot volumes, wherein the first machine learning module segments hotspots of the 3D functional image according to a single hotspot class [e.g., identifying all hotspots as belonging to a single hotspot class, so as to distinguish background regions from hotspot volumes (e.g., but not to distinguish between different types of hotspots) (e.g., such that each hotspot volume identified by the first 3D hotspot map is labeled as belonging to a single hotspot class, as opposed to background)]; (c) automatically segmenting, by the processor, using a second machine learning module, a second initial set of one or more hotspots within the 3D functional image, thereby creating a second initial 3D hotspot map identifying a second initial set of hotspot volumes, wherein the second machine learning module segments the 3D functional image according to a plurality of different hotspot classes [e.g., each hotspot class identifying a particular anatomical and/or tissue region (e.g., lymph, bone, prostate) in which the lesion represented by the hotspot is determined to be located], such that the second initial 3D hotspot map is a multi-class 3D hotspot map in which each hotspot volume is labeled as belonging to a particular one of the plurality of different hotspot classes (e.g., so as to distinguish between hotspot volumes corresponding to different hotspot classes, as well as between hotspot volumes and background regions); (d) merging, by the processor, the first initial 3D hotspot map with the second initial 3D hotspot map by, for each particular hotspot volume of at least a portion of the first initial set of hotspot volumes identified by the first initial 3D hotspot map: identifying a matching hotspot volume of the second initial 3D hotspot map (e.g., by identifying substantially overlapping hotspot volumes of the first and second initial 3D hotspot maps), the matching hotspot volume of the second 3D hotspot map having been labeled as belonging to a particular hotspot class of the plurality of different hotspot classes; and labeling the particular hotspot volume of the first initial 3D hotspot map as belonging to that particular hotspot class (the class to which the matching hotspot volume is labeled as belonging), thereby creating a merged 3D hotspot map comprising the segmented hotspot volumes of the first 3D hotspot map labeled according to the classes to which the matching hotspots of the second 3D hotspot map were identified as belonging; and (e) storing and/or providing the merged 3D hotspot map for display and/or further processing.

In certain embodiments, the plurality of different hotspot classes comprises one or more members selected from the group consisting of: (i) bone hotspots, determined (e.g., by the second machine learning module) to represent lesions located in bone; (ii) lymph hotspots, determined (e.g., by the second machine learning module) to represent lesions located in lymph nodes; and (iii) prostate hotspots, determined (e.g., by the second machine learning module) to represent lesions located in the prostate.

In another aspect, the invention relates to a method for automated processing of 3D images of a subject via an adaptive thresholding approach to identify and/or characterize (e.g., grade; e.g., classify; e.g., as representing a particular lesion type) cancerous lesions within the subject, the method comprising: (a) receiving (e.g., and/or accessing), by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality [e.g., positron emission tomography (PET); e.g., single-photon emission computed tomography (SPECT)]; (b) receiving (e.g., and/or accessing), by the processor, a preliminary 3D hotspot map identifying one or more preliminary hotspot volumes within the 3D functional image; (c) determining, by the processor, one or more reference values, each based on a measure of intensity of voxels of the 3D functional image located within a particular reference volume corresponding to a particular reference tissue region (e.g., a blood-pool reference value determined based on intensities within an aorta volume corresponding to a portion of the subject's aorta; e.g., a liver reference value determined based on intensities within a liver volume corresponding to the subject's liver); (d) creating, by the processor, a refined 3D hotspot map based on the preliminary hotspot volumes and using adaptive-threshold-based segmentation, by, for each particular preliminary hotspot volume of at least a portion of the one or more preliminary hotspot volumes identified by the preliminary 3D hotspot map: determining a corresponding hotspot intensity based on intensities of voxels within the particular preliminary hotspot volume [e.g., wherein the hotspot intensity is a maximum of the intensities (e.g., representing SUV) of voxels within the particular preliminary hotspot volume]; determining, for the particular preliminary hotspot volume, a hotspot-specific threshold value based on (i) the corresponding hotspot intensity and (ii) at least one of the one or more reference values; and segmenting at least a portion of the 3D functional image (e.g., a sub-volume about the particular preliminary hotspot volume) using a threshold-based segmentation algorithm that performs image segmentation using the hotspot-specific threshold value determined for the particular preliminary hotspot volume [e.g., and identifying a cluster of voxels having intensities above the hotspot-specific threshold value and including a maximum-intensity voxel of the preliminary hotspot (e.g., a 3D cluster of voxels connected to each other in an n-connected-component fashion (e.g., where n = 6, n = 18, etc.))], thereby determining a refined, analytically segmented hotspot volume corresponding to the particular preliminary hotspot volume, and including the refined hotspot volume in the refined 3D hotspot map; and (e) storing and/or providing the refined 3D hotspot map for display and/or further processing.

In certain embodiments, the hotspot-specific threshold value is determined using a particular thresholding function selected from a plurality of thresholding functions, the particular thresholding function selected based on a comparison of the corresponding hotspot intensity with the at least one reference value [e.g., wherein each of the plurality of thresholding functions is associated with a particular range of intensity (e.g., SUV) values, and the particular thresholding function is selected according to the particular range into which the hotspot intensity, and/or a (e.g., predetermined) percentage thereof, falls (e.g., and wherein each particular range of intensity values is at least in part delimited by multiples of the at least one reference value)].

In certain embodiments, the hotspot-specific threshold value is determined (e.g., by the particular thresholding function) as a variable percentage of the corresponding hotspot intensity, wherein the variable percentage decreases with increasing hotspot intensity [e.g., wherein the variable percentage is itself a function (e.g., a decreasing function) of the corresponding hotspot intensity].
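Taken together, the adaptive thresholding of the preceding embodiments — a hotspot-specific threshold computed as a variable percentage of the hotspot's peak intensity, followed by threshold-based segmentation keeping the n-connected voxel cluster containing the peak voxel — can be sketched as follows (2D slice, 4-connectivity for brevity). The particular percentage breakpoints relative to the blood-pool reference are invented for illustration and are not the patented values.

```python
import numpy as np
from collections import deque

def adaptive_segment(img, seed, blood_ref):
    """Segment one hotspot around `seed` (the peak voxel) with a
    hotspot-specific threshold: a variable percentage of the peak
    intensity that decreases as the peak rises relative to the
    blood-pool reference (breakpoints are assumed, not from the patent)."""
    peak = img[seed]  # hotspot intensity = maximum intensity (e.g., SUV max)
    if peak < 2 * blood_ref:
        pct = 0.9
    elif peak < 4 * blood_ref:
        pct = 0.7
    else:
        pct = 0.5
    thr = pct * peak  # hotspot-specific threshold value
    # Keep the 4-connected cluster above thr that contains the peak voxel
    mask = np.zeros(img.shape, dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and not mask[ny, nx] and img[ny, nx] >= thr):
                mask[ny, nx] = True
                q.append((ny, nx))
    return mask, thr
```

Because the percentage shrinks for brighter hotspots, intense lesions are segmented at a relatively lower fraction of their peak, which stabilizes the delineated volume across a wide SUV range.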

在另一態樣中，本發明係關於一種用於自動化處理受試者之3D影像以識別及/或表徵化(例如，分級;例如，分類;例如，如表示特定病變類型)該受試者內之癌性病變之方法，該方法包括：(a)藉由運算裝置之處理器接收(例如，及/或存取)使用解剖成像模態[例如，x射線電腦斷層掃描攝影術(CT);例如，磁共振成像(MRI);例如，超聲波]獲得的該受試者之3D解剖影像，其中該3D解剖影像包括該受試者內之組織(例如，軟組織及/或骨)之圖形表示;(b)藉由該處理器自動分割該3D解剖影像以建立3D分割圖，該3D分割圖識別該3D解剖影像中之複數個所關注體積(VOI)，包含對應於該受試者之肝臟之肝臟體積及對應於主動脈部分(例如，胸及/或腹部分)之主動脈體積;(c)藉由該處理器接收(例如，及/或存取)使用功能成像模態[例如，正電子發射斷層掃描攝影術(PET);例如，單光子發射電腦斷層掃描攝影術(SPECT)]獲得的該受試者之3D功能影像[例如，其中該3D功能影像包括複數個體素，各體素表示該受試者內之特定實體體積且具有表示自該特定實體體積發射之經偵測輻射之強度值，其中該3D功能影像之該複數個體素之至少部分表示目標組織區域內之實體體積];(d)藉由該處理器自動分割該3D功能影像內之一或多個熱點，各經分割熱點對應於相對於其周圍提高強度之局部區域且表示(例如，指示)該受試者內之潛在癌性病變，從而識別一或多個經自動分割之熱點體積;(e)藉由該處理器引起轉列該一或多個經自動分割之熱點體積之圖形表示以用於在互動式圖形使用者介面(GUI) (例如，品質控制及報告GUI)內顯示;(f)藉由該處理器經由該互動式GUI接收包括至少一部分該一或多個經自動分割之熱點體積(例如，多達全部)之最終熱點集合的使用者選擇;(g)藉由該處理器針對該最終集合之各熱點體積基於(i)對應於該熱點體積(例如，定位於該熱點體積內)之該功能影像之體素之強度及(ii)使用對應於該肝臟體積及該主動脈體積之該功能影像之體素之強度判定的一或多個參考值判定病變指數值;及(e)儲存及/或提供最終熱點集合及/或病變指數值以供顯示及/或進一步處理。In another aspect, the invention is directed to a method for automated processing of 3D images of a subject to identify and/or characterize (e.g., grade; e.g., classify; e.g., as representing a particular lesion type) cancerous lesions within the subject, the method comprising: (a) receiving (e.g., and/or accessing), by a processor of a computing device, a 3D anatomical image of the subject obtained using an anatomical imaging modality [e.g., x-ray computed tomography (CT); e.g., magnetic resonance imaging (MRI); e.g., ultrasound], wherein the 3D anatomical image comprises a graphical representation of tissue (e.g., soft tissue and/or bone) within the subject; (b) automatically segmenting, by the processor, the 3D anatomical image to create a 3D segmentation map that identifies a plurality of volumes of interest (VOIs) in the 3D anatomical image, including a liver volume corresponding to the subject's liver and an aorta volume corresponding to a portion (e.g., a thoracic and/or abdominal portion) of the aorta; (c) receiving (e.g., and/or accessing), by the processor, a 3D functional image of the subject obtained using a functional imaging modality [e.g., positron emission tomography (PET); e.g., single-photon emission computed tomography (SPECT)] [e.g., wherein the 3D functional image comprises a plurality of voxels, each voxel representing a particular physical volume within the subject and having an intensity value representing detected radiation emitted from the particular physical volume, wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within a target tissue region]; (d) automatically segmenting, by the processor, one or more hotspots within the 3D functional image, each segmented hotspot corresponding to a local region of elevated intensity relative to its surroundings and representing (e.g., indicating) a potential cancerous lesion within the subject, thereby identifying one or more automatically segmented hotspot volumes; (e) causing, by the processor, rendering of a graphical representation of the one or more automatically segmented hotspot volumes for display within an interactive graphical user interface (GUI) (e.g., a quality control and reporting GUI); (f) receiving, by the processor, via the interactive GUI, a user selection of a final hotspot set comprising at least a portion (e.g., up to all) of the one or more automatically segmented hotspot volumes; (g) determining, by the processor, for each hotspot volume of the final set, a lesion index value based on (i) intensities of voxels of the functional image corresponding to (e.g., located within) the hotspot volume and (ii) one or more reference values determined using intensities of voxels of the functional image corresponding to the liver volume and the aorta volume; and (e) storing and/or providing the final hotspot set and/or lesion index values for display and/or further processing.

在某些實施例中，步驟(b)包括分割該解剖影像使得該3D分割圖識別對應於該受試者之一或多個骨之一或多個骨體積，且步驟(d)包括在該功能影像內使用該一或多個骨體積識別骨骼體積及分割定位於該骨骼體積內之一或多個骨熱點體積(例如，藉由應用高斯濾波器之一或多個差值及對骨骼體積定限)。In certain embodiments, step (b) comprises segmenting the anatomical image such that the 3D segmentation map identifies one or more bone volumes corresponding to one or more bones of the subject, and step (d) comprises using the one or more bone volumes to identify a skeletal volume within the functional image and segmenting one or more bone hotspot volumes located within the skeletal volume (e.g., by applying one or more difference-of-Gaussians filters and thresholding over the skeletal volume).
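A minimal sketch of difference-of-Gaussians filtering of the kind referred to above, shown on a 1D intensity profile for brevity (in practice it would run over the 3D voxels restricted to the skeletal volume, followed by thresholding). The sigma values and the 0.5 candidate cutoff are illustrative assumptions, not values from the patent.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 1D Gaussian kernel."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def difference_of_gaussians(profile, sigma_fine=1.0, sigma_coarse=3.0):
    """Band-pass the profile: narrow hotspot-like bumps survive,
    smooth background is suppressed."""
    radius = int(3 * sigma_coarse)
    fine = np.convolve(profile, gaussian_kernel(sigma_fine, radius), mode="same")
    coarse = np.convolve(profile, gaussian_kernel(sigma_coarse, radius), mode="same")
    return fine - coarse

# Synthetic profile: flat background with a narrow bump centered at index 25.
profile = np.ones(50)
profile[24:27] += 5.0
response = difference_of_gaussians(profile)
# Candidate hotspot locations: strong positive band-pass response.
candidates = np.where(response > 0.5 * response.max())[0]
```

The coarse-blurred copy acts as a local background estimate; subtracting it makes the subsequent threshold insensitive to slowly varying uptake.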

在某些實施例中，步驟(b)包括分割該解剖影像使得該3D分割圖識別對應於該受試者之軟組織器官[例如，左/右肺、左/右臀大肌、尿膀胱、肝臟、左/右腎、膽囊、脾、胸及腹主動脈，且視需要(例如，針對未經歷根治性前列腺切除術之患者)前列腺]之一或多個器官體積，且步驟(d)包括在該功能影像內使用該一或多個經分割之器官體積識別一或多個軟組織(例如，淋巴且視需要前列腺)體積及分割定位於該軟組織體積內之一或多個淋巴及/或前列腺熱點體積(例如，藉由應用高斯濾波器之一或多個拉普拉斯算子(Laplacian)及對軟組織體積定限)。In certain embodiments, step (b) comprises segmenting the anatomical image such that the 3D segmentation map identifies one or more organ volumes corresponding to soft-tissue organs of the subject [e.g., left/right lung, left/right gluteus maximus, urinary bladder, liver, left/right kidney, gallbladder, spleen, thoracic and abdominal aorta, and, optionally (e.g., for patients who have not undergone radical prostatectomy), prostate], and step (d) comprises using the one or more segmented organ volumes to identify one or more soft-tissue (e.g., lymph and, optionally, prostate) volumes within the functional image and segmenting one or more lymph and/or prostate hotspot volumes located within the soft-tissue volume (e.g., by applying one or more Laplacian-of-Gaussian filters and thresholding over the soft-tissue volume).

在某些實施例中，步驟(d)進一步包括，在分割該一或多個淋巴及/或前列腺熱點體積之前，調整該功能影像之強度以抑制來自一或多個高攝取組織區域之強度(例如，使用本文中所描述之一或多個抑制方法)。In certain embodiments, step (d) further comprises, prior to segmenting the one or more lymph and/or prostate hotspot volumes, adjusting intensities of the functional image to suppress intensity originating from one or more high-uptake tissue regions (e.g., using one or more of the suppression methods described herein).

在某些實施例中,步驟(g)包括使用對應於該肝臟體積之該功能影像之體素之強度來判定肝臟參考值。In certain embodiments, step (g) includes determining a liver reference value using the intensities of the voxels of the functional image corresponding to the liver volume.

在某些實施例中，方法包括將雙組分高斯混合模型擬合至對應於該肝臟體積之功能影像體素之強度的直方圖，使用該雙組分高斯混合模型擬合以自該肝臟體積識別及排除具有與異常低攝取之區域相關聯的強度的體素，及使用剩餘(例如，未排除之)體素之強度判定肝臟參考值。In certain embodiments, the method comprises fitting a two-component Gaussian mixture model to a histogram of intensities of the functional-image voxels corresponding to the liver volume, using the two-component Gaussian mixture model fit to identify and exclude, from the liver volume, voxels having intensities associated with regions of abnormally low uptake, and determining the liver reference value using intensities of the remaining (e.g., non-excluded) voxels.
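The two-component Gaussian-mixture step can be sketched as below with a simplified 1D EM fit on synthetic SUVs. In practice a library implementation (e.g., scikit-learn's `GaussianMixture`) fit to a histogram would normally be used; all constants here, and the 0.5 responsibility cutoff, are illustrative assumptions.

```python
import numpy as np

def liver_reference(suv, n_iter=200):
    """Fit two 1D Gaussians to liver-voxel SUVs by EM, drop voxels
    assigned to the lower-mean component (abnormally low uptake, e.g.
    cysts), and return the mean of the remaining voxels as the liver
    reference. Minimal EM for illustration only."""
    suv = np.asarray(suv, dtype=float)
    mu = np.array([suv.min(), suv.max()])            # crude initialisation
    var = np.array([suv.var(), suv.var()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each voxel
        resp = pi * np.exp(-0.5 * (suv[:, None] - mu) ** 2 / var) \
               / np.sqrt(2 * np.pi * var)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances
        nk = resp.sum(axis=0)
        pi = nk / len(suv)
        mu = (resp * suv[:, None]).sum(axis=0) / nk
        var = (resp * (suv[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    keep = resp[:, np.argmax(mu)] > 0.5   # normal-uptake component
    return suv[keep].mean()

rng = np.random.default_rng(0)
suv = np.concatenate([rng.normal(1.0, 0.2, 200),   # abnormally low uptake
                      rng.normal(5.0, 0.5, 800)])  # normal liver uptake
ref = liver_reference(suv)
```

Excluding the low-uptake component keeps the reference from being dragged down by regions (e.g., cysts or prior treatment effects) that do not reflect normal liver uptake.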

在另一態樣中，本發明係關於一種用於自動化處理受試者之3D影像以識別及/或表徵化(例如，分級;例如，分類;例如，如表示特定病變類型)該受試者內之癌性病變之系統，該系統包括：運算裝置之處理器;及具有儲存於其上之指令之記憶體，其中該等指令在藉由該處理器執行時引起該處理器：(a)接收(例如，及/或存取)使用功能成像模態獲得的該受試者之3D功能影像[例如，正電子發射斷層掃描攝影術(PET);例如，單光子發射電腦斷層掃描攝影術(SPECT)];(b)接收(例如，及/或存取)使用解剖成像模態獲得的該受試者之3D解剖影像[例如，電腦斷層掃描攝影術(CT)影像;磁共振(MR)影像];(c)接收(例如，及/或存取) 3D分割圖，該3D分割圖識別該3D功能影像內及/或該3D解剖影像內之一或多個特定組織區域或組織區域群組(例如，對應於特定解剖區域之一組組織區域;例如，包括其中發生高或低放射性藥物攝取之器官之一群組組織區域);(d)使用一或多個機器學習模組自動偵測及/或分割該3D功能影像內之一或多個熱點之集合，各熱點對應於相對於其周圍提高強度之局部區域且表示受試者內之潛在癌性病變，從而建立如下(i)及(ii)之一或兩者：(i)熱點清單，其針對各熱點識別該熱點之位置[例如，如藉由一或多個機器學習模組偵測]，及(ii) 3D熱點圖，其針對各熱點識別該3D功能影像內之對應3D熱點體積[例如，如經由藉由一或多個機器學習模組執行之分割來判定] [例如，其中3D熱點圖係分割圖，其針對各熱點描繪該熱點之3D熱點邊界(例如，不規則邊界) (例如，該3D邊界圍封該3D熱點體積)]，其中一或多個機器學習模組之至少一者(例如，多達全部)接收(i) 3D功能影像、(ii) 3D解剖影像及(iii) 3D分割圖作為輸入;及(e)儲存及/或提供熱點清單及/或3D熱點圖以供顯示及/或進一步處理。In another aspect, the invention is directed to a system for automated processing of 3D images of a subject to identify and/or characterize (e.g., grade; e.g., classify; e.g., as representing a particular lesion type) cancerous lesions within the subject, the system comprising: a processor of a computing device; and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to: (a) receive (e.g., and/or access) a 3D functional image of the subject obtained using a functional imaging modality [e.g., positron emission tomography (PET); e.g., single-photon emission computed tomography (SPECT)]; (b) receive (e.g., and/or access) a 3D anatomical image of the subject obtained using an anatomical imaging modality [e.g., a computed tomography (CT) image; a magnetic resonance (MR) image]; (c) receive (e.g., and/or access) a 3D segmentation map that identifies, within the 3D functional image and/or within the 3D anatomical image, one or more particular tissue regions or groups of tissue regions (e.g., a group of tissue regions corresponding to a particular anatomical region; e.g., a group of tissue regions comprising organs in which high or low radiopharmaceutical uptake occurs); (d) automatically detect and/or segment, using one or more machine learning modules, a set of one or more hotspots within the 3D functional image, each hotspot corresponding to a local region of elevated intensity relative to its surroundings and representing a potential cancerous lesion within the subject, thereby creating one or both of: (i) a hotspot list identifying, for each hotspot, a location of the hotspot [e.g., as detected by the one or more machine learning modules], and (ii) a 3D hotspot map identifying, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image [e.g., as determined via segmentation performed by the one or more machine learning modules] [e.g., wherein the 3D hotspot map is a segmentation map that delineates, for each hotspot, a 3D hotspot boundary (e.g., an irregular boundary) of the hotspot (e.g., the 3D boundary enclosing the 3D hotspot volume)], wherein at least one (e.g., up to all) of the one or more machine learning modules receives (i) the 3D functional image, (ii) the 3D anatomical image, and (iii) the 3D segmentation map as input; and (e) store and/or provide the hotspot list and/or the 3D hotspot map for display and/or further processing.

在某些實施例中，指令引起處理器：接收初始3D分割圖，該初始3D分割圖識別該3D解剖影像及/或該3D功能影像內之一或多個(例如，複數個)特定組織區域(例如，器官及/或特定骨);將至少一部分該一或多個特定組織區域識別為屬於一或多個組織群組(例如，預定義之群組)之特定者及更新該3D分割圖以將該等經識別特定區域指示為屬於該特定組織群組;及使用該經更新之3D分割圖作為送至該一或多個機器學習模組之至少一者之輸入。In certain embodiments, the instructions cause the processor to: receive an initial 3D segmentation map that identifies one or more (e.g., a plurality of) particular tissue regions (e.g., organs and/or particular bones) within the 3D anatomical image and/or the 3D functional image; identify at least a portion of the one or more particular tissue regions as belonging to particular ones of one or more tissue groups (e.g., predefined groups) and update the 3D segmentation map to indicate the identified particular regions as belonging to the particular tissue groups; and use the updated 3D segmentation map as input to at least one of the one or more machine learning modules.

在某些實施例中，該一或多個組織群組包括軟組織群組，使得表示軟組織之特定組織區域經識別為屬於該軟組織群組。在某些實施例中，該一或多個組織群組包括骨組織群組，使得表示骨之特定組織區域經識別為屬於該骨組織群組。在某些實施例中，該一或多個組織群組包括高攝取器官群組，使得與高放射性藥物攝取(例如，在正常情況下，且不一定歸因於存在病變)相關聯的一或多個器官經識別為屬於該高攝取群組。In certain embodiments, the one or more tissue groups comprise a soft-tissue group, such that particular tissue regions representing soft tissue are identified as belonging to the soft-tissue group. In certain embodiments, the one or more tissue groups comprise a bone-tissue group, such that particular tissue regions representing bone are identified as belonging to the bone-tissue group. In certain embodiments, the one or more tissue groups comprise a high-uptake organ group, such that one or more organs associated with high radiopharmaceutical uptake (e.g., under normal circumstances, and not necessarily due to the presence of lesions) are identified as belonging to the high-uptake group.
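The regrouping of per-organ labels into tissue groups can be sketched as a simple label lookup. The region and group names below are hypothetical examples for illustration, not the patent's actual label set.

```python
# Hypothetical region-to-group assignments.
TISSUE_GROUPS = {
    "soft_tissue": {"left_lung", "right_lung", "spleen", "gallbladder"},
    "bone":        {"skull", "pelvis", "vertebra_L1"},
    "high_uptake": {"liver", "left_kidney", "right_kidney", "urinary_bladder"},
}

def regroup(segmentation):
    """Replace per-organ labels in a {voxel: region} segmentation map
    with the tissue-group label each region belongs to; regions not in
    any group keep their original label."""
    lookup = {region: group
              for group, regions in TISSUE_GROUPS.items()
              for region in regions}
    return {voxel: lookup.get(region, region)
            for voxel, region in segmentation.items()}

seg = {(1, 2, 3): "spleen", (4, 5, 6): "pelvis", (7, 8, 9): "liver"}
grouped = regroup(seg)
```

Collapsing dozens of organ labels into a few groups gives the downstream machine learning modules a coarser, more learnable input channel.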

在某些實施例中，指令引起處理器，針對各經偵測及/或分割之熱點，判定該熱點之分類[例如，根據解剖位置，例如，將病變分類為骨、淋巴或前列腺，例如，基於熱點相對於受試者之經判定(例如，藉由處理器)位置指派文數字碼，諸如表1中之標記方案]。In certain embodiments, the instructions cause the processor to determine, for each detected and/or segmented hotspot, a classification of the hotspot [e.g., according to anatomical location, e.g., classifying the lesion as bone, lymph, or prostate, e.g., assigning an alphanumeric code based on a location of the hotspot, determined (e.g., by the processor) relative to the subject, such as the labeling scheme of Table 1].

在某些實施例中，指令引起處理器使用一或多個機器學習模組之至少一者以針對各經偵測及/或分割之熱點判定該熱點之該分類(例如，其中單個機器學習模組執行偵測、分割及分類)。In certain embodiments, the instructions cause the processor to use at least one of the one or more machine learning modules to determine, for each detected and/or segmented hotspot, the classification of the hotspot (e.g., wherein a single machine learning module performs detection, segmentation, and classification).

在某些實施例中，該一或多個機器學習模組包括：(A)偵測及/或分割遍及整個身體之熱點之全身病變偵測模組;及(B)偵測及/或分割前列腺內之熱點之前列腺病變模組。在某些實施例中，指令引起處理器使用(A)及(B)之各者產生熱點清單及/或圖且合併結果。In certain embodiments, the one or more machine learning modules comprise: (A) a whole-body lesion detection module that detects and/or segments hotspots throughout the entire body; and (B) a prostate lesion module that detects and/or segments hotspots within the prostate. In certain embodiments, the instructions cause the processor to generate a hotspot list and/or map using each of (A) and (B) and to merge the results.

在某些實施例中，在步驟(d)，指令引起處理器藉由以下操作對一或多個熱點之集合進行分割及分類以建立經標記之3D熱點圖，該經標記之3D熱點圖針對各熱點識別該3D功能影像內之對應3D熱點體積且其中各熱點經標記為屬於複數個熱點類別之特定熱點類別[例如，各熱點類別識別藉由熱點表示之病變經判定所定位之特定解剖及/或組織區域(例如，淋巴、骨、前列腺)]：使用第一機器學習模組分割該3D功能影像內之一或多個熱點之第一初始集合，從而建立識別第一初始熱點體積集合之第一初始3D熱點圖，其中該第一機器學習模組根據單個熱點類別分割該3D功能影像之熱點[例如，將所有熱點識別為屬於單個熱點類別，以便區分背景區域與熱點體積(例如，但不區分不同類型之熱點) (例如，使得藉由該第一3D熱點圖識別之各熱點體積經標記為屬於一單個熱點類別，如與背景相反)];使用第二機器學習模組分割該3D功能影像內之一或多個熱點之第二初始集合，從而建立識別第二初始熱點體積集合之第二初始3D熱點圖，其中該第二機器學習模組根據複數個不同熱點類別分割該3D功能影像，使得該第二初始3D熱點圖係多類別3D熱點圖，其中各熱點體積經標記為屬於複數個不同熱點類別之特定者(例如，以便區分對應於不同熱點類別之熱點體積，以及區分熱點體積與背景區域);及藉由針對藉由該第一初始3D熱點圖識別之至少一部分該等熱點體積進行以下操作來合併該第一初始3D熱點圖與該第二初始3D熱點圖：識別該第二初始3D熱點圖之匹配熱點體積(例如，藉由識別該等第一及第二初始3D熱點圖之實質上重疊熱點體積)，該第二3D熱點圖之該匹配熱點體積已經標記為屬於複數個不同熱點類別之特定熱點類別;及將該第一初始3D熱點圖之該特定熱點體積標記為屬於(該匹配熱點體積經標記為所屬於之)該特定熱點類別，從而建立經合併3D熱點圖，該經合併3D熱點圖包含已根據該第二3D熱點圖之匹配熱點經識別所屬類別來標記之該第一3D熱點圖之經分割熱點體積;且在步驟(e)，指令引起處理器儲存及/或提供該經合併之3D熱點圖以供顯示及/或進一步處理。In certain embodiments, at step (d), the instructions cause the processor to segment and classify the set of one or more hotspots, thereby creating a labeled 3D hotspot map that identifies, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image and wherein each hotspot is labeled as belonging to a particular hotspot class of a plurality of hotspot classes [e.g., each hotspot class identifying a particular anatomical and/or tissue region (e.g., lymph, bone, prostate) in which the lesion represented by the hotspot is determined to be located], by: segmenting, using a first machine learning module, a first initial set of one or more hotspots within the 3D functional image, thereby creating a first initial 3D hotspot map identifying a first initial set of hotspot volumes, wherein the first machine learning module segments hotspots of the 3D functional image according to a single hotspot class [e.g., identifying all hotspots as belonging to a single hotspot class, so as to differentiate background regions from hotspot volumes (e.g., but not differentiating between different types of hotspots) (e.g., such that each hotspot volume identified by the first 3D hotspot map is labeled as belonging to a single hotspot class, as opposed to background)]; segmenting, using a second machine learning module, a second initial set of one or more hotspots within the 3D functional image, thereby creating a second initial 3D hotspot map identifying a second initial set of hotspot volumes, wherein the second machine learning module segments the 3D functional image according to a plurality of different hotspot classes, such that the second initial 3D hotspot map is a multi-class 3D hotspot map in which each hotspot volume is labeled as belonging to a particular one of the plurality of different hotspot classes (e.g., so as to differentiate between hotspot volumes corresponding to different hotspot classes, as well as to differentiate hotspot volumes from background regions); and merging the first initial 3D hotspot map with the second initial 3D hotspot map by, for at least a portion of the hotspot volumes identified by the first initial 3D hotspot map: identifying a matching hotspot volume of the second initial 3D hotspot map (e.g., by identifying substantially overlapping hotspot volumes of the first and second initial 3D hotspot maps), the matching hotspot volume of the second 3D hotspot map having been labeled as belonging to a particular hotspot class of the plurality of different hotspot classes; and labeling the particular hotspot volume of the first initial 3D hotspot map as belonging to the particular hotspot class (to which the matching hotspot volume is labeled as belonging), thereby creating a merged 3D hotspot map comprising the segmented hotspot volumes of the first 3D hotspot map labeled according to the identified classes to which the matching hotspots of the second 3D hotspot map belong; and, at step (e), the instructions cause the processor to store and/or provide the merged 3D hotspot map for display and/or further processing.

在某些實施例中，該複數個不同熱點類別包括選自由以下各者組成之群組之一或多個成員：(i)骨熱點，其等經判定(例如，藉由第二機器學習模組)以表示定位於骨中之病變;(ii)淋巴熱點，其等經判定(例如，藉由第二機器學習模組)以表示定位於淋巴結中之病變;及(iii)前列腺熱點，其等經判定(例如，藉由第二機器學習模組)以表示定位於前列腺中之病變。In certain embodiments, the plurality of different hotspot classes comprises one or more members selected from the group consisting of: (i) bone hotspots, determined (e.g., by the second machine learning module) to represent lesions located in bone; (ii) lymph hotspots, determined (e.g., by the second machine learning module) to represent lesions located in lymph nodes; and (iii) prostate hotspots, determined (e.g., by the second machine learning module) to represent lesions located in the prostate.

在某些實施例中，指令進一步引起處理器：(f)接收及/或存取該熱點清單;及(g)針對該熱點清單中之各熱點，使用分析模型分割該熱點[例如，從而建立經分析分割之熱點之3D圖(例如，該3D圖針對各熱點識別由該經分割熱點區域圍封之包括該3D解剖影像及/或功能影像之體素的熱點體積)]。In certain embodiments, the instructions further cause the processor to: (f) receive and/or access the hotspot list; and (g) for each hotspot in the hotspot list, segment the hotspot using an analytical model [e.g., thereby creating a 3D map of analytically segmented hotspots (e.g., the 3D map identifying, for each hotspot, a hotspot volume comprising voxels of the 3D anatomical image and/or functional image enclosed by the segmented hotspot region)].

在某些實施例中，指令進一步引起處理器：(h)接收及/或存取該熱點圖;及(i)針對該熱點圖中之各熱點，使用分析模型分割該熱點[例如，從而建立經分析分割之熱點之3D圖(例如，該3D圖針對各熱點識別由該經分割熱點區域圍封之包括該3D解剖影像及/或功能影像之體素的熱點體積)]。In certain embodiments, the instructions further cause the processor to: (h) receive and/or access the hotspot map; and (i) for each hotspot in the hotspot map, segment the hotspot using an analytical model [e.g., thereby creating a 3D map of analytically segmented hotspots (e.g., the 3D map identifying, for each hotspot, a hotspot volume comprising voxels of the 3D anatomical image and/or functional image enclosed by the segmented hotspot region)].

在某些實施例中，該分析模型係自適應定限方法，且在步驟(i)，指令引起處理器：判定一或多個參考值，各參考值基於定位於對應於特定參考組織區域之特定參考體積內之該3D功能影像之體素之強度的量度(例如，血池參考值，其係基於對應於受試者之主動脈之部分之主動脈體積內之強度來判定;例如，肝臟參考值，其係基於對應於受試者之肝臟之肝臟體積內之強度來判定);及針對該3D熱點圖之各特定熱點體積：基於該特定熱點體積內之體素之強度來判定對應熱點強度[例如，其中該熱點強度係該特定熱點體積內之體素之強度(例如，表示SUV)之最大值];及針對該特定熱點，基於(i)該對應熱點強度及(ii)至少一個該一或多個參考值，判定熱點特定臨限值。In certain embodiments, the analytical model is an adaptive thresholding method, and at step (i), the instructions cause the processor to: determine one or more reference values, each reference value based on a measure of intensities of voxels of the 3D functional image located within a particular reference volume corresponding to a particular reference tissue region (e.g., a blood-pool reference value determined based on intensities within an aorta volume corresponding to a portion of the subject's aorta; e.g., a liver reference value determined based on intensities within a liver volume corresponding to the subject's liver); and, for each particular hotspot volume of the 3D hotspot map: determine a corresponding hotspot intensity based on intensities of voxels within the particular hotspot volume [e.g., wherein the hotspot intensity is a maximum of the intensities (e.g., representing SUVs) of voxels within the particular hotspot volume]; and determine, for the particular hotspot, a hotspot-specific threshold based on (i) the corresponding hotspot intensity and (ii) at least one of the one or more reference values.

在某些實施例中，該熱點特定臨限值係使用選自複數個定限函數之特定定限函數來判定，該特定定限函數基於該對應熱點強度與該至少一個參考值之比較來選擇[例如，其中該複數個定限函數之各者與強度(例如，SUV)值之特定範圍相關聯，且該特定定限函數係根據熱點強度及/或其之(例如，預定)百分比落入之該特定範圍來選擇(例如，且其中強度值之各特定範圍係至少部分由該至少一個參考值的倍數來定界)]。In certain embodiments, the hotspot-specific threshold is determined using a particular thresholding function selected from a plurality of thresholding functions, the particular thresholding function being selected based on a comparison of the corresponding hotspot intensity with the at least one reference value [e.g., wherein each of the plurality of thresholding functions is associated with a particular range of intensity (e.g., SUV) values, and the particular thresholding function is selected according to the particular range within which the hotspot intensity and/or a (e.g., predetermined) percentage thereof falls (e.g., and wherein each particular range of intensity values is bounded, at least in part, by multiples of the at least one reference value)].

在某些實施例中，該熱點特定臨限值係(例如，藉由該特定定限函數)判定為該對應熱點強度之可變百分比，其中該可變百分比隨著熱點強度增加而減小[例如，其中該可變百分比自身係對應熱點強度之函數(例如，遞減函數)]。In certain embodiments, the hotspot-specific threshold is determined (e.g., by the particular thresholding function) as a variable percentage of the corresponding hotspot intensity, wherein the variable percentage decreases with increasing hotspot intensity [e.g., wherein the variable percentage is itself a function (e.g., a decreasing function) of the corresponding hotspot intensity].

在另一態樣中，本發明係關於一種用於自動化處理受試者之3D影像以識別及/或表徵化(例如，分級;例如，分類;例如，如表示特定病變類型)該受試者內之癌性病變之系統，該系統包括：運算裝置之處理器;及具有儲存於其上之指令之記憶體，其中該等指令在藉由該處理器執行時引起該處理器：(a)接收(例如，及/或存取)使用功能成像模態獲得的該受試者之3D功能影像[例如，正電子發射斷層掃描攝影術(PET);例如，單光子發射電腦斷層掃描攝影術(SPECT)];(b)使用第一機器學習模組自動分割該3D功能影像內之一或多個熱點之第一初始集合，從而建立識別第一初始熱點體積集合、該3D功能影像內之對應3D熱點體積之第一初始3D熱點圖，其中該第一機器學習模組根據單個熱點類別分割該3D功能影像之熱點[例如，將所有熱點識別為屬於單個熱點類別，以便區分背景區域與熱點體積(例如，但不區分不同類型之熱點) (例如，使得藉由該第一3D熱點圖識別之各熱點體積經標記為屬於一單個熱點類別，如與背景相反)];(c)使用第二機器學習模組自動分割該3D功能影像內之一或多個熱點之第二初始集合，從而建立識別第二初始熱點體積集合之第二初始3D熱點圖，其中該第二機器學習模組根據複數個不同熱點類別[例如，各熱點類別識別藉由熱點表示之病變經判定所定位之特定解剖及/或組織區域(例如，淋巴、骨、前列腺)]分割該3D功能影像，使得該第二初始3D熱點圖係多類別3D熱點圖，其中各熱點體積經標記為屬於該複數個不同熱點類別之特定者(例如，以便區分對應於不同熱點類別之熱點體積，以及區分熱點體積與背景區域);(d)藉由針對藉由該第一初始3D熱點圖識別之至少一部分該第一初始熱點體積集合之各特定熱點體積進行以下操作來合併該第一初始3D熱點圖與該第二初始3D熱點圖：識別該第二初始3D熱點圖之匹配熱點體積(例如，藉由識別該等第一及第二初始3D熱點圖之實質上重疊熱點體積)，該第二3D熱點圖之該匹配熱點體積已經標記為屬於複數個不同熱點類別之特定熱點類別;及將該第一初始3D熱點圖之該特定熱點體積標記為屬於(該匹配熱點體積經標記為所屬於之)該特定熱點類別，從而建立經合併3D熱點圖，該經合併3D熱點圖包含已根據該第二3D熱點圖之匹配熱點識別所屬類別來標記之該第一3D熱點圖之經分割熱點體積;及(e)儲存及/或提供該經合併之3D熱點圖以供顯示及/或進一步處理。In another aspect, the invention is directed to a system for automated processing of 3D images of a subject to identify and/or characterize (e.g., grade; e.g., classify; e.g., as representing a particular lesion type) cancerous lesions within the subject, the system comprising: a processor of a computing device; and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to: (a) receive (e.g., and/or access) a 3D functional image of the subject obtained using a functional imaging modality [e.g., positron emission tomography (PET); e.g., single-photon emission computed tomography (SPECT)]; (b) automatically segment, using a first machine learning module, a first initial set of one or more hotspots within the 3D functional image, thereby creating a first initial 3D hotspot map identifying a first initial set of hotspot volumes, corresponding 3D hotspot volumes within the 3D functional image, wherein the first machine learning module segments hotspots of the 3D functional image according to a single hotspot class [e.g., identifying all hotspots as belonging to a single hotspot class, so as to differentiate background regions from hotspot volumes (e.g., but not differentiating between different types of hotspots) (e.g., such that each hotspot volume identified by the first 3D hotspot map is labeled as belonging to a single hotspot class, as opposed to background)]; (c) automatically segment, using a second machine learning module, a second initial set of one or more hotspots within the 3D functional image, thereby creating a second initial 3D hotspot map identifying a second initial set of hotspot volumes, wherein the second machine learning module segments the 3D functional image according to a plurality of different hotspot classes [e.g., each hotspot class identifying a particular anatomical and/or tissue region (e.g., lymph, bone, prostate) in which the lesion represented by the hotspot is determined to be located], such that the second initial 3D hotspot map is a multi-class 3D hotspot map in which each hotspot volume is labeled as belonging to a particular one of the plurality of different hotspot classes (e.g., so as to differentiate between hotspot volumes corresponding to different hotspot classes, as well as to differentiate hotspot volumes from background regions); (d) merge the first initial 3D hotspot map with the second initial 3D hotspot map by, for each particular hotspot volume of at least a portion of the first initial set of hotspot volumes identified by the first initial 3D hotspot map: identifying a matching hotspot volume of the second initial 3D hotspot map (e.g., by identifying substantially overlapping hotspot volumes of the first and second initial 3D hotspot maps), the matching hotspot volume of the second 3D hotspot map having been labeled as belonging to a particular hotspot class of the plurality of different hotspot classes; and labeling the particular hotspot volume of the first initial 3D hotspot map as belonging to the particular hotspot class (to which the matching hotspot volume is labeled as belonging), thereby creating a merged 3D hotspot map comprising the segmented hotspot volumes of the first 3D hotspot map labeled according to the identified classes to which the matching hotspots of the second 3D hotspot map belong; and (e) store and/or provide the merged 3D hotspot map for display and/or further processing.
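The merge step above can be sketched as follows, representing each hotspot volume as a set of voxel coordinates. The claim says only that matching uses "substantially overlapping" volumes; taking the largest overlap is one plausible realization, assumed here for illustration.

```python
def merge_heatmaps(single_class, multi_class):
    """Each hotspot volume from the single-class model keeps its contour
    but inherits the class label of the multi-class hotspot it overlaps
    most. `single_class`: {hotspot_id: set of (x, y, z)};
    `multi_class`: list of (label, set of (x, y, z))."""
    merged = {}
    for hotspot_id, volume in single_class.items():
        best_label, best_overlap = None, 0
        for label, other_volume in multi_class:
            overlap = len(volume & other_volume)   # shared voxels
            if overlap > best_overlap:
                best_label, best_overlap = label, overlap
        merged[hotspot_id] = (best_label, volume)
    return merged

single = {0: {(0, 0, 0), (0, 0, 1), (0, 1, 1)}}
multi = [("bone", {(0, 0, 0), (0, 0, 1)}), ("lymph", {(9, 9, 9)})]
merged = merge_heatmaps(single, multi)
```

This division of labor lets the single-class model supply the (typically more sensitive) contours while the multi-class model supplies the anatomical labels.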

在某些實施例中，該複數個不同熱點類別包括選自由以下各者組成之群組之一或多個成員：(i)骨熱點，其等經判定(例如，藉由第二機器學習模組)以表示定位於骨中之病變;(ii)淋巴熱點，其等經判定(例如，藉由第二機器學習模組)以表示定位於淋巴結中之病變;及(iii)前列腺熱點，其等經判定(例如，藉由第二機器學習模組)以表示定位於前列腺中之病變。In certain embodiments, the plurality of different hotspot classes comprises one or more members selected from the group consisting of: (i) bone hotspots, determined (e.g., by the second machine learning module) to represent lesions located in bone; (ii) lymph hotspots, determined (e.g., by the second machine learning module) to represent lesions located in lymph nodes; and (iii) prostate hotspots, determined (e.g., by the second machine learning module) to represent lesions located in the prostate.

在另一態樣中，本發明係關於一種用於經由自適應定限方法自動化處理受試者之3D影像以識別及/或表徵化(例如，分級;例如，分類;例如，如表示特定病變類型)該受試者內之癌性病變之系統，該系統包括：運算裝置之處理器;及具有儲存於其上之指令之記憶體，其中該等指令在藉由該處理器執行時引起該處理器：(a)接收(例如，及/或存取)使用功能成像模態獲得的該受試者之3D功能影像[例如，正電子發射斷層掃描攝影術(PET);例如，單光子發射電腦斷層掃描攝影術(SPECT)];(b)接收(例如，及/或存取)在該3D功能影像內識別一或多個初步熱點體積之初步3D熱點圖;(c)判定一或多個參考值，各參考值基於定位於對應於特定參考組織區域之特定參考體積內之該3D功能影像之體素之強度的量度(例如，血池參考值基於對應於受試者之主動脈之部分之主動脈體積內之強度來判定;例如，肝臟參考值基於對應於受試者之肝臟之肝臟體積內之強度來判定);(d)藉由針對藉由該初步3D熱點圖識別之至少一部分該一或多個初步熱點體積的各特定初步熱點體積進行以下操作而基於該等初步熱點體積及使用基於自適應定限之分割來建立精細化之3D熱點圖：基於該特定初步熱點體積內之體素之強度來判定對應熱點強度[例如，其中該熱點強度係該特定初步熱點體積內之體素之強度(例如，表示SUV)之最大值];及針對該特定初步熱點體積基於(i)該對應熱點強度及(ii)至少一個該一或多個參考值來判定熱點特定臨限值;使用基於定限之分割演算法分割至少一部分3D功能影像(例如，大約在特定初步熱點體積中之子體積)，該基於定限之分割演算法使用針對特定初步熱點判定之熱點特定臨限值執行影像分割[例如，及識別具有高於熱點特定臨限值之強度且包括初步熱點之最大強度體素之體素叢集(例如，以n連接組分方式彼此連接之體素之3D叢集(例如，其中n = 6、n = 18等))]，從而判定對應於特定初步熱點體積之精細化、分析分割之熱點體積;及將該精細化之熱點體積包含於該精細化之3D熱點圖中;及(e)儲存及/或提供該精細化之3D熱點圖以供顯示及/或進一步處理。In another aspect, the invention is directed to a system for automated processing of 3D images of a subject via an adaptive thresholding method to identify and/or characterize (e.g., grade; e.g., classify; e.g., as representing a particular lesion type) cancerous lesions within the subject, the system comprising: a processor of a computing device; and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to: (a) receive (e.g., and/or access) a 3D functional image of the subject obtained using a functional imaging modality [e.g., positron emission tomography (PET); e.g., single-photon emission computed tomography (SPECT)]; (b) receive (e.g., and/or access) a preliminary 3D hotspot map identifying one or more preliminary hotspot volumes within the 3D functional image; (c) determine one or more reference values, each reference value based on a measure of intensities of voxels of the 3D functional image located within a particular reference volume corresponding to a particular reference tissue region (e.g., a blood-pool reference value determined based on intensities within an aorta volume corresponding to a portion of the subject's aorta; e.g., a liver reference value determined based on intensities within a liver volume corresponding to the subject's liver); (d) create a refined 3D hotspot map based on the preliminary hotspot volumes using adaptive threshold-based segmentation, by performing the following for each particular preliminary hotspot volume of at least a portion of the one or more preliminary hotspot volumes identified by the preliminary 3D hotspot map: determining a corresponding hotspot intensity based on intensities of voxels within the particular preliminary hotspot volume [e.g., wherein the hotspot intensity is a maximum of the intensities (e.g., representing SUVs) of voxels within the particular preliminary hotspot volume]; determining a hotspot-specific threshold for the particular preliminary hotspot volume based on (i) the corresponding hotspot intensity and (ii) at least one of the one or more reference values; segmenting at least a portion of the 3D functional image (e.g., a sub-volume about the particular preliminary hotspot volume) using a threshold-based segmentation algorithm that performs image segmentation using the hotspot-specific threshold determined for the particular preliminary hotspot [e.g., identifying a cluster of voxels having intensities above the hotspot-specific threshold and including the highest-intensity voxel of the preliminary hotspot (e.g., a 3D cluster of voxels connected to each other in an n-connected-component fashion (e.g., where n = 6, n = 18, etc.))], thereby determining a refined, analytically segmented hotspot volume corresponding to the particular preliminary hotspot volume; and including the refined hotspot volume in the refined 3D hotspot map; and (e) store and/or provide the refined 3D hotspot map for display and/or further processing.
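The threshold-based segmentation with n = 6 connectivity described above can be sketched as region growing from the preliminary hotspot's hottest voxel: collect the 6-connected cluster of voxels whose intensity exceeds the hotspot-specific threshold. The sparse dict representation of the image is for illustration only; a real implementation would operate on a dense voxel array (e.g., with `scipy.ndimage.label`).

```python
from collections import deque

def grow_hotspot(intensity, seed, threshold):
    """Breadth-first region growing: starting from `seed` (the hottest
    voxel of the preliminary hotspot, always kept), add 6-connected
    neighbours whose intensity exceeds `threshold`.
    `intensity` maps (x, y, z) -> SUV; absent voxels count as 0."""
    neighbours = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                  (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    cluster, queue = {seed}, deque([seed])
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in neighbours:
            v = (x + dx, y + dy, z + dz)
            if v not in cluster and intensity.get(v, 0.0) > threshold:
                cluster.add(v)
                queue.append(v)
    return cluster

# Tiny example: four voxels on a line/corner, threshold 4.0.
intensity = {(0, 0, 0): 9.0, (1, 0, 0): 6.0, (2, 0, 0): 2.0, (0, 1, 0): 7.0}
cluster = grow_hotspot(intensity, seed=(0, 0, 0), threshold=4.0)
```

With n = 18, the `neighbours` list would also include the diagonal offsets sharing an edge with the center voxel.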

在某些實施例中，該熱點特定臨限值係使用選自複數個定限函數之特定定限函數來判定，該特定定限函數基於該對應熱點強度與該至少一個參考值之比較來選擇[例如，其中該複數個定限函數之各者與強度(例如，SUV)值之特定範圍相關聯，且該特定定限函數係根據熱點強度及/或其之(例如，預定)百分比落入之該特定範圍來選擇(例如，且其中強度值之各特定範圍係至少部分由該至少一個參考值的倍數來定界)]。In certain embodiments, the hotspot-specific threshold is determined using a particular thresholding function selected from a plurality of thresholding functions, the particular thresholding function being selected based on a comparison of the corresponding hotspot intensity with the at least one reference value [e.g., wherein each of the plurality of thresholding functions is associated with a particular range of intensity (e.g., SUV) values, and the particular thresholding function is selected according to the particular range within which the hotspot intensity and/or a (e.g., predetermined) percentage thereof falls (e.g., and wherein each particular range of intensity values is bounded, at least in part, by multiples of the at least one reference value)].

在某些實施例中，該熱點特定臨限值係(例如，藉由該特定定限函數)判定為該對應熱點強度之可變百分比，其中該可變百分比隨著熱點強度增加而減小[例如，其中該可變百分比自身係對應熱點強度之函數(例如，遞減函數)]。In certain embodiments, the hotspot-specific threshold is determined (e.g., by the particular thresholding function) as a variable percentage of the corresponding hotspot intensity, wherein the variable percentage decreases with increasing hotspot intensity [e.g., wherein the variable percentage is itself a function (e.g., a decreasing function) of the corresponding hotspot intensity].

在另一態樣中,本發明係關於一種用於自動化處理受試者之3D影像以識別及/或表徵化(例如,分級;例如,分類;例如,如表示特定病變類型)該受試者內之癌性病變之系統,該系統包括:運算裝置之處理器;及具有儲存於其上之指令之記憶體,其中該等指令在藉由該處理器執行時引起該處理器:(a)接收(例如,及/或存取)使用解剖成像模態[例如,x射線電腦斷層掃描攝影術(CT);例如,磁共振成像(MRI);例如,超聲波]獲得的該受試者之3D解剖影像,其中該3D解剖影像包括該受試者內之組織(例如,軟組織及/或骨)之圖形表示;(b)自動分割該3D解剖影像以建立3D分割圖,該3D分割圖識別該3D解剖影像中之複數個所關注體積(VOI),包含對應於該受試者之肝臟之肝臟體積及對應於主動脈部分(例如,胸及/或腹部分)之主動脈體積;(c)接收(例如,及/或存取)使用功能成像模態[例如,正電子發射斷層掃描攝影術(PET);例如,單光子發射電腦斷層掃描攝影術(SPECT)]獲得的該受試者之3D功能影像[例如,其中該3D功能影像包括複數個體素,各體素表示該受試者內之特定實體體積且具有表示自該特定實體體積發射之經偵測輻射之強度值,其中該3D功能影像之該複數個體素之至少部分表示目標組織區域內之實體體積];(d)自動分割該3D功能影像內之一或多個熱點,各經分割熱點對應於相對於其周圍提高強度之局部區域且表示(例如,指示)該受試者內之潛在癌性病變,從而識別一或多個經自動分割之熱點體積;(e)引起轉列該一或多個經自動分割之熱點體積之圖形表示以用於在互動式圖形使用者介面(GUI) (例如,品質控制及報告GUI)內顯示;(f)經由該互動式GUI接收包括至少一部分該一或多個經自動分割之熱點體積(例如,多達全部)之最終熱點集合的使用者選擇;(g)針對該最終集合之各熱點體積基於(i)對應於該熱點體積(例如,定位於該熱點體積內)之該功能影像之體素之強度及(ii)使用對應於該肝臟體積及該主動脈體積之該功能影像之體素之強度判定的一或多個參考值判定病變指數值;及(h)儲存及/或提供最終熱點集合及/或病變指數值以供顯示及/或進一步處理。In another aspect, the invention is directed to a system for automated processing of 3D images of a subject to identify and/or characterize (eg, grade; eg, classify; eg, as representing a particular lesion type) cancerous lesions within the subject, the system comprising: a processor of a computing device; and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to: (a) receive (eg, and/or access) a 3D anatomical image of the subject obtained using an anatomical imaging modality [eg, x-ray computed tomography (CT); eg, magnetic resonance imaging (MRI); eg, ultrasound], wherein the 3D anatomical image comprises a graphical representation of tissue (eg, soft tissue and/or bone) within the subject; (b) automatically segment the 3D anatomical image to create a 3D segmentation map that identifies a plurality of volumes of interest (VOIs) in the 3D anatomical image, including a liver volume corresponding to the subject's liver and an aorta volume corresponding to a portion of the aorta (eg, a thoracic and/or abdominal portion); (c) receive (eg, and/or access) a 3D functional image of the subject obtained using a functional imaging modality [eg, positron emission tomography (PET); eg, single-photon emission computed tomography (SPECT)] [eg, wherein the 3D functional image comprises a plurality of voxels, each voxel representing a particular physical volume within the subject and having an intensity value representing detected radiation emitted from the particular physical volume, wherein at least a portion of the plurality of voxels of the 3D functional image represent physical volumes within a target tissue region]; (d) automatically segment one or more hotspots within the 3D functional image, each segmented hotspot corresponding to a local region of elevated intensity relative to its surroundings and representing (eg, indicative of) a potential cancerous lesion within the subject, thereby identifying one or more automatically segmented hotspot volumes; (e) cause rendering of a graphical representation of the one or more automatically segmented hotspot volumes for display within an interactive graphical user interface (GUI) (eg, a quality control and reporting GUI); (f) receive, via the interactive GUI, a user selection of a final hotspot set comprising at least a portion (eg, up to all) of the one or more automatically segmented hotspot volumes; (g) determine, for each hotspot volume of the final set, a lesion index value based on (i) intensities of voxels of the functional image corresponding to (eg, located within) the hotspot volume and (ii) one or more reference values determined using intensities of voxels of the functional image corresponding to the liver volume and the aorta volume; and (h) store and/or provide the final hotspot set and/or lesion index values for display and/or further processing.
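As a non-limiting sketch of step (g), the following illustrates computing reference values from the liver and aorta (blood-pool) volumes and mapping a hotspot's peak intensity onto a small ordinal lesion-index scale. The use of the mean as the reference statistic and the particular cutoffs are assumptions for illustration only, not the claimed scheme.

```python
import numpy as np

def reference_value(functional_volume, organ_mask):
    """Reference SUV for an organ (e.g., liver or aorta/blood pool), taken
    here as the mean intensity of the organ's voxels (a simplification)."""
    return float(functional_volume[organ_mask].mean())

def lesion_index(hotspot_peak, aorta_ref, liver_ref):
    """Map a hotspot's peak intensity onto a small ordinal scale using the
    blood-pool (aorta) and liver references. Cutoffs are illustrative."""
    if hotspot_peak < aorta_ref:
        return 0          # below blood pool
    elif hotspot_peak < liver_ref:
        return 1          # between blood pool and liver
    elif hotspot_peak < 2 * liver_ref:
        return 2          # above liver
    else:
        return 3          # markedly above liver
```

In this sketch the index grows with uptake relative to the two anatomical references, which is the general shape of the determination recited in step (g).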

在某些實施例中,在步驟(b),指令引起處理器分割該解剖影像,使得該3D分割圖識別對應於該受試者之一或多個骨之一或多個骨體積,且在步驟(d),指令引起處理器在該功能影像內使用該一或多個骨體積識別骨骼體積及分割定位於該骨骼體積內之一或多個骨熱點體積(例如,藉由應用高斯濾波器之一或多個差值及對骨骼體積定限)。In certain embodiments, at step (b), the instructions cause the processor to segment the anatomical image such that the 3D segmentation map identifies one or more bone volumes corresponding to one or more bones of the subject, and, at step (d), the instructions cause the processor to use the one or more bone volumes to identify a skeletal volume within the functional image and to segment one or more bone hotspot volumes located within the skeletal volume (eg, by applying one or more difference-of-Gaussians filters and thresholding the skeletal volume).

在某些實施例中,在步驟(b),指令引起處理器分割該解剖影像使得該3D分割圖識別對應於該受試者之軟組織器官(例如,左/右肺、左/右臀大肌、尿膀胱、肝臟、左/右腎、膽囊、脾、胸及腹主動脈,且視需要(例如,針對未經歷根治性前列腺切除術之患者)前列腺)之一或多個器官體積,且在步驟(d),指令引起處理器在該功能影像內使用該一或多個經分割之器官體積識別軟組織(例如,淋巴且視需要前列腺)體積及分割定位於該軟組織體積內之一或多個淋巴及/或前列腺熱點體積(例如,藉由應用高斯濾波器之一或多個拉普拉斯算子及對該軟組織體積定限)。In certain embodiments, at step (b), the instructions cause the processor to segment the anatomical image such that the 3D segmentation map identifies one or more organ volumes corresponding to soft-tissue organs of the subject (eg, left/right lung, left/right gluteus maximus, urinary bladder, liver, left/right kidney, gallbladder, spleen, thoracic and abdominal aorta, and, optionally (eg, for patients who have not undergone radical prostatectomy), prostate), and, at step (d), the instructions cause the processor to use the one or more segmented organ volumes to identify a soft-tissue (eg, lymph and, optionally, prostate) volume within the functional image and to segment one or more lymph and/or prostate hotspot volumes located within the soft-tissue volume (eg, by applying one or more Laplacian-of-Gaussian filters and thresholding the soft-tissue volume).

在某些實施例中,在步驟(d),指令引起處理器,在分割該一或多個淋巴及/或前列腺熱點體積之前,調整該功能影像之強度以抑制來自一或多個高攝取組織區域之強度(例如,使用本文中所描述之一或多個抑制方法)。In certain embodiments, at step (d), the instructions cause the processor to, prior to segmenting the one or more lymph and/or prostate hotspot volumes, adjust the intensities of the functional image to suppress intensity originating from one or more high-uptake tissue regions (eg, using one or more of the suppression methods described herein).

在某些實施例中,在步驟(g),指令引起處理器使用對應於該肝臟體積之該功能影像之體素之強度來判定肝臟參考值。In some embodiments, at step (g), the instructions cause the processor to use the intensities of the voxels of the functional image corresponding to the liver volume to determine a liver reference value.

在某些實施例中,指令引起處理器:將雙組分高斯混合模型擬合至對應於該肝臟體積之功能影像體素之強度的直方圖,使用該雙組分高斯混合模型擬合以自該肝臟體積識別及排除具有與異常低攝取之區域相關聯的強度的體素,及使用剩餘(例如,未排除之)體素之強度判定肝臟參考值。In certain embodiments, the instructions cause the processor to: fit a two-component Gaussian mixture model to a histogram of intensities of functional-image voxels corresponding to the liver volume, use the fitted two-component Gaussian mixture model to identify and exclude, from the liver volume, voxels having intensities associated with regions of abnormally low uptake, and use the intensities of the remaining (eg, non-excluded) voxels to determine the liver reference value.
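The two-component mixture fit above can be sketched with a minimal, numpy-only EM loop (a stand-in for, e.g., `sklearn.mixture.GaussianMixture(n_components=2)`). The rule of discarding voxels assigned to the lower-mean component is one plausible reading of "exclude abnormally low uptake", assumed here for illustration.

```python
import numpy as np

def fit_two_component_gmm(values, n_iter=200):
    """Minimal EM for a two-component 1D Gaussian mixture."""
    mu = np.percentile(values, [25, 75]).astype(float)   # init means apart
    var = np.full(2, values.var() + 1e-6)
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each value
        pdf = (np.exp(-(values[:, None] - mu) ** 2 / (2 * var))
               / np.sqrt(2 * np.pi * var))
        resp = pi * pdf
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances
        nk = resp.sum(axis=0)
        pi = nk / len(values)
        mu = (resp * values[:, None]).sum(axis=0) / nk
        var = (resp * (values[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return pi, mu, var

def liver_reference(liver_voxels):
    """Fit the mixture, drop voxels assigned to the lower-mean component
    (abnormally low uptake, e.g. lesion-involved regions), average the rest."""
    pi, mu, var = fit_two_component_gmm(liver_voxels)
    low = int(np.argmin(mu))
    pdf = (np.exp(-(liver_voxels[:, None] - mu) ** 2 / (2 * var))
           / np.sqrt(2 * np.pi * var))
    keep = (pi * pdf).argmax(axis=1) != low
    return float(liver_voxels[keep].mean())
```

On a liver histogram with a depressed-uptake mode and a normal mode, the reference value thus tracks the normal-uptake component rather than being dragged down by the low-uptake voxels.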

相對於本發明之一項態樣描述之實施例之特徵可相對於本發明之另一態樣應用。Features of embodiments described with respect to one aspect of the invention may be applied with respect to another aspect of the invention.

相關申請案之交叉參考 本申請案主張於2020年7月6日申請之美國臨時專利申請案第63/048,436號、於2020年8月31日申請之美國非臨時專利申請案第17/008,411號、於2020年12月18日申請之美國臨時專利申請案第63/127,666號及於2021年6月10日申請之美國臨時專利申請案第63/209,317號之優先權及權利,各案之全部內容以引用的方式併入本文。 CROSS-REFERENCE TO RELATED APPLICATIONS This application claims priority to and the benefit of U.S. Provisional Patent Application No. 63/048,436, filed July 6, 2020, U.S. Non-Provisional Patent Application No. 17/008,411, filed August 31, 2020, U.S. Provisional Patent Application No. 63/127,666, filed December 18, 2020, and U.S. Provisional Patent Application No. 63/209,317, filed June 10, 2021, the entire contents of each of which are incorporated herein by reference.

經考慮所主張發明之系統、裝置、方法及程序涵蓋使用來自本文中所描述之實施例之資訊發展之變動及調適。可由相關技術之一般技術者執行本文中所描述之該等系統、裝置、方法及程序之調適及/或變動。The systems, devices, methods, and programs of the claimed invention are contemplated to encompass variations and adaptations developed using information from the embodiments described herein. Adaptations and/or variations of the systems, devices, methods, and procedures described herein can be performed by those of ordinary skill in the relevant art.

在其中物品、裝置及系統被描述為具有、包含或包括特定組件或其中程序及方法被描述為具有、包含或包括特定步驟之通篇描述中,經考慮另外存在本發明之基本上由該等所敘述組件組成或由該等所敘述組件組成之物品、裝置及系統,且另外存在根據本發明之基本上由該等所敘述處理步驟組成或由該等所敘述處理步驟組成之程序及方法。Throughout the description, where articles, devices, and systems are described as having, including, or comprising specific components, or where processes and methods are described as having, including, or comprising specific steps, it is contemplated that, additionally, there are articles, devices, and systems of the invention that consist essentially of, or consist of, the recited components, and that there are processes and methods according to the invention that consist essentially of, or consist of, the recited processing steps.

應理解,只要本發明保持可操作,步驟之順序或用於執行特定動作之順序就不重要。此外,可同時進行兩個或更多個步驟或動作。It should be understood that the order of steps, or order for performing a particular action, is immaterial as long as the invention remains operable. Furthermore, two or more steps or actions may be performed simultaneously.

本文中提及任何出版物(例如,在[先前技術]段落中)並非承認該出版物相對於本文中提出之請求項之任一者作為先前技術。[先前技術]段落係出於清楚目的而提出且並不意欲為先前技術相對於任何請求項之描述。Reference herein to any publication (eg, in the [PRIOR ART] paragraph) is not an admission that such publication is prior art with respect to any of the claims made herein. The [PRIOR ART] paragraph is presented for clarity and is not intended to be a description of the prior art with respect to any claim.

為方便讀者而提供標頭,標頭之存在及/或放置並不意欲限制本文中所描述之標的物之範疇。The headers are provided as a convenience to the reader, and their presence and/or placement are not intended to limit the scope of the subject matter described herein.

在本申請案中,除非自上下文另外清楚,否則(i)術語「一」可被理解為意謂「至少一個」;(ii)術語「或」可被理解為意謂「及/或」;(iii)術語「包括」及「包含」可被理解為涵蓋列舉之組件或步驟,無論其等被單獨呈現或結合一或多個額外組件或步驟呈現;及(iv)術語「大約」及「近似」可被理解為允許如一般技術者所理解之標準變動;及(v)在提供範圍之處,包含端點。In this application, unless otherwise clear from context, (i) the term "a" may be understood to mean "at least one"; (ii) the term "or" may be understood to mean "and/or"; (iii) the terms "comprising" and "including" may be understood to encompass the recited components or steps, whether presented alone or together with one or more additional components or steps; (iv) the terms "about" and "approximately" may be understood to allow for standard variation as would be understood by those of ordinary skill in the art; and (v) where ranges are provided, endpoints are included.

在某些實施例中,術語「大約」在本文中用於參考值時,係指在上下文中類似於參考值之值。一般而言,熟悉上下文之熟習此項技術者將瞭解在該上下文中由「大約」涵蓋之相關變動程度。例如,在一些實施例中,術語「大約」可涵蓋在所參考值之25%、20%、19%、18%、17%、16%、15%、14%、13%、12%、11%、10%、9%、8%、7%、6%、5%、4%、3%、2%、1%或更少內之值範圍。 A. 核醫學影像 In certain embodiments, the term "about," when used herein in reference to a value, refers to a value that is similar, in context, to the referenced value. In general, those skilled in the art, familiar with the context, will appreciate the relevant degree of variance encompassed by "about" in that context. For example, in some embodiments, the term "about" may encompass a range of values within 25%, 20%, 19%, 18%, 17%, 16%, 15%, 14%, 13%, 12%, 11%, 10%, 9%, 8%, 7%, 6%, 5%, 4%, 3%, 2%, 1%, or less of the referenced value. A. Nuclear Medicine Imaging

核醫學影像係使用諸如骨掃描成像、正電子發射斷層掃描攝影術(PET)成像及單光子發射電腦斷層掃描攝影術(SPECT)成像之核成像模態獲得。Nuclear medicine images are obtained using nuclear imaging modalities such as bone scan imaging, positron emission tomography (PET) imaging, and single photon emission computed tomography (SPECT) imaging.

如本文中所使用,「影像」(例如,哺乳動物之3-D影像)包含任何視覺表示,諸如照片、視訊圖框、串流視訊以及照片、視訊圖框或串流視訊之任何電子、數位或數學模擬。在某些實施例中,本文中所描述之任何設備包含用於顯示影像或藉由處理器產生之任何其他結果之顯示器。在某些實施例中,本文中所描述之任何方法包含顯示影像或經由該方法產生之任何其他結果之步驟。As used herein, an "image" (eg, a 3-D image of a mammal) includes any visual representation, such as a photograph, a video frame, or streaming video, as well as any electronic, digital, or mathematical analog of a photograph, video frame, or streaming video. In certain embodiments, any apparatus described herein includes a display for displaying an image or any other result produced by the processor. In certain embodiments, any method described herein includes a step of displaying an image or any other result produced via the method.

如本文中所使用,關於「影像」之「3-D」或「三維」意謂傳達關於三個維度之資訊。3-D影像可轉列為三維資料集及/或可顯示為二維表示之集合,或顯示為三維表示。As used herein, "3-D" or "three-dimensional," with reference to an "image," means conveying information about three dimensions. A 3-D image may be rendered as a three-dimensional dataset and/or may be displayed as a set of two-dimensional representations or as a three-dimensional representation.

在某些實施例中,核醫學影像使用包括放射性藥物之成像劑。核醫學影像係在向患者(例如,人類受試者)投予放射性藥物之後獲得,且提供關於該放射性藥物在該患者內之分佈之資訊。放射性藥物係包括放射性核素之化合物。In certain embodiments, nuclear medicine imaging uses imaging agents including radiopharmaceuticals. Nuclear medicine images are obtained after administration of a radiopharmaceutical to a patient (eg, a human subject) and provide information about the distribution of the radiopharmaceutical within the patient. Radiopharmaceuticals are compounds that include radionuclides.

如本文中所使用,「投予」試劑意謂將物質(例如,成像劑)引入至受試者中。一般而言,可利用任何投予路徑,例如,包含腸胃外(例如,靜脈內)、口服、局部、皮下、腹膜、動脈內、吸入、陰道、直腸、鼻腔、引入至腦脊髓液中或滴入身體隔室中。As used herein, "administering" an agent means introducing a substance (eg, an imaging agent) into a subject. In general, any route of administration can be utilized, including, for example, parenteral (eg, intravenous), oral, topical, subcutaneous, peritoneal, intraarterial, inhalation, vaginal, rectal, nasal, introduction into cerebrospinal fluid, or instillation into the body compartment.

如本文中所使用,「放射性核素」係指包括至少一種元素之放射性同位素之部分。例示性合適放射性核素包含(但不限於)本文中所描述之彼等。在一些實施例中,放射性核素係在正電子發射斷層掃描攝影術(PET)中使用之一種放射性核素。在一些實施例中,放射性核素係在單光子發射電腦斷層掃描攝影術(SPECT)中使用之一種放射性核素。在一些實施例中,放射性核素之非限制性清單包含99mTc、111In、64Cu、67Ga、68Ga、186Re、188Re、153Sm、177Lu、67Cu、123I、124I、125I、126I、131I、11C、13N、15O、18F、153Sm、166Ho、177Lu、149Pm、90Y、213Bi、103Pd、109Pd、159Gd、140La、198Au、199Au、169Yb、175Yb、165Dy、166Dy、105Rh、111Ag、89Zr、225Ac、82Rb、75Br、76Br、77Br、80Br、80mBr、82Br、83Br、211At及192Ir。As used herein, "radionuclide" refers to a moiety comprising a radioactive isotope of at least one element. Exemplary suitable radionuclides include, but are not limited to, those described herein. In some embodiments, the radionuclide is one used in positron emission tomography (PET). In some embodiments, the radionuclide is one used in single-photon emission computed tomography (SPECT). In some embodiments, a non-limiting list of radionuclides includes 99mTc, 111In, 64Cu, 67Ga, 68Ga, 186Re, 188Re, 153Sm, 177Lu, 67Cu, 123I, 124I, 125I, 126I, 131I, 11C, 13N, 15O, 18F, 153Sm, 166Ho, 177Lu, 149Pm, 90Y, 213Bi, 103Pd, 109Pd, 159Gd, 140La, 198Au, 199Au, 169Yb, 175Yb, 165Dy, 166Dy, 105Rh, 111Ag, 89Zr, 225Ac, 82Rb, 75Br, 76Br, 77Br, 80Br, 80mBr, 82Br, 83Br, 211At, and 192Ir.

如本文中所使用,「放射性藥物」係指包括放射性核素之化合物。在某些實施例中,放射性藥物係用於診斷及/或治療目的。在某些實施例中,放射性藥物包含用一或多種放射性核素標記之小分子、用一或多種放射性核素標記之抗體及用一或多種放射性核素標記之抗體之抗原結合部分。As used herein, a "radiopharmaceutical" refers to a compound that includes a radionuclide. In certain embodiments, radiopharmaceuticals are used for diagnostic and/or therapeutic purposes. In certain embodiments, the radiopharmaceutical comprises a small molecule labeled with one or more radionuclides, an antibody labeled with one or more radionuclides, and an antigen-binding portion of an antibody labeled with one or more radionuclides.

核醫學影像(例如,PET掃描;例如,SPECT掃描;例如,全身骨掃描;例如,合成PET-CT影像;例如,合成SPECT-CT影像)偵測自放射性藥物之放射性核素發射之輻射以形成影像。特定放射性藥物在患者內之分佈可藉由生物機制(諸如血流或灌注),以及藉由特異性酶或受體結合互動來判定。不同放射性藥物可經設計以利用不同生物機制及/或特定特異性酶或受體結合互動且因此,當投予給患者時,選擇性地集中於患者內之特定類型之組織及/或區域內。自患者內之具有高於其他區域之放射性藥物濃度之區域發射更大量之輻射,使得此等區域在核醫學影像中顯得更亮。因此,核醫學影像內之強度變動可用於映射放射性藥物在患者內之分佈。放射性藥物在患者內之此映射分佈可用於(例如)推斷患者身體之各個區域內之癌組織的存在。Nuclear medicine images (eg, PET scans; eg, SPECT scans; eg, whole-body bone scans; eg, composite PET-CT images; eg, composite SPECT-CT images) detect radiation emitted from the radionuclides of radiopharmaceuticals to form an image. The distribution of a particular radiopharmaceutical within a patient may be determined by biological mechanisms, such as blood flow or perfusion, as well as by specific enzyme or receptor binding interactions. Different radiopharmaceuticals may be designed to exploit different biological mechanisms and/or particular specific enzyme or receptor binding interactions and thus, when administered to a patient, selectively concentrate within particular types of tissue and/or regions within the patient. Greater amounts of radiation are emitted from regions within the patient that have higher concentrations of radiopharmaceutical than other regions, such that these regions appear brighter in nuclear medicine images. Accordingly, intensity variations within a nuclear medicine image may be used to map the distribution of the radiopharmaceutical within the patient. This mapped distribution of the radiopharmaceutical within the patient may be used, for example, to infer the presence of cancerous tissue within various regions of the patient's body.

例如,在投予給患者之後,鍀99m亞甲基二膦酸鹽(99mTc MDP)選擇性地累積於患者之骨骼區域內,尤其在具有與惡性骨病變相關聯的異常成骨之部位處。放射性藥物在此等部位處之選擇性集中產生可識別熱點—核醫學影像中之高強度之局部化區域。因此,可藉由在患者之全身掃描內識別此等熱點來推斷與轉移性前列腺癌相關聯的惡性骨病變的存在。如下文中所描述,可基於在向患者投予99mTc MDP之後獲得的全身掃描中之強度變動的自動化分析來運算與患者總生存期及指示疾病狀態、進展、治療功效及類似者之其他預後度量相關的風險指數。在某些實施例中,亦可以類似於99mTc MDP之方式使用其他放射性藥物。For example, upon administration to a patient, technetium-99m methylene diphosphonate (99mTc MDP) selectively accumulates within the skeletal regions of the patient, in particular at sites of abnormal osteogenesis associated with malignant bone lesions. The selective concentration of the radiopharmaceutical at these sites produces identifiable hotspots: localized regions of high intensity in nuclear medicine images. Accordingly, the presence of malignant bone lesions associated with metastatic prostate cancer can be inferred by identifying such hotspots within a whole-body scan of the patient. As described below, risk indices that correlate with patient overall survival and other prognostic measures indicative of disease state, progression, treatment efficacy, and the like can be computed based on automated analysis of intensity variations in whole-body scans obtained following administration of 99mTc MDP to a patient. In certain embodiments, other radiopharmaceuticals may also be used in a manner similar to 99mTc MDP.

在某些實施例中,所使用之特定放射性藥物取決於所使用之特定核醫學成像模態。例如,18F氟化鈉(NaF)亦類似於99mTc MDP在骨病變中累積,但可與PET成像一起使用。在某些實施例中,PET成像亦可利用易於被前列腺癌細胞吸收之放射性形式之維生素膽鹼。In certain embodiments, the particular radiopharmaceutical used depends on the particular nuclear medicine imaging modality used. For example, 18F sodium fluoride (NaF) also accumulates in bone lesions, similarly to 99mTc MDP, but can be used with PET imaging. In certain embodiments, PET imaging may also utilize a radioactive form of the vitamin choline, which is readily taken up by prostate cancer cells.

在某些實施例中,可使用選擇性地結合至特定蛋白質或所關注受體(特別是其之表達在癌組織中增加之彼等)之放射性藥物。此等蛋白質或所關注受體包含(但不限於)腫瘤抗原,諸如在結直腸癌中表達之CEA;在多種癌症中表達之Her2/neu;在乳腺癌及卵巢癌中表達之BRCA 1及BRCA 2;及在黑色素瘤中表達之TRP-1及TRP-2。In certain embodiments, radiopharmaceuticals that selectively bind to particular proteins or receptors of interest, particularly those whose expression is increased in cancerous tissue, may be used. Such proteins or receptors of interest include, but are not limited to, tumor antigens, such as CEA, which is expressed in colorectal carcinomas; Her2/neu, which is expressed in multiple cancers; BRCA 1 and BRCA 2, which are expressed in breast and ovarian cancers; and TRP-1 and TRP-2, which are expressed in melanoma.

例如,人類前列腺特異性膜抗原(PSMA)係在前列腺癌(包含轉移性疾病)中上調。PSMA係由幾乎所有前列腺癌表達且其表達在低分化、轉移性及激素難治性癌中進一步增加。因此,對應於用一或多種放射性核素標記之PSMA結合劑(例如,對PSMA具有高親和力之化合物)之放射性藥物可用於獲得患者之核醫學影像,可自核醫學影像評估患者之各個區域(例如,包含但不限於骨骼區域)內之前列腺癌的存在及/或狀態。在某些實施例中,當疾病處於局部化狀態中時,使用PSMA結合劑獲得的核醫學影像係用於識別前列腺內之癌組織的存在。在某些實施例中,使用包括PSMA結合劑之放射性藥物獲得的核醫學影像係用於識別各個區域內之癌組織之存在,該等區域不僅包含前列腺,而且包含其他器官及組織區域(諸如肺、淋巴結及骨),如在疾病係轉移性時係相關的。For example, human prostate-specific membrane antigen (PSMA) is upregulated in prostate cancer, including metastatic disease. PSMA is expressed by nearly all prostate cancers, and its expression is further increased in poorly differentiated, metastatic, and hormone-refractory carcinomas. Accordingly, radiopharmaceuticals corresponding to PSMA binding agents (eg, compounds having a high affinity for PSMA) labeled with one or more radionuclides can be used to obtain nuclear medicine images of a patient, from which the presence and/or state of prostate cancer within various regions (eg, including, but not limited to, skeletal regions) of the patient can be assessed. In certain embodiments, nuclear medicine images obtained using PSMA binding agents are used to identify the presence of cancerous tissue within the prostate when the disease is in a localized state. In certain embodiments, nuclear medicine images obtained using radiopharmaceuticals comprising PSMA binding agents are used to identify the presence of cancerous tissue within a variety of regions that include not only the prostate, but also other organs and tissue regions, such as lungs, lymph nodes, and bone, as is relevant when the disease is metastatic.

特定言之,在投予給患者之後,放射性核素標記之PSMA結合劑基於其等對PSMA之親和力選擇性地積聚於癌組織內。以類似於上文關於99mTc MDP所描述之方式之方式,放射性核素標記之PSMA結合劑在患者內之特定部位處之選擇性集中產生核醫學影像中之可偵測熱點。在PSMA結合劑集中於表達PSMA之身體之各種癌組織及區域內時,可偵測及評估患者之前列腺內之局部化癌症及/或患者身體之各個區域中之轉移性癌症。可基於在向患者投予PSMA結合劑放射性藥物之後獲得的核醫學影像中之強度變動的自動化分析來運算與患者總生存期及指示疾病狀態、進展、治療功效及類似者之其他預後度量相關的風險指數。In particular, upon administration to a patient, radionuclide-labeled PSMA binding agents selectively accumulate within cancerous tissue based on their affinity for PSMA. In a manner similar to that described above with regard to 99mTc MDP, the selective concentration of radionuclide-labeled PSMA binding agents at particular sites within the patient produces detectable hotspots in nuclear medicine images. As PSMA binding agents concentrate within a variety of cancerous tissues and regions of the body that express PSMA, localized cancer within the prostate of the patient and/or metastatic cancer in various regions of the patient's body can be detected and evaluated. Risk indices that correlate with patient overall survival and other prognostic measures indicative of disease state, progression, treatment efficacy, and the like can be computed based on automated analysis of intensity variations in nuclear medicine images obtained following administration of a PSMA binding agent radiopharmaceutical to a patient.

各種放射性核素標記之PSMA結合劑可用作用於核醫學成像以偵測及評估前列腺癌之放射性藥物成像劑。在某些實施例中,所使用之特定放射性核素標記之PSMA結合劑取決於諸如待成像之患者之特定成像模態(例如,PET;例如,SPECT)及特定區域(例如,器官)之因素。例如,特定放射性核素標記之PSMA結合劑係適用於PET成像,而其他適用於SPECT成像。例如,特定放射性核素標記之PSMA結合劑促進對患者之前列腺進行成像,且主要在疾病局部化時使用,而其他促進對整個患者身體之器官及區域進行成像,且對於評估轉移性前列腺癌有用的。A variety of radionuclide-labeled PSMA binding agents may be used as radiopharmaceutical imaging agents for nuclear medicine imaging to detect and evaluate prostate cancer. In certain embodiments, the particular radionuclide-labeled PSMA binding agent used depends on factors such as the particular imaging modality (eg, PET; eg, SPECT) and the particular regions (eg, organs) of the patient to be imaged. For example, certain radionuclide-labeled PSMA binding agents are suited for PET imaging, while others are suited for SPECT imaging. For example, certain radionuclide-labeled PSMA binding agents facilitate imaging of the prostate of the patient, and are used primarily when the disease is localized, while others facilitate imaging of organs and regions throughout the patient's body, and are useful for evaluating metastatic prostate cancer.

各種PSMA結合劑及其經放射性核素標記之版本係描述於美國專利第8,778,305號、第8,211,401號及第8,962,799號中,各案之全文以引用的方式併入本文中。若干PSMA結合劑及其經放射性核素標記之版本亦描述於2017年10月26日申請之PCT申請案PCT/US2017/058418 (PCT公開案WO 2018/081354)中,該案之全部內容以引用的方式併入本文中。下文章節J亦描述若干實例性PSMA結合劑及其經放射性核素標記之版本。 B. 自動化病變偵測及分析 i. 自動化病變偵測 A variety of PSMA binding agents and radionuclide-labeled versions thereof are described in U.S. Patent Nos. 8,778,305, 8,211,401, and 8,962,799, the entire contents of each of which are incorporated herein by reference. Several PSMA binding agents and radionuclide-labeled versions thereof are also described in PCT application PCT/US2017/058418 (PCT publication WO 2018/081354), filed October 26, 2017, the entire contents of which are incorporated herein by reference. Section J below also describes several example PSMA binding agents and radionuclide-labeled versions thereof. B. Automated Lesion Detection and Analysis i. Automated Lesion Detection

在某些實施例中,本文中所描述之系統及方法利用機器學習技術用於對應於及指示受試者內之可能癌性病變之熱點的自動化影像分割及偵測。In certain embodiments, the systems and methods described herein utilize machine learning techniques for automated image segmentation and detection of hot spots that correspond to and indicate likely cancerous lesions within a subject.

在某些實施例中,本文中所描述之系統及方法可在一基於雲端之平台中實施,例如,如於2017年10月26日申請之PCT/US2017/058418 (PCT公開案WO 2018/081354)中所描述,該案之全部內容以引用的方式併入本文。In certain embodiments, the systems and methods described herein may be implemented in a cloud-based platform, for example, as described in PCT/US2017/058418 (PCT publication WO 2018/081354), filed October 26, 2017, the entire contents of which are incorporated herein by reference.

在某些實施例中,如本文中所描述,機器學習模組實施一或多個機器學習技術,諸如隨機森林分類器、人工神經網路(ANN)、廻旋神經網路(CNN)及類似者。在某些實施例中,例如,使用經手動分割及/或標記之影像來訓練實施機器學習技術之機器學習模組以對影像之部分進行識別及/或分類。此訓練可用於判定藉由機器學習模組實施之機器學習演算法之各種參數,諸如與神經網路中之層相關聯的權重。在某些實施例中,一旦訓練機器學習模組(例如)以完成特定任務(諸如識別影像內之特定目標區域),經判定之參數之值就固定且該(例如,不變、靜態的)機器學習模組係用於處理新資料(例如,不同於訓練資料)且在無需對其參數進一步更新(例如,機器學習模組不接收回饋及/或更新)的情況下完成其訓練任務。在某些實施例中,機器學習模組可(例如)基於使用者對準確度之查看接收回饋,且此回饋可用作額外訓練資料以動態地更新機器學習模組。在一些實施例中,經訓練之機器學習模組係具有可調整及/或固定(例如,經鎖定)參數之分類演算法,例如,隨機森林分類器。In certain embodiments, as described herein, a machine learning module implements one or more machine learning techniques, such as a random forest classifier, an artificial neural network (ANN), a convolutional neural network (CNN), and the like. In certain embodiments, for example, manually segmented and/or labeled images are used to train a machine learning module implementing a machine learning technique to identify and/or classify portions of images. This training may be used to determine various parameters of the machine learning algorithm implemented by the machine learning module, such as weights associated with layers in a neural network. In certain embodiments, once the machine learning module is trained, for example, to accomplish a particular task, such as identifying particular target regions within images, the values of the determined parameters are fixed, and the (eg, unchanging, static) machine learning module is used to process new data (eg, different from the training data) and accomplish its trained task without further updates to its parameters (eg, the machine learning module does not receive feedback and/or updates). In certain embodiments, the machine learning module may receive feedback, for example, based on user review of accuracy, and such feedback may be used as additional training data to dynamically update the machine learning module. In some embodiments, the trained machine learning module is a classification algorithm with adjustable and/or fixed (eg, locked) parameters, for example, a random forest classifier.
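The train-then-lock lifecycle described above can be illustrated with a deliberately tiny, hypothetical one-parameter "classifier" (not the actual random forest or neural network of the embodiments): `fit` determines the parameter from labeled training data, after which `predict` applies it to new data without ever updating it.

```python
import numpy as np

class FrozenThresholdClassifier:
    """Toy stand-in for a trained-then-locked machine learning module:
    fit() learns a single intensity threshold from labeled training voxels;
    after training, the parameter is fixed and predict() never updates it."""

    def __init__(self):
        self.threshold_ = None

    def fit(self, intensities, labels):
        # choose the candidate threshold that maximizes training accuracy
        candidates = np.unique(intensities)
        accuracies = [((intensities >= t) == labels).mean() for t in candidates]
        self.threshold_ = float(candidates[int(np.argmax(accuracies))])
        return self

    def predict(self, intensities):
        # inference only: the learned parameter stays locked
        return intensities >= self.threshold_
```

A dynamically updated variant, as also contemplated above, would simply call `fit` again on the original training data augmented with user-reviewed cases.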

在某些實施例中,機器學習技術係用於自動分割解剖影像(諸如CT、MRI、超聲波等影像)中之解剖結構以便識別對應於諸如特定器官(例如,前列腺、淋巴結區域、腎臟、肝臟、膀胱、主動脈部分)以及骨之特定目標組織區域之所關注體積。以此方式,機器學習模組可用於產生可映射至功能影像(諸如PET或SPECT影像) (例如,投影至功能影像上)以提供用於評估其中之強度波動之解剖背景的分割遮罩及/或分割圖(例如,包括複數個分割遮罩,各分割遮罩對應於並識別特定目標組織區域)。用於分割影像及使用所獲得的解剖背景用於分析核醫學影像之方法係進一步詳細描述(例如)於2019年1月7日申請之PCT/US2019/012486 (PCT公開案WO 2019/136349)及2020年1月6日申請之PCT/EP2020/050132 (PCT公開案WO 2020/144134)中,各案之全部內容以引用的方式併入本文。In certain embodiments, machine learning techniques are used to automatically segment anatomical structures in anatomical images (such as CT, MRI, or ultrasound images) in order to identify volumes of interest corresponding to particular target tissue regions, such as specific organs (eg, prostate, lymph node regions, kidneys, liver, bladder, portions of the aorta) as well as bone. In this manner, a machine learning module can be used to generate segmentation masks and/or a segmentation map (eg, comprising a plurality of segmentation masks, each corresponding to and identifying a particular target tissue region) that can be mapped onto (eg, projected onto) a functional image, such as a PET or SPECT image, to provide anatomical context for evaluating intensity fluctuations therein. Approaches for segmenting images and using the resulting anatomical context for analysis of nuclear medicine images are described in further detail, for example, in PCT/US2019/012486 (PCT publication WO 2019/136349), filed January 7, 2019, and PCT/EP2020/050132 (PCT publication WO 2020/144134), filed January 6, 2020, the entire contents of each of which are incorporated herein by reference.
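As a non-limiting sketch of using such a segmentation map as anatomical context, the following pulls functional-image intensities for one labeled organ, assuming (a simplification) that the segmentation map and the functional image are co-registered on the same voxel grid. The function name and label convention are hypothetical.

```python
import numpy as np

def organ_statistics(functional_volume, segmentation_map, organ_label):
    """Extract functional-image intensities for one organ using a
    co-registered 3D segmentation map (same voxel grid assumed), e.g. to
    compute reference values or to contextualize hotspot intensities."""
    mask = segmentation_map == organ_label
    values = functional_volume[mask]
    return {"mean": float(values.mean()),
            "max": float(values.max()),
            "voxels": int(mask.sum())}
```

In practice the anatomical and functional images come from different modalities, so a registration/resampling step would precede this lookup.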

在某些實施例中,潛在病變經偵測為功能影像(諸如PET影像)中之局部高強度之區域。強度提高之此等局部化區域(亦被稱為熱點)可使用不一定涉及機器學習之影像處理技術(諸如濾波及定限)來偵測,且使用諸如快速行進方法之方法來分割。自解剖影像之分割建立之解剖資訊容許對表示潛在病變之經偵測熱點進行解剖標記。解剖背景亦可用於容許將不同偵測及分割技術用於不同解剖區域中之熱點偵測,此可提高敏感度及效能。In certain embodiments, potential lesions are detected as regions of locally high intensity in a functional image, such as a PET image. These localized regions of increased intensity, also referred to as hotspots, can be detected using image processing techniques that do not necessarily involve machine learning, such as filtering and thresholding, and segmented using approaches such as fast marching methods. Anatomical information established from segmentation of the anatomical image allows detected hotspots representing potential lesions to be anatomically labeled. The anatomical context can also be used to allow different detection and segmentation techniques to be used for hotspot detection in different anatomical regions, which can improve sensitivity and performance.
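The anatomical labeling of detected hotspots mentioned above admits several policies; one simple, assumed policy is to give each hotspot the label of the anatomical region containing its hottest voxel (a majority vote over the hotspot volume would be an alternative). A minimal sketch:

```python
import numpy as np

def label_hotspot(segmentation_map, hotspot_mask, functional_volume):
    """Assign a detected hotspot the anatomical label of its hottest voxel.
    Assumes the segmentation map and functional volume share a voxel grid."""
    masked = np.where(hotspot_mask, functional_volume, -np.inf)
    idx = np.unravel_index(int(np.argmax(masked)), functional_volume.shape)
    return int(segmentation_map[idx])
```

With the hotspot labeled (bone, lymph, prostate, ...), the region-specific detection and segmentation techniques described above can then be dispatched per anatomical region.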

在某些實施例中,可經由互動式圖形使用者介面(GUI)向使用者呈現經自動偵測之熱點。在某些實施例中,為考量由使用者(例如,醫師)偵測但被系統遺漏或較差分割之目標病變,在GUI中包含手動分割工具,以容許使用者手動「繪製」他們認為對應於任何形狀及大小之病變之影像之區域。此等經手動分割之病變接著可連同選定自動偵測之目標病變一起包含於隨後產生之報告中。ii. 基於 AI 之病變偵測 In certain embodiments, automatically detected hotspots may be presented to a user via an interactive graphical user interface (GUI). In certain embodiments, to account for target lesions detected by the user (eg, a physician) but missed or poorly segmented by the system, a manual segmentation tool is included in the GUI to allow the user to manually "draw" regions of the image that they believe correspond to lesions of any shape and size. These manually segmented lesions can then be included, along with the selected automatically detected target lesions, in a subsequently generated report. ii. AI-based Lesion Detection

在某些實施例中,本文中所描述之系統及方法利用一或多個機器學習模組來分析3D功能影像之強度且偵測表示潛在病變之熱點。例如,藉由收集其中已手動偵測及分割表示病變之熱點之PET/CT影像之資料集,可獲得用於基於AI之病變偵測演算法之訓練材料。此等手動標記之影像可用於訓練一或多個機器學習演算法以自動分析功能影像(例如,PET影像)以準確地偵測及分割對應於癌性病變之熱點。In certain embodiments, the systems and methods described herein utilize one or more machine learning modules to analyze the intensity of 3D functional images and detect hot spots that represent potential lesions. For example, by collecting a dataset of PET/CT images in which hot spots representing lesions have been manually detected and segmented, training material for AI-based lesion detection algorithms can be obtained. These manually labeled images can be used to train one or more machine learning algorithms to automatically analyze functional images (eg, PET images) to accurately detect and segment hot spots corresponding to cancerous lesions.

圖1A展示用於使用實施機器學習演算法(諸如ANN、CNN及類似者)之機器學習模組進行自動化病變偵測及/或分割的實例性程序100a。如圖1A中所展示,接收106 3D功能影像102 (諸如PET或SPECT影像)且將其用作至機器學習模組110之輸入。圖1A展示使用作為放射性藥物102a之PyL™獲得的實例性PET影像。PET影像102a經展示疊對於CT影像上(例如,作為PET/CT影像),但機器學習模組110可接收PET (例如,或其他功能影像)自身(例如,不包含CT,或其他解剖影像)作為輸入。在某些實施例中,如下文所描述,亦可接收解剖影像作為輸入。機器學習模組自動偵測及/或分割經判定(藉由機器學習模組)表示潛在癌性病變之熱點120。圖1A中亦展示展示出現在PET影像120b中之熱點之實例性影像。因此,機器學習模組產生(i)熱點清單130及(ii)熱點圖132之一或兩者作為輸出。在某些實施例中,該熱點清單識別經偵測熱點之位置(例如,質心)。在某些實施例中,熱點圖識別如經由藉由機器學習模組110執行之影像分割判定之經偵測熱點之3D體積及/或描繪該等經偵測熱點之3D邊界。可儲存及/或提供熱點清單及/或熱點圖(例如,至其他軟體模組)以供顯示及/或進一步處理140。FIG. 1A shows an example process 100a for automated lesion detection and/or segmentation using a machine learning module implementing a machine learning algorithm, such as an ANN, a CNN, and the like. As shown in FIG. 1A, a 3D functional image 102, such as a PET or SPECT image, is received 106 and used as input to a machine learning module 110. FIG. 1A shows an example PET image obtained using PyL™ as the radiopharmaceutical 102a. The PET image 102a is shown overlaid on a CT image (eg, as a PET/CT image), but the machine learning module 110 may receive the PET image (eg, or other functional image) itself (eg, without the CT, or other anatomical, image) as input. In certain embodiments, as described below, an anatomical image may also be received as input. The machine learning module automatically detects and/or segments hotspots 120 that are determined (by the machine learning module) to represent potential cancerous lesions. Also shown in FIG. 1A is an example image showing hotspots appearing in a PET image 120b. The machine learning module thus generates one or both of (i) a hotspot list 130 and (ii) a hotspot map 132 as output. In certain embodiments, the hotspot list identifies locations (eg, centroids) of the detected hotspots. In certain embodiments, the hotspot map identifies 3D volumes of the detected hotspots, as determined via image segmentation performed by the machine learning module 110, and/or delineates 3D boundaries of the detected hotspots. The hotspot list and/or hotspot map may be stored and/or provided (eg, to other software modules) for display and/or further processing 140.

In certain embodiments, machine-learning-based lesion detection algorithms can be trained on, and can make use of, not only functional image information (e.g., from PET images) but also anatomical information. For example, in certain embodiments, one or more machine learning modules for lesion detection and segmentation can be trained on, and receive as input, two channels: a first channel corresponding to a portion of a PET image and a second channel corresponding to a portion of a CT image. In certain embodiments, information derived from anatomical (e.g., CT) images can also be used as input to the machine learning module(s) for lesion detection and/or segmentation. For example, in certain embodiments, a 3D segmentation map identifying various tissue regions within the anatomical and/or functional image can also be used (e.g., received as input by the one or more machine learning modules, e.g., as a separate input channel) to provide anatomical context.

FIG. 1B shows an example process 100b in which both a 3D anatomical image 104 (such as a CT or MR image) and a 3D functional image 102 are received 108 and used as input to a machine learning module 112, which performs hotspot detection and/or segmentation 122 based on information (e.g., voxel intensities) from both the 3D anatomical image and the 3D functional image 102, as described herein. A hotspot list 130 and/or hotspot map 132 can be generated as output from the machine learning module and stored/provided for further processing (e.g., graphical rendering for display, subsequent operations by other software modules, etc.) 140.
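The two-input arrangement described above can be sketched as follows. Stacking co-registered volumes along a leading channel axis is the conventional way to present multiple image modalities to a convolutional network; the function name and array shapes here are illustrative assumptions, not taken from the document.

```python
import numpy as np

def make_two_channel_input(pet: np.ndarray, ct: np.ndarray) -> np.ndarray:
    """Stack co-registered PET (SUV) and CT (Hounsfield-unit) volumes into a
    single (channel, z, y, x) array, one modality per input channel."""
    if pet.shape != ct.shape:
        raise ValueError("PET and CT must be resampled to a common voxel grid")
    return np.stack([pet, ct], axis=0)

pet = np.full((32, 64, 64), 0.5)     # synthetic SUV volume
ct = np.full((32, 64, 64), -100.0)   # synthetic HU volume
x = make_two_channel_input(pet, ct)
print(x.shape)  # (2, 32, 64, 64)
```

A segmentation map, when used as additional anatomical context, would simply be stacked as a third channel in the same way.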

In certain embodiments, automated lesion detection and analysis (e.g., for inclusion in a report) comprises three tasks: (i) detecting hotspots corresponding to lesions; (ii) segmenting the detected hotspots (e.g., to identify, within the functional image, a 3D volume corresponding to each lesion); and (iii) classifying the detected hotspots as having a high or low probability of corresponding to a true lesion within the subject (e.g., and therefore as suitable or unsuitable for inclusion in a radiologist's report). In certain embodiments, one or more machine learning modules can be used to accomplish these three tasks, for example one after another (e.g., sequentially) or in combination. For example, in certain embodiments, a first machine learning module is trained to detect hotspots and identify hotspot locations, a second machine learning module is trained to segment hotspots, and a third machine learning module is trained to classify the detected hotspots, for example using information obtained from the other two machine learning modules.

For example, as shown in the example process 100c of FIG. 1C, a 3D functional image 102 can be received 106 and used as input to a first machine learning module 114 that performs automated hotspot detection. The first machine learning module 114 automatically detects one or more hotspots 124 in the 3D functional image and generates a hotspot list 130 as output. A second machine learning module 116 can receive the hotspot list 130 together with the 3D functional image as input and perform automated hotspot segmentation 126 to generate a hotspot map 132. As previously described, the hotspot map 132 and the hotspot list 130 can be stored and/or provided for further processing 140.

In certain embodiments, a single machine learning module is trained to directly segment hotspots within an image (e.g., a 3D functional image; e.g., to generate a 3D hotspot map identifying volumes corresponding to detected hotspots), thereby combining the first two steps of detecting and segmenting hotspots. A second machine learning module can then be used to classify the detected hotspots, for example based on the previously determined segmented hotspot volumes. In certain embodiments, a single machine learning module can be trained to accomplish all three tasks (detection, segmentation, and classification) in a single step.

iii. Lesion index values

In certain embodiments, a lesion index value is computed for a detected hotspot to provide, for example, a measure of the relative uptake within, and/or the size of, the corresponding physical lesion. In certain embodiments, the lesion index value for a particular hotspot is computed based on (i) a measure of the intensity of that hotspot and (ii) reference values that measure intensity within one or more reference volumes, each corresponding to a particular reference tissue region. For example, in certain embodiments, the reference values comprise an aorta reference value (also referred to as a blood pool reference), which measures intensity within an aorta volume corresponding to a portion of the aorta, and a liver reference value, which measures intensity within a liver volume corresponding to the subject's liver. In certain embodiments, the voxel intensities of the nuclear medicine image (e.g., PET image) represent standardized uptake values (SUVs) (e.g., calibrated for the injected radiopharmaceutical dose and/or the patient's weight), and the measures of hotspot intensity and/or the reference values are SUV values. The use of such reference values for computing lesion index values is described in further detail in, for example, PCT/EP2020/050132, filed January 6, 2020, the entire contents of which are incorporated herein by reference.

In certain embodiments, segmentation masks are used to identify particular reference volumes in, for example, a PET image. For a particular reference volume, a segmentation mask identifying that reference volume can be obtained via segmentation of an anatomical (e.g., CT) image. For example, in certain embodiments (e.g., as described in PCT/EP2020/050132), segmentation of the 3D anatomical image can be performed to generate a segmentation map comprising a plurality of segmentation masks, each identifying a particular tissue region of interest. One or more segmentation masks of a segmentation map generated in this manner can accordingly be used to identify one or more reference volumes.

In certain embodiments, to identify the voxels to be used for computing a corresponding reference value, the mask can be eroded by a fixed distance (e.g., at least one voxel) to create a reference organ mask that identifies a reference volume corresponding to a physical region lying entirely within the reference tissue region. For example, erosion distances of 3 mm and 9 mm can be used for the aorta and liver reference volumes, respectively. Other erosion distances can also be used. Additional mask refinement can also be performed (e.g., to select a particular, desired set of voxels for computing the reference value), for example as described below with respect to the liver reference volume.
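The fixed-distance erosion can be sketched as follows, assuming an isotropic voxel grid. The `erode_mask` function is an illustrative stand-in (a production implementation would typically use a library routine such as `scipy.ndimage.binary_erosion`); it removes one voxel layer per iteration, with the number of iterations derived from the erosion distance in mm.

```python
import numpy as np

def erode_mask(mask: np.ndarray, erosion_mm: float, voxel_size_mm: float) -> np.ndarray:
    """Erode a 3D binary mask by approximately `erosion_mm`, removing one
    voxel layer (6-connected neighborhood) per iteration."""
    eroded = mask.astype(bool)
    for _ in range(int(round(erosion_mm / voxel_size_mm))):
        shrunk = eroded.copy()
        for axis in range(3):
            # A voxel survives only if both face-neighbors along this axis
            # are also inside the mask.
            shrunk &= np.roll(eroded, 1, axis=axis)
            shrunk &= np.roll(eroded, -1, axis=axis)
        eroded = shrunk
    return eroded

# A 9x9x9 organ mask on 3 mm isotropic voxels, eroded by 3 mm (one layer).
mask = np.zeros((11, 11, 11), dtype=bool)
mask[1:10, 1:10, 1:10] = True
reference = erode_mask(mask, erosion_mm=3.0, voxel_size_mm=3.0)
print(int(mask.sum()), int(reference.sum()))  # 729 343
```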

Various measures of the intensity within a reference volume can be used. For example, in certain embodiments, a robust average of the voxel intensities inside the reference volume (e.g., as defined by the reference volume segmentation mask, following erosion) can be determined as the mean of those intensity values lying within the interquartile range (the IQR mean). Other measures (such as a peak, maximum, median, etc.) can also be determined. In certain embodiments, the aorta reference value is determined as a robust average of the SUVs of the voxels inside the aorta mask, computed as the mean of the values within the interquartile range (the IQR mean).
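The IQR mean described above can be sketched as follows; the SUV samples are synthetic, with one high outlier standing in for spill-in from adjacent high-uptake tissue.

```python
import numpy as np

def iqr_mean(values: np.ndarray) -> float:
    """Robust mean: average of the values lying within the interquartile
    range [Q1, Q3], so stray high or low intensities do not skew the result."""
    q1, q3 = np.percentile(values, [25, 75])
    return float(values[(values >= q1) & (values <= q3)].mean())

# Synthetic SUV samples from a reference volume, with one spill-in outlier.
suvs = np.array([0.9, 1.0, 1.0, 1.1, 1.1, 1.2, 1.2, 1.3, 9.0])
print(round(iqr_mean(suvs), 2))  # 1.1 (the outlier is ignored)
```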

In certain embodiments, a subset of the voxels within the reference volume is selected so as to avoid influence from portions of the reference tissue region that may have abnormally low radiopharmaceutical uptake. Although the automated segmentation techniques described and referenced herein can provide an accurate outline (e.g., identification) of the image region corresponding to a particular tissue region, regions of abnormally low uptake frequently occur in the liver, and such regions should be excluded from the reference value calculation. For example, a liver reference value (e.g., a liver SUV value) can be computed so as to avoid influence from regions in the liver with very low tracer (radiopharmaceutical) activity, which may occur, for example, due to tumors that lack tracer uptake. In certain embodiments, to account for the effect of abnormally low uptake in a reference tissue region, the liver reference value calculation analyzes a histogram of the intensities of the voxels corresponding to the liver (e.g., the voxels within the identified liver reference volume) and removes (e.g., excludes) intensities if they form a second, lower-intensity histogram peak, thereby including only the intensities associated with the higher-intensity peak.

For example, for the liver, the reference SUV can be computed as the mean SUV of the major component (also referred to as a "mode", e.g., the "major mode") of a two-component Gaussian mixture model fitted to the histogram of the SUVs of the voxels within the liver reference volume (e.g., as identified by the liver segmentation mask, e.g., following the erosion procedure described above). In certain embodiments, if the minor component has a mean SUV greater than that of the major component and has a weight of at least 0.33, an error is thrown and no reference value is determined for the liver. In certain embodiments, if the minor component has a mean greater than that of the major peak, the liver reference mask is kept as-is. Otherwise, a separation SUV threshold is computed. In certain embodiments, the separation threshold is defined such that the probability that an SUV at the threshold or greater belongs to the major component is the same as the probability that an SUV at the separation threshold or less belongs to the minor component. The reference liver mask is then refined by removing voxels with SUVs less than the separation threshold. The liver reference value can then be determined as a measure of the intensity (e.g., SUV) of the voxels identified by the refined liver reference mask, for example as described herein with respect to the aorta reference. FIG. 2A illustrates an example liver reference computation, showing a histogram of liver SUV values with the Gaussian mixture components (major component 244 and minor component 246) shown in red and the separation threshold 242 marked in green.
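The separation-threshold computation described above can be sketched as follows. The mixture parameters here are hypothetical stand-ins for a fitted two-component model (in practice the components would be estimated from the liver SUV histogram, e.g., by expectation-maximization with a routine such as scikit-learn's `GaussianMixture`); the threshold is located where the two weighted component densities, and hence the posterior membership probabilities, are equal.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def separation_threshold(w_minor, mu_minor, sd_minor, w_major, mu_major, sd_major):
    """Locate the SUV between the two component means at which the weighted
    component densities (posterior membership in the minor vs. the major
    component) are equal."""
    grid = np.linspace(mu_minor, mu_major, 10001)
    minor = w_minor * gaussian_pdf(grid, mu_minor, sd_minor)
    major = w_major * gaussian_pdf(grid, mu_major, sd_major)
    return float(grid[np.argmin(np.abs(minor - major))])

# Hypothetical fitted components: a low-uptake minor mode near SUV 2 and a
# healthy-liver major mode near SUV 6.
thr = separation_threshold(0.2, 2.0, 0.5, 0.8, 6.0, 1.0)  # ~3.25 for these parameters

# Refine the liver reference mask: drop voxels whose SUV is below the threshold.
liver_suvs = np.array([1.8, 2.1, 2.4, 5.2, 5.9, 6.1, 6.4, 7.0])
refined = liver_suvs[liver_suvs >= thr]
```

The liver reference value would then be computed from `refined` (e.g., as its IQR mean, as described for the aorta reference).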

FIG. 2B shows the resulting portion of the liver volume used for calculating the liver reference value, with the voxels corresponding to the lower-value peak excluded from the reference value calculation. The contours 252a and 252b of the refined liver volume mask are shown on each image in FIG. 2B, with voxels corresponding to the lower-value peak (e.g., with intensities below the separation threshold 242) excluded. As shown in the figure, lower-intensity regions toward the bottom of the liver, as well as regions near the edge of the liver, have been excluded.

FIG. 2C shows an example process 200 in which a multi-component mixture model is used to avoid influence from regions with low tracer uptake, as described herein with respect to the liver reference volume computation. The procedure shown in FIG. 2C and described herein with respect to the liver can also be applied analogously to compute intensity measures for regions of interest of other organs and tissues, such as the aorta (e.g., an aorta portion, such as a thoracic or abdominal aorta portion), the parotid gland, or the gluteal muscles. As shown in FIG. 2C and described herein, in a first step a 3D functional image 202 is received, and a reference volume corresponding to a particular reference tissue region (e.g., liver, aorta, parotid gland) is identified 208 within it. A multi-component mixture model 210 is then fitted to the distribution of intensities (e.g., an intensity histogram) within the reference volume, and the major mode 212 of the mixture model is identified. A measure of the intensity associated with the major mode is determined 214 (e.g., excluding contributions from intensities associated with other, minor modes) and used as the reference intensity value for the identified reference volume. In certain embodiments, as described herein, the measure of intensity associated with the major mode is determined by identifying a separation threshold, such that intensities above the separation threshold are determined to be associated with the major mode and intensities below the separation threshold are determined to be associated with minor modes. Voxels with intensities above the separation threshold are used to determine the reference intensity value, while voxels with intensities below the separation threshold are excluded from the reference intensity value calculation.

In certain embodiments, hotspots are detected 216, and the reference intensity values determined in this manner can be used to determine lesion index values 218 for the detected hotspots, for example via approaches such as those described in PCT/US2019/012486, filed January 7, 2019, and PCT/EP2020/050132, filed January 6, 2020, the entire contents of each of which are incorporated herein by reference.

iv. Suppressing intensity bleed associated with normal uptake in high-uptake organs

In certain embodiments, the voxel intensities of the functional image are adjusted so as to suppress/correct intensity bleed associated with particular organs in which high uptake occurs under normal circumstances. This approach can be used, for example, for organs such as the kidneys, liver, and urinary bladder. In certain embodiments, correction of intensity bleed associated with multiple organs is performed in a stepwise fashion, one organ at a time. For example, in certain embodiments, kidney uptake is suppressed first, then liver uptake, and then urinary bladder uptake. The input to the liver suppression step is accordingly the image in which kidney uptake has already been corrected (e.g., and the input to the bladder suppression step is the image in which kidney and liver uptake have been corrected).

FIG. 3 shows an example process 300 for correcting intensity bleed from high-uptake tissue regions. As shown in FIG. 3, a 3D functional image is received 304 and a high-intensity volume corresponding to a high-uptake tissue region is identified 306. In another step, a suppression volume outside the high-intensity volume is identified 308. In certain embodiments, as described herein, the suppression volume can be determined as a volume enclosing the region lying outside the high-intensity volume but within a predetermined distance of it. In another step, a background image is determined 310, for example by assigning, to voxels within the high-intensity volume, intensities determined from the intensities outside the high-intensity volume (e.g., within the suppression volume), for example via interpolation (e.g., using convolution). In another step, an estimation image is determined 312 by subtracting the background image from the 3D functional image (e.g., via voxel-wise intensity subtraction). In another step, a suppression map is determined 314. As described herein, in certain embodiments, the suppression map is determined using the estimation image by extrapolating the intensity values of the voxels within the high-intensity volume to locations outside the high-intensity volume. In certain embodiments, intensities are extrapolated only to locations within the suppression volume, and the intensities of voxels outside the suppression volume are set to 0. The suppression map is then used to adjust the intensities of the 3D functional image 316, for example by subtracting the suppression map from the 3D functional image (e.g., performing voxel-wise intensity subtraction).

An example procedure for suppressing/correcting intensity bleed from particular organs (in certain embodiments, the kidneys are treated together) for a composite PET/CT image is as follows:
1.     Adjust the projected CT organ mask segmentation to the high-intensity region of the PET image, in order to handle PET/CT misalignment. If the PET-adjusted organ mask is smaller than 10 pixels, no suppression is performed for that organ.
2.     Compute a "background image" in which all high uptake within a decay distance of the PET-adjusted organ mask is replaced by interpolated background uptake. This is done using convolution with a Gaussian kernel.
3.     Compute the intensities to be considered when estimating suppression as the difference between the input PET image and the background image. This "estimation image" has high intensities inside the given organ and zero intensity at locations more than the decay distance away from the given organ.
4.     Estimate a suppression map from the estimation image using an exponential model. The suppression map is non-zero only in the region within the decay distance of the PET-adjusted organ segmentation.
5.     Subtract the suppression map from the original PET image.
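A minimal sketch of steps 2 through 5 above, assuming the PET-adjusted organ mask and the suppression volume (voxels within the decay distance of the organ) are already available. The Gaussian-kernel interpolation and the exponential extrapolation are simplified here to a crude mean filter and a clipped estimation image, so this illustrates the structure of the procedure rather than the exact model.

```python
import numpy as np

def box_blur(img: np.ndarray, r: int = 2) -> np.ndarray:
    """Crude separable mean filter standing in for the Gaussian-kernel
    convolution used for background interpolation."""
    out = img.astype(float)
    for axis in range(img.ndim):
        acc = np.zeros_like(out)
        for s in range(-r, r + 1):
            acc += np.roll(out, s, axis=axis)
        out = acc / (2 * r + 1)
    return out

def suppress_intensity_bleed(pet, organ_mask, suppression_volume):
    """Steps 2-5: interpolate a background under the organ, form the
    estimation image, keep it only inside the suppression volume, and
    subtract the resulting suppression map from the PET image."""
    # Step 2: background image, with organ intensities replaced by
    # (smoothed) surrounding background uptake.
    background_fill = pet[~organ_mask].mean()
    background = box_blur(np.where(organ_mask, background_fill, pet))
    # Step 3: estimation image, the intensity attributable to the organ.
    estimation = pet - background
    # Step 4: suppression map, here simply the clipped estimation image,
    # zeroed outside the suppression volume (no exponential extrapolation).
    suppression = np.where(suppression_volume, np.clip(estimation, 0.0, None), 0.0)
    # Step 5: subtract the suppression map from the original PET image.
    return pet - suppression

# Synthetic example: uniform SUV 1.0 background with a bright "kidney".
pet = np.ones((12, 12, 12))
organ = np.zeros(pet.shape, dtype=bool)
organ[4:8, 4:8, 4:8] = True
pet[organ] = 10.0
halo = np.zeros(pet.shape, dtype=bool)
halo[3:9, 3:9, 3:9] = True  # organ plus a one-voxel decay-distance shell
corrected = suppress_intensity_bleed(pet, organ, halo)
```

Voxels outside the suppression volume are left untouched, matching the requirement that the suppression map be non-zero only within the decay distance of the organ.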

As described above, these five steps can be repeated in a sequential manner for each organ in a set of multiple organs.

v. Anatomical labeling of detected lesions

In certain embodiments, detected hotspots are (e.g., automatically) assigned anatomical labels identifying the particular anatomical region, and/or group of regions, in which the lesion represented by the detected hotspot is determined to be located. For example, as shown in the example process 400 of FIG. 4, a 3D functional image can be received 404 and used to automatically detect hotspots 406, for example via any of the approaches described herein. Once hotspots have been detected, an anatomical classification can be automatically determined 408 for each hotspot, and each hotspot labeled with its determined anatomical classification. Automated anatomical labeling can be performed, for example, using the automatically determined locations of the detected hotspots together with anatomical information provided by, for example, a 3D segmentation map identifying image regions corresponding to particular tissue regions and/or an anatomical image. The hotspots and the anatomical label of each hotspot can be stored and/or provided for further processing 410.

For example, detected hotspots can be automatically classified into the following five categories:
●   T (prostate tumor)
●   N (pelvic lymph node)
●   Ma (non-pelvic lymph node)
●   Mb (bone metastasis)
●   Mc (soft-tissue metastasis not located in the prostate or a lymph node)

Table 1 below lists the tissue regions associated with each of the five categories. A hotspot located within any of the tissue regions associated with a particular category can accordingly be automatically assigned to that category.

Table 1. Tissue regions corresponding to the five categories of the lesion anatomical labeling approach

| Bone (Mb)      | Lymph nodes (Ma)       | Pelvic lymph nodes (N) | Prostate (T) | Soft tissue (Mc) |
|----------------|------------------------|------------------------|--------------|------------------|
| Skull          | Cervical               | Template right         | Prostate     | Brain            |
| Thorax         | Supraclavicular        | Template left          |              | Neck             |
| Lumbar spine   | Axillary               | Presacral              |              | Lung             |
| Thoracic spine | Mediastinal            | Other, pelvic          |              | Esophagus        |
| Pelvis         | Hilar                  |                        |              | Liver            |
| Extremities    | Mesenteric             |                        |              | Gallbladder      |
|                | Elbow                  |                        |              | Spleen           |
|                | Popliteal              |                        |              | Pancreas         |
|                | Periaortic/para-aortic |                        |              | Adrenal glands   |
|                | Other, non-pelvic      |                        |              | Kidneys          |
|                |                        |                        |              | Bladder          |
|                |                        |                        |              | Skin             |
|                |                        |                        |              | Muscle           |
|                |                        |                        |              | Other            |

vi. Graphical user interface, quality control, and reporting
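The table-based assignment above can be sketched as a simple lookup. The region keys here are hypothetical identifiers standing in for whatever names the 3D segmentation map uses for the tissue regions of Table 1.

```python
# Hypothetical region identifiers; in practice these would come from the
# 3D segmentation map used for anatomical labeling.
CATEGORY_BY_REGION = {
    # Bone (Mb)
    "skull": "Mb", "thorax": "Mb", "lumbar_spine": "Mb", "thoracic_spine": "Mb",
    "pelvis": "Mb", "extremities": "Mb",
    # Non-pelvic lymph nodes (Ma)
    "cervical": "Ma", "supraclavicular": "Ma", "axillary": "Ma",
    "mediastinal": "Ma", "hilar": "Ma", "mesenteric": "Ma", "elbow": "Ma",
    "popliteal": "Ma", "periaortic": "Ma", "other_non_pelvic": "Ma",
    # Pelvic lymph nodes (N)
    "template_right": "N", "template_left": "N", "presacral": "N",
    "other_pelvic": "N",
    # Prostate (T)
    "prostate": "T",
    # Soft tissue (Mc)
    "brain": "Mc", "neck": "Mc", "lung": "Mc", "esophagus": "Mc",
    "liver": "Mc", "gallbladder": "Mc", "spleen": "Mc", "pancreas": "Mc",
    "adrenal_glands": "Mc", "kidneys": "Mc", "bladder": "Mc", "skin": "Mc",
    "muscle": "Mc", "other": "Mc",
}

def classify_hotspot(region: str) -> str:
    """Map the tissue region containing a hotspot to its category."""
    return CATEGORY_BY_REGION[region]

print(classify_hotspot("presacral"))  # N
```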

In certain embodiments, the detected hotspots and associated information (such as computed lesion index values and anatomical labels) are displayed in an interactive graphical user interface (GUI) for review by a medical professional (such as a physician, radiologist, technician, etc.). The medical professional can accordingly use the GUI to review and confirm the accuracy of the detected hotspots as well as the corresponding index values and/or anatomical labels. In certain embodiments, the GUI can also allow the user to identify and segment (e.g., manually) additional hotspots within the medical image, allowing the medical professional to identify additional potential lesions that he/she believes the automated detection procedure may have missed. Once identified, lesion index values and/or anatomical labels can also be determined for these manually identified and segmented lesions. For example, as indicated in FIG. 5B, the user can review the location determined for each hotspot, as well as anatomical labels, such as an (e.g., automatically determined) miTNM classification. The miTNM classification scheme is described in further detail in Eiber et al., "Prostate Cancer Molecular Imaging Standardized Evaluation (PROMISE): Proposed miTNM Classification for the Interpretation of PSMA-Ligand PET/CT", J. Nucl. Med., vol. 59, pp. 469-78 (2018), the entire contents of which are incorporated herein by reference. Once the user is satisfied with the set of detected hotspots and the information computed from them, they can confirm their approval and generate a final, signed report, which can be reviewed and used to discuss results and diagnosis with the patient and to evaluate prognosis and treatment options.

For example, as shown in FIG. 5A, in an example process 500 for interactive hotspot review and detection, a 3D functional image is received 504 and hotspots are automatically detected 506, for example using any of the automated detection approaches described herein. The set of automatically detected hotspots is graphically represented and rendered 508 within an interactive GUI for review by the user. The user can select at least a portion (e.g., up to all) of the automatically detected hotspots for inclusion in a final hotspot set 510, which can then be used for further calculations 512, for example to determine a risk index value for the patient.

FIG. 5B shows an example workflow 520 for user review of detected lesions and of the lesion index values used for quality control and reporting. The example workflow allows for user review of the segmented lesions, as well as of the liver and aorta segmentations used for calculating lesion index values, as described herein. For example, in a first step, the user reviews the quality of the image (e.g., CT image) 522 and the accuracy of the automated segmentations used to obtain the liver and blood pool (e.g., aorta) reference values 524. As shown in FIGS. 6A and 6B, the GUI allows the user to evaluate the images and overlaid segmentations to ensure that the automated segmentation of the liver (602, purple in FIG. 6A) lies within healthy liver tissue and that the automated segmentation of the blood pool (aorta portion 604, shown in salmon in FIG. 6B) lies within the aorta and left ventricle.

In another step 526, the user verifies the automatically detected hotspots and/or identifies additional hotspots, for example to establish a final set of hotspots corresponding to lesions for inclusion in the generated report. As shown in FIG. 6C, the user can select an automatically identified hotspot (e.g., displayed as an overlay and/or labeled region on the PET and/or CT image) by hovering the mouse over the graphical representation of the hotspot displayed within the GUI. To facilitate hotspot selection, the particular hotspot hovered over can be indicated to the user via a color change (e.g., turning green). The user can then click on the hotspot to select it, which can be visually confirmed to the user via another color change. For example, as shown in FIG. 6C, upon selection the hotspot turns pink. Following user selection, quantitatively determined values (such as a lesion index and/or anatomical label) can be displayed to the user, allowing him or her to verify the automatically determined values 528.

In certain embodiments, the GUI allows the user to select hotspots from the set of (automatically) pre-identified hotspots to confirm that they indeed represent lesions 526a, and also to identify additional hotspots, corresponding to lesions, that were not automatically detected 526b.

As shown in FIGS. 6D and 6E, the user can use GUI tools to draw on slices of an image (e.g., a PET image and/or a CT image; e.g., a PET image overlaid on a CT image) to mark a region corresponding to a new, manually identified lesion. Quantitative information (such as a lesion index and/or anatomical label) can be determined automatically for a manually identified lesion, or can be entered manually by the user.

In another step, for example once the user has selected and/or manually identified all lesions, the GUI displays a quality control checklist for the user to review 530, as shown in FIG. 7. Once the user has reviewed and completed the checklist, they can click "Create Report" to sign and generate the final report 532. An example of a generated report is shown in FIG. 8.

C. Example Machine Learning Network Architectures for Lesion Segmentation

i. Machine learning module inputs and architecture

Referring to FIG. 9, which shows an example hotspot detection and segmentation process 900 according to some embodiments, hotspot detection and/or segmentation is performed by a machine learning module 908 that receives as input a functional image 902 and an anatomical image 904, along with a segmentation map 906 that provides, for example, segmentations of various tissue regions (such as soft tissue and bone, as well as various organs as described herein).

The functional image 902 may be a PET image. As described herein, the voxel intensities of the functional image 902 may be scaled to represent SUV values. In certain embodiments, other functional images as described herein may also be used. The anatomical image 904 may be a CT image. In certain embodiments, the voxel intensities of the CT image 904 are scaled to represent Hounsfield units. In certain embodiments, other anatomical images as described herein may be used.
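The SUV scaling mentioned above can be illustrated with a minimal sketch of the standard body-weight SUV conversion. This is not the patent's implementation; it assumes the PET volume is already expressed as a (decay-corrected) activity concentration in Bq/mL, and the function name `suv_bw` is our own.

```python
import numpy as np

def suv_bw(activity_bq_per_ml, injected_dose_bq, body_weight_g):
    """Scale a PET activity-concentration volume (Bq/mL) to body-weight SUV.

    SUV_bw = concentration / (injected dose / body weight), so a voxel whose
    concentration equals the average whole-body concentration maps to SUV = 1.
    """
    return np.asarray(activity_bq_per_ml, dtype=float) * body_weight_g / injected_dose_bq

# A 70 kg subject injected with 200 MBq; a voxel at ~2857 Bq/mL is ~SUV 1.
pet = np.array([[2857.14, 5714.28]])
suv = suv_bw(pet, injected_dose_bq=200e6, body_weight_g=70e3)
```

Hounsfield scaling of the CT volume is analogous, typically applying the linear rescale slope and intercept stored with the image data.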

In some embodiments, the machine learning module 908 implements a machine learning algorithm using a U-Net architecture. In some embodiments, the machine learning module 908 implements a machine learning algorithm using a feature pyramid network (FPN) architecture. In some embodiments, various other machine learning architectures may be used to detect and/or segment lesions. In certain embodiments, a machine learning module as described herein performs semantic segmentation. In certain embodiments, a machine learning module as described herein performs instance segmentation, for example so as to distinguish one lesion from another.

In some embodiments, the three-dimensional segmentation map 906 received as input by the machine learning module identifies various volumes in the received 3D anatomical and/or functional images (e.g., via a plurality of 3D segmentation masks) as corresponding to particular tissue regions of interest, such as particular organs (e.g., the prostate, liver, aorta, bladder, the various other organs described herein, etc.) and/or bone. Additionally or alternatively, the machine learning module may receive a 3D segmentation map 906 that identifies groups of tissue regions. For example, in some embodiments, a 3D segmentation map identifying soft-tissue regions, bone, and a background region may be used. In some embodiments, the 3D segmentation map may identify a group of high-uptake organs in which high levels of radiopharmaceutical uptake occur. For example, a high-uptake organ group may include the liver, spleen, kidneys, and urinary bladder. In some embodiments, the 3D segmentation map identifies a group of high-uptake organs along with one or more other organs, such as the aorta (e.g., a low-uptake soft-tissue organ). Other groupings of tissue regions may also be used.

The functional image, anatomical image, and segmentation map inputs to the machine learning module 908 may have various sizes and dimensions. For example, in certain embodiments, each of the functional image, anatomical image, and segmentation map is a patch of a three-dimensional image (e.g., represented by a three-dimensional matrix). In some embodiments, each of the patches has the same size, for example a [32 x 32 x 32] or [64 x 64 x 64] voxel patch for each input.
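One simple way to realize the fixed-size, co-registered patch inputs described above is plain array slicing. A minimal sketch, with illustrative names and random stand-in volumes (the patent does not specify how patches are sampled):

```python
import numpy as np

def extract_patch(volume, corner, size=32):
    """Return a size^3 sub-volume whose lowest-index corner is `corner`."""
    z, y, x = corner
    return volume[z:z + size, y:y + size, x:x + size]

# Stand-in PET, CT, and segmentation-map volumes of identical shape.
pet = np.random.rand(64, 64, 64)
ct = np.random.rand(64, 64, 64)
seg = np.zeros((64, 64, 64), dtype=np.int64)

# Using the same corner for every modality keeps the patches co-registered.
corner = (16, 16, 16)
patches = [extract_patch(v, corner, size=32) for v in (pet, ct, seg)]
```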

The machine learning module 908 segments hotspots and generates a 3D hotspot map 910 identifying one or more hotspot volumes. For example, the 3D hotspot map 910 may comprise one or more masks having the same size as one or more of the functional image, anatomical image, or segmentation map inputs and identifying one or more hotspot volumes. In this way, the 3D hotspot map 910 can be used to identify volumes within the functional image, anatomical image, or segmentation map that correspond to hotspots and, accordingly, to physical lesions.

In some embodiments, the machine learning module 908 segments hotspot volumes so as to distinguish background (i.e., non-hotspot) regions from hotspot volumes. For example, the machine learning module 908 may be a binary classifier that classifies voxels as background or as belonging to a single hotspot class. Accordingly, the machine learning module 908 may produce as output a class-agnostic (e.g., or "single-class") 3D hotspot map that identifies hotspot volumes but does not distinguish between the different anatomical locations and/or lesion types (e.g., bone metastasis, lymph node, local prostate) that a particular hotspot volume may represent. In some embodiments, the machine learning module 908 segments hotspot volumes and also classifies hotspots according to a plurality of hotspot classes, each hotspot class representing a particular anatomical location and/or lesion type represented by the hotspot. In this way, the machine learning module 908 can directly produce a multi-class 3D hotspot map that identifies one or more hotspot volumes and labels each hotspot volume as belonging to a particular one of the plurality of hotspot classes. For example, detected hotspots may be classified as bone metastases, lymph nodes, or prostate lesions. In some embodiments, other soft-tissue classifications may be included.
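The relationship between the multi-class and single-class outputs described above can be sketched as follows. The per-voxel class probabilities are a hypothetical network output (the patent does not specify the output head); the class numbering is illustrative only.

```python
import numpy as np

# Hypothetical network output: per-voxel scores over 4 classes
# (0 = background, 1 = bone, 2 = lymph, 3 = prostate) for a 2x2x2 patch.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 2, 2, 2))
probs = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)  # softmax

labels = probs.argmax(axis=0)           # multi-class 3D hotspot map
binary = (labels > 0).astype(np.uint8)  # collapse to single-class: hotspot vs background
```

A single-class module would instead train the binary target directly, which, per the text, can yield more accurate hotspot boundaries at the cost of losing the class labels.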

This classification may be performed in addition to, or instead of, classifying hotspots according to their likelihood of representing true lesions, as described herein, for example, in section B.ii.

ii. Lesion classification post-processing and/or output

Referring to FIG. 10A and FIG. 10B, in some embodiments, after hotspot detection and/or segmentation is performed in an image by one or more machine learning modules (with the understanding that physical lesions appear as hotspots in, for example, functional images, the terms "lesion" and "hotspot" are used interchangeably in FIG. 9 through FIG. 12), post-processing 1000 is performed to label hotspots as belonging to particular hotspot classes. For example, detected hotspots may be classified as bone metastases, lymph nodes, or prostate lesions. In some embodiments, the labeling scheme of Table 1 may be used. In some embodiments, this labeling may be performed by a machine learning module, which may be the same machine learning module used to perform hotspot segmentation and/or detection, or may be a separate module that receives as input a list of detected hotspots (e.g., identifying their locations) and/or a 3D hotspot map (e.g., delineating hotspot boundaries as determined via segmentation), either alone or together with other inputs (such as the 3D functional image, the 3D anatomical image, and/or a segmentation map, as described herein). As shown in FIG. 10B, in some embodiments, the segmentation map 906 used as input to the machine learning module 908 to perform lesion detection and/or segmentation may also be used, for example, to classify lesions according to anatomical location. In some embodiments, other (e.g., different) segmentation maps may be used (e.g., not necessarily the same segmentation map that is fed into the machine learning module as input).

iii. Parallel organ-specific lesion detection modules

Referring to FIG. 11A and FIG. 11B, in some embodiments, the one or more machine learning modules include one or more organ-specific modules that perform detection and/or segmentation of hotspots localized in a corresponding organ. For example, as shown in the example processes 1100 and 1120 of FIG. 11A and FIG. 11B, respectively, a prostate module 1108a may be used to perform detection and/or segmentation in a prostate region. In some embodiments, one or more organ-specific modules are used in combination with a whole-body module 1108b that detects and/or segments hotspots throughout the subject's entire body. In some embodiments, the results 1110a of the one or more organ-specific modules are merged with the results 1110b from the whole-body module to form a final hotspot list and/or hotspot map 1112. In some embodiments, merging may comprise combining the results (e.g., hotspot lists and/or 3D hotspot maps) 1110a and 1110b with other outputs, such as a 3D hotspot map 1114 created by segmenting hotspots using other approaches, which may include other machine learning modules and/or techniques, as well as other segmentation methods. In some embodiments, after hotspots are detected and/or segmented by the one or more machine learning modules, an additional segmentation method may be performed. This additional segmentation step may use, for example, the hotspot segmentation and/or detection results obtained from the one or more machine learning modules as input. In certain embodiments, as shown in FIG. 11B, an analytical segmentation method 1122 as described herein, for example in section C.iv below, may be used together with the organ-specific lesion detection modules. Analytical segmentation 1122 uses the results 1110b and 1110a from the upstream machine learning modules 1108b and 1108a, together with the PET image 1102, to segment hotspots using analytical segmentation techniques (e.g., that do not utilize machine learning) and to create an analytically segmented 3D hotspot map 1124.

iv. Analytical segmentation

Referring to FIG. 12, in some embodiments, machine learning techniques may be used to perform hotspot detection and/or an initial segmentation, and, for example as a subsequent step, an analytical model is used to perform the final segmentation of each hotspot.

As used herein, the terms "analytical model" and "analytical segmentation" refer to segmentation approaches based on (e.g., using) predetermined rules and/or functions (e.g., mathematical functions). For example, in certain embodiments, an analytical segmentation method may segment hotspots using one or more predetermined rules, such as an ordered sequence of image processing steps, the application of one or more mathematical functions to an image, conditional logic branches, and the like. Analytical segmentation methods may include, but are not limited to, thresholding-based methods (e.g., comprising an image thresholding step), level-set methods (e.g., fast marching methods), graph-cut methods (e.g., watershed segmentation), or active contour models. In certain embodiments, an analytical segmentation method does not rely on a training step. In contrast, in certain embodiments, a machine learning model segments hotspots using a model that has been automatically trained to pre-segment hotspots using a training data set (e.g., comprising examples of images and hotspots segmented manually, for example by a radiologist or other practitioner), and aims to mimic the segmentation behavior of the training set.
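As a concrete illustration of a rule-based (analytical) segmentation of the kind named above, the following sketch grows a 6-connected region around a detected seed voxel, keeping voxels whose SUV is at least a fixed fraction of the seed's value. This is a generic fixed-threshold region growing, not the patent's specific adaptive method; the function name and parameters are our own.

```python
import numpy as np
from collections import deque

def threshold_segment(suv, seed, fraction=0.5):
    """Rule-based hotspot segmentation sketch: starting from a seed voxel,
    collect the 6-connected region of voxels whose SUV is at least
    `fraction` times the seed's SUV."""
    threshold = fraction * suv[seed]
    region = np.zeros(suv.shape, dtype=bool)
    region[seed] = True
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= c < s for c, s in zip(n, suv.shape)) \
                    and not region[n] and suv[n] >= threshold:
                region[n] = True
                queue.append(n)
    return region

suv = np.zeros((5, 5, 5))
suv[2, 2, 2] = 8.0   # hotspot peak (the seed)
suv[2, 2, 3] = 5.0   # above half-max and connected to the peak: included
suv[0, 0, 0] = 6.0   # above half-max but disconnected: excluded
region = threshold_segment(suv, seed=(2, 2, 2))
```

Every step here is a predetermined rule (a threshold plus a connectivity criterion), which is what makes such a method straightforward to inspect and debug relative to a learned model.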

Using an analytical segmentation model to determine the final segmentation may be advantageous, for example, because in certain situations analytical models can be easier to understand and debug than machine learning approaches. In some embodiments, such analytical segmentation methods may operate on the 3D functional image together with the lesion segmentations produced by the machine learning techniques.

For example, as shown in FIG. 12, in an example process 1200 for hotspot segmentation using an analytical model, a machine learning module 1208 receives as input a PET image 1202, a CT image 1204, and a segmentation map 1206. The machine learning module 1208 performs segmentation to create a 3D hotspot map 1210 identifying one or more hotspot volumes. An analytical segmentation model 1212 uses the 3D hotspot map 1210 produced by the machine learning module, together with the PET image 1202, to perform segmentation and create a 3D hotspot map 1214 identifying the analytically segmented hotspot volumes.

v. Illustrative hotspot segmentation

FIG. 13A and FIG. 13B show examples of machine learning module architectures for hotspot detection and/or segmentation. FIG. 13A shows an example U-Net architecture (the "N =" values in parentheses in FIG. 13A identify the number of filters in each layer), and FIG. 13B shows an example FPN architecture. FIG. 13C shows another example FPN architecture.

FIG. 14A through FIG. 14C show example results of hotspot segmentation obtained using a machine learning module implementing a U-Net architecture. The crosshairs and bright spots in the images indicate segmented hotspots 1402 (representing potential lesions). FIG. 15A and FIG. 15B show example hotspot segmentation results obtained using a machine learning module implementing an FPN. In particular, FIG. 15A shows an input PET image overlaid on a CT image. FIG. 15B shows an example hotspot map, determined using a machine learning module implementing an FPN, overlaid on a CT image. The overlaid hotspot map shows, in dark red, a hotspot volume 1502 near the subject's spine.

D. Example graphical user interfaces

In certain embodiments, the lesion detection, segmentation, classification, and related techniques described herein may include GUIs that facilitate user interaction (e.g., with software programs implementing the various methods described herein) and/or review of results. For example, in certain embodiments, GUI portions and windows allow users, among other things, to upload and manage data to be analyzed, to visualize images and results generated via the methods described herein, and to generate reports summarizing findings. Screenshots of certain example GUI views are shown in FIG. 16A through FIG. 16E.

For example, FIG. 16A shows an example GUI window that provides for the upload and review, by a user, of studies [e.g., image data collected during the same examination and/or scan (e.g., according to the Digital Imaging and Communications in Medicine (DICOM) standard), such as PET images and CT images collected via a PET/CT scan]. In certain embodiments, uploaded studies are automatically added to a patient list that lists identifiers of subjects/patients for whom one or more PET/CT images have been uploaded. For each entry in the patient list shown in FIG. 16, a patient ID is displayed along with the PET/CT studies available for that patient, as well as corresponding reports. In certain embodiments, a team concept allows the creation of groups (e.g., teams) of multiple users who work on a particular subset of the uploaded data and are provided access to that particular subset. In certain embodiments, a patient list may be associated with a particular team and automatically shared with that team, so as to provide each member of the team with access to the patient list.

FIG. 16B shows an example GUI viewer 1610 that allows a user to review medical image data. In certain embodiments, the viewer is a multimodal viewer, allowing the user to review multiple imaging modalities, as well as various formats and/or combinations thereof. For example, the viewer shown in FIG. 16B allows the user to review PET and/or CT images, as well as fusions (e.g., overlays) thereof. In certain embodiments, the viewer allows the user to review 3D medical image data in various formats. For example, the viewer may allow the user to select and review various 2D slices along particular (e.g., selected) cross-sectional planes of a 3D image. In certain embodiments, the viewer allows the user to review a maximum intensity projection (MIP) of the 3D image data. Other ways of visualizing 3D image data may also be provided. In this example, as shown in FIG. 16B, a control panel GUI widget set 1612 is provided on the left-hand side of the viewer and allows the user to review available study information (such as dates, various patient data, imaging parameters, etc.).

Referring to FIG. 16C, in certain embodiments, the GUI viewer includes a lesion selection tool that allows a user to select, as lesion volumes, volumes of interest (VOIs) of the image that the user identifies and selects as, for example, likely to represent true underlying physical lesions. In certain embodiments, lesion volumes are selected from a set of hotspot volumes that were automatically identified and segmented, for example via any of the approaches described herein. Selected lesion volumes may be saved for inclusion in a final set of identified lesion volumes, which may be used for reporting and/or further quantitative analysis. In certain embodiments, for example as shown in FIG. 16C, upon user selection of a particular lesion volume, various characteristics/quantitative measures of the particular lesion are displayed 1614 [e.g., maximum intensity, peak intensity, mean intensity, volume, lesion index (LI), anatomical classification (e.g., miTNM class, location, etc.), and the like].
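Several of the quantitative measures listed above follow directly from a hotspot mask and the SUV volume. A minimal sketch (the voxel spacing, function name, and dictionary keys are illustrative; the lesion index computation is not specified here, so it is omitted):

```python
import numpy as np

def lesion_measures(suv, mask, voxel_dims_mm=(2.0, 2.0, 2.0)):
    """Per-lesion summary statistics analogous to the displayed fields:
    SUV maximum, SUV mean, and lesion volume in millilitres."""
    vals = suv[mask]
    voxel_ml = np.prod(voxel_dims_mm) / 1000.0   # mm^3 -> mL
    return {"suv_max": float(vals.max()),
            "suv_mean": float(vals.mean()),
            "volume_ml": float(mask.sum() * voxel_ml)}

suv = np.zeros((3, 3, 3))
suv[1, 1, 1], suv[1, 1, 2] = 6.0, 4.0   # a two-voxel lesion
mask = suv > 0
m = lesion_measures(suv, mask)
```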

Referring to FIG. 16D, additionally or alternatively, the GUI viewer may allow a user to review the results of automated segmentation performed according to the various embodiments described herein. Segmentation may be performed via automated analysis of CT images as described herein, and may include the identification and segmentation of 3D volumes representing the liver and/or aorta. Segmentation results may be overlaid on a representation of the medical image data, such as on a CT and/or PET image representation.

FIG. 16E shows an example report 1620 generated via analysis of medical image data as described herein. In this example, the report 1620 summarizes the results of the reviewed study and provides characteristics and quantitative measures characterizing the selected (e.g., by the user) lesion volumes 1622. For example, as shown in FIG. 16E, for each selected lesion volume the report includes a lesion ID, a lesion type (e.g., miTNM classification), a lesion location, an SUV maximum, an SUV peak, an SUV mean, a volume, and a lesion index value.

E. Hotspot segmentation and classification using multiple machine learning modules

In certain embodiments, multiple machine learning modules are used in parallel to segment and classify hotspots. FIG. 17A is a block flow diagram of an example process 1700 for segmenting and classifying hotspots. The example process 1700 performs image segmentation on 3D PET/CT images to segment hotspot volumes and, in particular, classifies each segmented hotspot volume as a lymph, bone, or prostate hotspot according to its (automatically) determined anatomical location.

The example process 1700 receives as input, and operates on, a 3D PET image 1702 and a 3D CT image 1704. The CT image 1704 is input to a first, organ segmentation machine learning module 1706, which performs segmentation to identify 3D volumes in the CT image that represent particular tissue regions and/or organs of interest, or anatomical groups of multiple (e.g., related) tissue regions and/or organs. The organ segmentation machine learning module 1706 is accordingly used to generate a 3D segmentation map 1708 that identifies particular tissue regions and/or organs of interest, or anatomical groups thereof, within the CT image. For example, in certain embodiments, the segmentation map 1708 identifies two volumes of interest corresponding to two anatomical groups of organs: one volume of interest corresponding to an anatomical group of high-uptake soft-tissue organs comprising the liver, spleen, kidneys, and urinary bladder, and a second volume of interest corresponding to the aorta (e.g., its thoracic and abdominal portions) as a low-uptake soft-tissue organ. In certain embodiments, the organ segmentation machine learning module 1706 produces as output an initial segmentation map that identifies various individual organs (including those that make up the anatomical groups of segmentation map 1708 and, in certain embodiments, other individual organs), and the segmentation map 1708 is built from this initial segmentation map (e.g., by assigning the same label to the volumes of the individual organs corresponding to an anatomical group). Accordingly, in certain embodiments, the 3D segmentation map 1708 uses three labels that identify and distinguish between (i) voxels belonging to the high-uptake soft-tissue organs, (ii) the low-uptake soft-tissue organ, i.e., the aorta, and (iii) other regions, as background.
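Collapsing a per-organ segmentation map into the three-label group map described above is a simple label remapping. A minimal sketch, in which the organ ids and the mapping table are illustrative assumptions:

```python
import numpy as np

# Hypothetical organ ids in the initial per-organ segmentation map.
LIVER, SPLEEN, KIDNEY, BLADDER, AORTA = 1, 2, 3, 4, 5
GROUP = {LIVER: 1, SPLEEN: 1, KIDNEY: 1, BLADDER: 1, AORTA: 2}  # 0 = background

def group_organs(initial_map):
    """Collapse per-organ labels into a three-label map:
    1 = high-uptake soft-tissue organs, 2 = aorta, 0 = background."""
    out = np.zeros_like(initial_map)
    for organ_id, group_id in GROUP.items():
        out[initial_map == organ_id] = group_id
    return out

initial = np.array([[0, LIVER],
                    [AORTA, KIDNEY]])
grouped = group_organs(initial)
```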

In the example process 1700 shown in FIG. 17A, the organ segmentation machine learning module 1706 implements a U-Net architecture. Other architectures (e.g., FPN) may be used. The PET image 1702, the CT image 1704, and the 3D segmentation map 1708 are used as inputs to two parallel hotspot segmentation modules.

In certain embodiments, the example process 1700 uses two machine learning modules in parallel to segment and classify hotspots in different ways, and then merges their results. For example, it was found that a machine learning module performs more accurate segmentation when it identifies only a single hotspot class, i.e., whether or not an image region is a hotspot, rather than the multiple desired hotspot classes (lymph, bone, prostate). Accordingly, the process 1700 utilizes a first, single-class hotspot segmentation module 1712 to perform accurate segmentation and a second, multi-class hotspot segmentation module 1714 to classify hotspots into the three desired classes.

In particular, the first, single-class hotspot segmentation module 1712 performs segmentation to produce a first, single-class 3D hotspot map 1716 that identifies 3D volumes representing lesions (with other image regions identified as background). The single-class hotspot segmentation module 1712 thus performs a binary classification, labeling image voxels as belonging to one of two classes (background, or a single hotspot class). The second, multi-class hotspot segmentation module 1714 segments hotspots and assigns to each segmented hotspot volume one of a plurality of hotspot classification labels, as opposed to using a single hotspot class. In particular, the multi-class hotspot segmentation module 1714 classifies segmented hotspot volumes as lymph, bone, or prostate hotspots. The multi-class hotspot segmentation module thus produces a second, multi-class 3D hotspot map 1718 that identifies 3D volumes representing hotspots and labels them as lymph, bone, or prostate (with other image regions identified as background). In process 1700, the single-class and multi-class hotspot segmentation modules each implement an FPN architecture. Other machine learning architectures (e.g., U-Net) may be used.

In certain embodiments, to produce a final 3D hotspot map of segmented and classified hotspots 1724, the single-class hotspot map 1716 and the multi-class hotspot map 1718 are merged 1722. In particular, each hotspot volume of the single-class hotspot map 1716 is compared with the hotspot volumes of the multi-class hotspot map 1718 to identify matching hotspot volumes that represent the same physical location and, accordingly, the same (potential) physical lesion. Matching hotspot volumes may be identified, for example, based on various measures of spatial overlap (e.g., percentage volume overlap), proximity (e.g., centers of mass within a threshold distance), and the like. A hotspot volume of the single-class hotspot map 1716 for which a matching hotspot volume from the multi-class hotspot map 1718 is identified is assigned the label of the matching hotspot volume: lymph, bone, or prostate. In this way, hotspots are accurately segmented via the single-class hotspot segmentation module 1712 and then labeled using the results of the multi-class hotspot segmentation module 1714.
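One way to realize this overlap-based label transfer is to look up, for each single-class hotspot, the labels of the multi-class voxels it overlaps. A minimal 2D sketch under simplifying assumptions (the single-class map already carries per-hotspot ids; a majority vote replaces the per-voxel tie-breaking rules; all names are our own):

```python
import numpy as np

def transfer_labels(single_mask, multi_labels):
    """For each hotspot id in `single_mask` (positive integers), assign the
    most frequent positive class label of the overlapping voxels in
    `multi_labels`; 0 means no matching multi-class hotspot was found."""
    out = {}
    for hotspot_id in np.unique(single_mask):
        if hotspot_id == 0:   # background
            continue
        overlap = multi_labels[single_mask == hotspot_id]
        overlap = overlap[overlap > 0]
        out[int(hotspot_id)] = int(np.bincount(overlap).argmax()) if overlap.size else 0
    return out

single = np.zeros((4, 4), dtype=int)
single[0:2, 0:2] = 1                  # hotspot 1 from the single-class map
multi = np.zeros((4, 4), dtype=int)
multi[0:2, 0:2] = 2                   # same location labelled class 2 (e.g., bone)
labels = transfer_labels(single, multi)
```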

Referring to FIG. 17B, in certain cases, no matching hotspot volume from the multi-class hotspot map 1718 is found for a particular hotspot volume of the single-class hotspot map 1716. Such hotspot volumes are labeled based on a comparison with a 3D segmentation map 1738, which may differ from segmentation map 1708 and which identifies 3D volumes corresponding to lymph and bone regions.

In certain embodiments, the single-class hotspot segmentation module 1712 may not segment hotspots in the prostate region, such that the single-class hotspot map does not include any hotspots in the prostate region. Hotspot volumes from the multi-class hotspot map 1718 that are labeled as prostate hotspots may then be used for inclusion in the merged hotspot map 1724. In certain embodiments, the single-class hotspot segmentation module 1712 may segment some hotspots in the prostate region, but additional hotspots (e.g., not identified in the single-class hotspot map 1716) may be segmented by the multi-class hotspot segmentation module 1714 and identified by it as prostate hotspots. Such additional hotspot volumes present in the multi-class hotspot map 1718 may be included in the merged hotspot map 1724.

Accordingly, in certain embodiments, information from the CT image 1704, the PET image 1702, the 3D organ segmentation map 1738, the single-class hotspot map 1716, and the multi-class hotspot map 1718 is used in the hotspot merging step 1722 to produce a merged 3D hotspot map of segmented and classified hotspot volumes 1724.

在一種實例性合併方法中,當來自多類別熱點圖及單類別熱點圖之熱點體積之任兩個體素對應於/表示相同實體位置時,判定重疊(例如,在兩個熱點體積之間)。若單類別熱點圖之特定熱點體積僅與多類別熱點圖之一個熱點體積重疊(例如,僅識別來自多類別熱點圖之一個匹配熱點體積),則根據多類別熱點圖之重疊熱點體積經識別所屬類別來標記單類別熱點圖之該特定熱點體積。若特定熱點體積與多類別熱點圖之兩個或更多個熱點體積(各經識別為屬於不同熱點類別)重疊,則單類別熱點體積之各體素被指派與來自多類別熱點圖之經重疊熱點體積中之最接近體素相同的類別。若單類別熱點圖之特定熱點體積不與多類別熱點圖之任何熱點體積重疊,則基於與識別軟組織區域(例如,器官)及/或骨之3D分割圖的比較對該特定熱點體積指派熱點類別。例如,在一些實施例中,若以下陳述之任一者為真,則特定熱點體積可經標記為屬於骨類別: (i)   若熱點體積之超過20%與肋骨分割重疊; (ii)  若熱點體積不與器官分割中之任何標記重疊,且熱點遮罩中之CT之平均值大於100亨氏單位; (iii) 若熱點體積之SUVmax 之位置與器官分割中之骨標記重疊;或 (iv)  若熱點體積之超過50%與器官分割中之骨標記重疊。In one example merging method, overlap (eg, between two hotspot volumes) is determined when any two voxels from the hotspot volumes from the multi-class heatmap and the single-class heatmap correspond to/represent the same physical location . If a particular hotspot volume of a single-class heatmap overlaps with only one hotspot volume of a multi-class heatmap (eg, only one matching hotspot volume from a multiclass heatmap is identified), then the overlapping hotspot volume from the multiclass heatmap is identified as belonging to category to label that specific hotspot volume of the single-category heatmap. If a particular hotspot volume overlaps with two or more hotspot volumes of the multiclass heatmap (each identified as belonging to a different hotspot class), then each voxel of the single-class hotspot volume is assigned to overlap with the one from the multiclass heatmap The closest voxel-identical class in the hotspot volume. If a specific hotspot volume of a single-class heatmap does not overlap any hotspot volume of a multi-class heatmap, then assign a hotspot class to that specific hotspot volume based on comparison with a 3D segmentation map identifying soft tissue regions (eg, organs) and/or bone . 
For example, in some embodiments, a particular hotspot volume may be labeled as belonging to the bone class if any of the following statements is true: (i) more than 20% of the hotspot volume overlaps with the rib segmentation; (ii) the hotspot volume does not overlap with any label in the organ segmentation and the mean CT value within the hotspot mask is greater than 100 Hounsfield units; (iii) the location of the SUVmax of the hotspot volume overlaps with a bone label in the organ segmentation; or (iv) more than 50% of the hotspot volume overlaps with bone labels in the organ segmentation.

In some embodiments, if 50% or more of the hotspot volume does not overlap with bone labels in the organ segmentation, the particular hotspot volume may be identified as lymph.

In some embodiments, once all hotspot volumes of the single-class hotspot map have been classified as lymph, bone, or prostate, any remaining prostate hotspots from the multi-class model are superimposed onto the single-class hotspot map and included in the merged hotspot map.
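A minimal sketch of the fall-back classification rules above for a hotspot that matched no multi-class hotspot. The function name, argument names, and the pre-computed overlap-fraction representation are our assumptions for illustration; they are not from the patent.

```python
# Sketch of the bone-vs-lymph fall-back rules described above. Overlap
# fractions and summary statistics are assumed to be precomputed from the
# hotspot mask, the organ segmentation map, and the CT image.

def classify_hotspot(frac_rib_overlap, frac_bone_overlap,
                     overlaps_any_organ_label, mean_ct_hu,
                     suvmax_on_bone_label):
    """Assign 'bone' or 'lymph' to a hotspot volume that overlapped no
    hotspot volume of the multi-class hotspot map."""
    is_bone = (
        frac_rib_overlap > 0.20                                  # rule (i)
        or (not overlaps_any_organ_label and mean_ct_hu > 100)   # rule (ii)
        or suvmax_on_bone_label                                  # rule (iii)
        or frac_bone_overlap > 0.50                              # rule (iv)
    )
    return "bone" if is_bone else "lymph"

# A hotspot with 30% rib overlap is labeled bone; one with no bone
# evidence falls through to lymph.
print(classify_hotspot(0.30, 0.0, True, 50, False))   # bone
print(classify_hotspot(0.0, 0.1, True, 50, False))    # lymph
```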

FIG. 17C shows an example computer process 1750 for implementing a hotspot segmentation and classification method according to the embodiments described with reference to FIG. 17A and FIG. 17B.

F. Analytical Segmentation via Adaptive Thresholding Methods

In certain embodiments, e.g., as described herein in Section C.iv, the image analysis techniques described herein utilize an analytical segmentation step to refine hotspot segmentations determined via a machine learning module as described herein. For example, in certain embodiments, a 3D hotspot map generated via machine learning methods as described herein is used as an initial input to an analytical segmentation model that refines the segmentation and/or performs an entirely new segmentation.

In certain embodiments, the analytical segmentation model utilizes a thresholding algorithm, whereby hotspots are segmented by comparing intensities of voxels in anatomical images (e.g., CT images, MR images) and/or functional images (e.g., SPECT images, PET images) (e.g., composite anatomical and functional images, such as PET/CT or SPECT/CT images) with one or more threshold values.

Referring to FIG. 18A, certain embodiments use an adaptive thresholding method whereby, for a particular hotspot, intensities within an initial hotspot volume determined for that particular hotspot (e.g., via machine learning methods as described herein) are compared with one or more reference values to determine a threshold value for the particular hotspot. That threshold value for the particular hotspot is then used by the analytical segmentation model to segment the particular hotspot and determine a final hotspot volume.

FIG. 18A shows an example process 1800 for segmenting hotspots via an adaptive thresholding method. Process 1800 utilizes an initial 3D hotspot map 1802 identifying one or more 3D hotspot volumes, a PET image 1804, and a 3D organ segmentation map 1806. The initial 3D hotspot map 1802 may be determined automatically via the various machine learning methods described herein and/or based on user interaction with a GUI. A user may refine the set of automatically determined hotspot volumes, e.g., by selecting a subset for inclusion in the 3D hotspot map 1802. Additionally or alternatively, a user may manually delineate 3D hotspot volumes, e.g., by drawing boundaries on an image with the GUI.

In certain embodiments, the 3D organ segmentation map identifies one or more reference volumes corresponding to particular reference tissue regions, such as an aorta portion and/or a liver. As described herein, e.g., in Section B.iii, intensities of voxels within a particular reference volume can be used to compute an associated reference value against which intensities of identified and segmented hotspots can be compared (e.g., serving as a "yardstick"). For example, a liver volume may be used to compute a liver reference value, and an aorta portion may be used to compute an aorta, or blood pool, reference value. In process 1800, intensities of the aorta portion are used to compute 1808 a blood pool reference value 1810. The blood pool reference value 1810 is used, together with the initial 3D hotspot map 1802 and the PET image 1804, to determine threshold values for performing thresholding-based analytical segmentation of the hotspots in the initial 3D hotspot map 1802.

In particular, for a particular hotspot volume identified in the initial 3D hotspot map 1802 (which identifies a particular hotspot representing a physical lesion), intensities of PET image 1804 voxels located within that particular hotspot volume are used to determine a hotspot intensity for the particular hotspot. In certain embodiments, the hotspot intensity is a maximum of the intensities of the voxels located within the particular hotspot volume. For example, for PET image intensities representing SUVs, a maximum SUV (SUVmax) within the particular hotspot volume is determined. Other measures, such as a peak (e.g., SUVpeak), mean, median, or interquartile-range mean (IQRmean), may also be used.
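The candidate intensity measures above can be sketched over a masked PET array as follows. The function name and the tie/interpolation conventions (e.g., linear-interpolation percentiles for the interquartile-range mean) are illustrative assumptions, not specified by the patent.

```python
import numpy as np

# Sketch: summary intensity measures for voxels inside a hotspot mask.
# 'pet' holds SUV-scaled intensities; 'mask' is a boolean hotspot volume.

def hotspot_intensity(pet, mask, measure="max"):
    vals = pet[mask]
    if measure == "max":        # SUVmax
        return vals.max()
    if measure == "mean":
        return vals.mean()
    if measure == "median":
        return np.median(vals)
    if measure == "iqr_mean":   # mean of values within the interquartile range
        q1, q3 = np.percentile(vals, [25, 75])
        return vals[(vals >= q1) & (vals <= q3)].mean()
    raise ValueError(measure)

pet = np.array([[1.0, 4.0], [2.0, 8.0]])
mask = np.array([[True, True], [True, False]])
print(hotspot_intensity(pet, mask, "max"))  # 4.0
```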

In certain embodiments, a hotspot-specific threshold value for the particular hotspot is determined based on a comparison of the hotspot intensity with the blood pool reference value. In certain embodiments, the comparison between the hotspot intensity and the blood pool reference value is used to select one of a plurality of (e.g., pre-defined) thresholding functions, and the selected thresholding function is employed to compute the hotspot-specific threshold value for the particular hotspot. In certain embodiments, a thresholding function computes the hotspot-specific threshold value as a function of the hotspot intensity (e.g., maximum intensity) of the particular hotspot and/or the blood pool reference value. For example, a thresholding function may compute the hotspot-specific threshold value as a product of (i) a scaling factor and (ii) the hotspot intensity (or other intensity measure) of the particular hotspot and/or the blood pool reference. In certain embodiments, the scaling factor is a constant. In certain embodiments, the scaling factor is an interpolated value determined as a function of the intensity measure of the particular hotspot. In certain embodiments and/or for particular thresholding functions, the scaling factor is a constant used to determine a plateau level corresponding to a maximum threshold value, e.g., as described in further detail in Section G herein.

For example, pseudocode for an example approach for selecting between (e.g., via conditional logic) and evaluating various thresholding functions is shown below:

If 90% of SUVmax ≤ [blood pool reference], then threshold = 90% of SUVmax. Else, if 50% of SUVmax ≥ 2 x [blood pool reference], then threshold = 2 x [blood pool reference]. Else, use linear interpolation to determine a percentage of SUVmax, with the interpolation starting at 90% at [[blood pool reference] / 0.9] and ending at 50% at [2 x [blood pool reference] / 0.5]. If [interpolated percentage] x SUVmax is below 2 x [blood pool reference], then threshold = [interpolated percentage] x SUVmax. Else, threshold = 2 x [blood pool reference].
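The pseudocode above can be transcribed into a runnable sketch as follows. Variable names are ours; the branching and interpolation follow the pseudocode directly.

```python
# Runnable transcription of the adaptive-threshold pseudocode above.
# 'suv_max' is the hotspot's maximum SUV; 'blood_pool' is the blood pool
# reference value (an SUV).

def adaptive_threshold(suv_max, blood_pool):
    if 0.9 * suv_max <= blood_pool:
        return 0.9 * suv_max                      # low-intensity hotspot
    if 0.5 * suv_max >= 2.0 * blood_pool:
        return 2.0 * blood_pool                   # high-intensity: plateau
    # Linearly interpolate the percentage of SUVmax, from 90% at
    # SUVmax = [blood pool] / 0.9 down to 50% at SUVmax = 2 x [blood pool] / 0.5.
    lo, hi = blood_pool / 0.9, 2.0 * blood_pool / 0.5
    pct = 0.9 + (suv_max - lo) * (0.5 - 0.9) / (hi - lo)
    return min(pct * suv_max, 2.0 * blood_pool)

# With a blood-pool reference SUV of 1.5 (as in FIGs. 18B-18C):
print(adaptive_threshold(1.0, 1.5))   # low range: 90% of SUVmax = 0.9
print(adaptive_threshold(10.0, 1.5))  # high range: capped at 2 x 1.5 = 3.0
```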

FIG. 18B and FIG. 18C illustrate the particular example adaptive thresholding method implemented by the pseudocode above. FIG. 18B plots the threshold value 1832 as a function of the hotspot intensity (in this example, SUVmax) of a particular hotspot. FIG. 18C plots the hotspot-specific threshold value, as a fraction of a particular hotspot's SUVmax, as a function of that hotspot's SUVmax. Dashed lines in each graph indicate particular values relative to the blood pool reference (which has an SUV of 1.5 in the example plots of FIG. 18B and FIG. 18C), and also indicate 90% and 50% of SUVmax in FIG. 18C.

Referring to FIG. 18D through FIG. 18F, adaptive thresholding methods as described herein address challenges and drawbacks associated with previous thresholding techniques that utilize fixed or relative threshold values. In particular, while in certain embodiments thresholding-based lesion segmentation based on a maximum standardized uptake value (SUVmax) provides a transparent and reproducible way to segment hotspot volumes for estimating parameters such as uptake volume and SUVmean, conventional fixed and relative threshold values perform poorly over the full dynamic range of lesion SUVmax. Fixed-threshold approaches use a single (e.g., user-defined) SUV value as the threshold for segmenting hotspots within an image. For example, a user might set a fixed threshold level at a value of 4.5. Relative-threshold approaches use a particular, constant fraction or percentage, and segment each hotspot using a local threshold set at that particular fraction or percentage of the hotspot's maximum SUV. For example, a user might set the relative threshold at 40%, such that each hotspot is segmented using a threshold computed as 40% of its maximum SUV value. Both of these approaches (conventional fixed and relative thresholds) have drawbacks. For example, it is difficult to define an appropriate fixed threshold value suitable for all patients. Conventional relative-threshold approaches are also problematic, since defining the threshold as a fixed fraction of a hotspot's maximum or peak intensity results in hotspots with lower overall intensities being segmented using lower thresholds. Accordingly, segmentation using a low threshold value for a low-intensity hotspot that may represent a smaller lesion with relatively low uptake can result in a larger identified hotspot volume than for a higher-intensity hotspot that actually represents a physically larger lesion.

For example, FIG. 18D and FIG. 18E illustrate segmentation of two hotspots using a threshold determined as 50% of the maximum hotspot intensity (e.g., 50% of SUVmax). Each graph plots intensity, in the vertical direction, as a function of position along a line through the hotspot. FIG. 18D shows a graph 1840 illustrating the variation in intensity of a high-intensity hotspot representing a large physical lesion 1848. The hotspot intensity 1842 peaks near the hotspot center, and the hotspot threshold 1844 is set at 50% of the maximum of the hotspot intensity 1842. Segmenting the hotspot using the hotspot threshold 1844 produces a segmented volume that approximately matches the size of the physical lesion (as shown, e.g., by comparing linear dimension 1846 with the illustrated lesion 1848). FIG. 18E shows a graph 1850 illustrating the variation in intensity of a low-intensity hotspot representing a small physical lesion 1858. The hotspot intensity 1852 also peaks near the hotspot center, and the hotspot threshold 1854 is likewise set at 50% of the maximum hotspot intensity 1852. However, since the hotspot intensity 1852 has a less sharp and lower peak than the hotspot intensity 1842 of the high-intensity hotspot, setting the threshold relative to the maximum hotspot intensity results in a much lower absolute threshold value. Accordingly, thresholding-based segmentation produces a larger hotspot volume than that of the higher-intensity hotspot, even though the physical lesion represented is smaller, as shown, e.g., by comparing linear dimension 1856 with the illustrated lesion 1858. Relative threshold values can thus produce larger apparent hotspot volumes for smaller physical lesions. This is especially problematic for assessing treatment response, since lower-intensity lesions will have lower threshold values and, accordingly, lesions responding to treatment may appear to increase in volume.
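The effect described above can be reproduced numerically with synthetic 1D intensity profiles. All profile shapes and numbers below are illustrative assumptions, not data from the study.

```python
import numpy as np

# 1D illustration of the relative-threshold problem: with a 50%-of-max
# threshold, a broad low-intensity hotspot can segment WIDER than a
# sharp high-intensity one. Profiles are synthetic Gaussians on a
# uniform background.

x = np.linspace(-30, 30, 601)                       # position in mm
background = 1.0
high = background + 9.0 * np.exp(-(x / 4.0) ** 2)   # large, avid lesion
low = background + 1.5 * np.exp(-(x / 6.0) ** 2)    # small, faint lesion

def width_above(profile, threshold):
    step = x[1] - x[0]
    return np.sum(profile > threshold) * step       # segmented extent in mm

w_high = width_above(high, 0.5 * high.max())  # threshold 5.0, well above background
w_low = width_above(low, 0.5 * low.max())     # threshold 1.25, near background
print(w_high, w_low)   # the faint lesion segments wider than the avid one
```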

In certain embodiments, adaptive thresholding as described herein addresses these drawbacks by utilizing an adaptive threshold value computed as a percentage of hotspot intensity, where the percentage (i) decreases with increasing hotspot intensity (e.g., SUVmax) and (ii) depends on both the hotspot intensity (e.g., SUVmax) and overall physiological uptake (e.g., as measured by a reference value, such as a blood pool reference value). Accordingly, unlike conventional relative thresholding approaches, the particular fraction/percentage of hotspot intensity used in the adaptive thresholding methods described herein varies, being itself a function of hotspot intensity and, in certain embodiments, also accounting for physiological uptake. For example, as shown in illustrative plot 1860 of FIG. 18F, using a variable, adaptive thresholding method as described herein, the threshold value 1864 is set at a higher percentage of the peak hotspot intensity 1852 (e.g., 90%, as shown in FIG. 18F). As illustrated in FIG. 18F, doing so allows thresholding-based segmentation to identify a hotspot volume that more accurately reflects the true size of the lesion 1866 that the hotspot represents.

In certain embodiments, thresholding is facilitated by first dividing heterogeneous lesions into homogeneous subcomponents using a watershed algorithm, and finally excluding uptake from nearby intensity peaks. As described herein, adaptive thresholding can be applied to manually pre-segmented lesions as well as to automated detections, e.g., by deep neural networks implemented via machine learning modules as described herein, to improve reproducibility and robustness and to increase interpretability.

G. Example Study Comparing Example Thresholding Functions and Scaling Factors for PyL-PET/CT Imaging

This example describes a study performed to evaluate various parameters used in adaptive thresholding methods as described herein, e.g., in Section F, and to compare them with fixed and relative threshold values, using manually annotated lesions as a reference.

The study of this example used 18F-DCFPyL PET/CT scans of 242 patients, in which hotspots corresponding to bone, lymph, and prostate lesions were manually segmented by an experienced nuclear medicine reader. In total, 792 hotspot volumes were annotated, involving 167 patients. Two studies were performed to evaluate thresholding algorithms. In the first study, manually annotated hotspots were refined with different thresholding algorithms, and the degree to which size order was preserved, i.e., the extent to which initially smaller hotspot volumes remained smaller than initially larger hotspot volumes after refinement, was estimated. In the second study, refinement by thresholding suspicious hotspots automatically detected via machine learning methods according to various embodiments described herein was performed and compared with the manual annotations.

PET image intensities in this example were scaled to represent standardized uptake values (SUVs), and are referred to in this section as uptake, or uptake intensity. The different thresholding algorithms compared were as follows: a fixed threshold at SUV = 2.5, a relative threshold at 50% of SUVmax, and variants of an adaptive threshold. The adaptive thresholds were defined by a decreasing percentage of SUVmax (with and without a maximum threshold level). Plateau levels were set above the normal uptake intensity in regions corresponding to healthy tissue. Two supporting investigations were performed to select appropriate plateau levels: one study of normal uptake intensities in the aorta, and one study of normal uptake intensities in the prostate. Among other things, the thresholding approaches were evaluated based on the extent to which they preserved size order relative to the annotations performed by the nuclear medicine reader. For example, if a nuclear medicine reader manually segments hotspots, and the manually segmented hotspot volumes are ordered by size, preservation of size order refers to the extent to which hotspot volumes produced by segmenting the same hotspots using an automated thresholding method (e.g., involving no user interaction) would be ordered by size in the same way. According to a weighted rank correlation measure, two embodiments of the adaptive thresholding approach achieved the best performance in terms of size-order preservation. Both of these adaptive thresholding approaches utilized thresholds starting at 90% of SUVmax for low-intensity lesions and reaching a plateau at twice the blood pool reference value (e.g., 2 x [aorta reference uptake]). The first approach (referred to as "P9050-sat") reached the plateau when the plateau level was at 50% of SUVmax; the other (referred to as "P9040-sat") reached the plateau at 40% of SUVmax.

It was also found that refining automatically detected and segmented hotspots by thresholding altered the precision-recall trade-off. While the raw, automatically detected and segmented hotspots had high recall and low precision, refining the segmentation using the P9050-sat thresholding approach produced a more balanced performance in terms of precision and recall.

The improved preservation of relative size indicates that assessment of treatment response would be improved/more accurate, since the algorithms better capture the size order of the nuclear medicine reader's annotations. The trade-off between over- and under-segmentation can be decoupled from the detection step by introducing a separate thresholding approach, i.e., using an analytical, adaptive segmentation method as described herein in addition to the automated hotspot detection and segmentation performed using machine learning methods as described herein.

The example supporting studies described herein were used to determine scaling factors for computing plateau values corresponding to maximum threshold values. For example, as described herein, in this example these scaling factors were determined based on intensities in normal, healthy tissue in various reference regions. For example, multiplying a blood pool reference based on intensities in an aorta region by a factor of 1.6 produces a level that is generally above 95% of the intensity values in the aorta, but below typical normal uptake in the prostate. Accordingly, in certain example thresholding functions, a higher value is used. In particular, a factor of 2 was determined so as to reach a level generally also above most intensities in normal prostate tissue. This value was determined manually based on a survey of histograms and of image projections in the sagittal, coronal, and transverse planes of PET image voxels within the prostate volume, excluding any portions corresponding to tumor uptake. Example image tiles and corresponding histograms showing the scaling factors are shown in FIG. 18G.

i. Introduction

Defining lesion volumes in PET/CT can be a subjective procedure, since lesions appear as hotspots in PET and often do not have well-defined boundaries. In some cases, lesion volumes can be segmented based on their anatomical extent in the CT image; however, this approach ignores specific information about tracer uptake, since the full uptake would not be covered. Moreover, certain lesions are visible in the functional, PET image but cannot be seen in the CT image. This section describes an example study designed to accurately identify hotspot volumes reflecting physiological uptake volumes (i.e., volumes in which uptake is above background). To perform segmentation and identify hotspot volumes in this manner, threshold values are selected so as to balance the risk of including background against the risk of not segmenting a hotspot volume large enough to reflect the full uptake volume.

This risk trade-off is typically addressed by selecting, as the threshold value, 50% or 40% of the SUVmax value determined for a prospective hotspot volume. The rationale for this approach is that, for high-uptake lesions (e.g., corresponding to high-intensity hotspots in a PET image), the threshold value can be set higher than for low-uptake lesions (e.g., corresponding to lower-intensity hotspots) while maintaining the same risk level of not segmenting a hotspot volume that reflects the full uptake volume. However, for hotspots with low signal-to-noise ratios, using a threshold of 50% of SUVmax will result in background being included in the segmentation. To avoid this, a decreasing percentage of SUVmax can be used, e.g., starting at 90% or 75% for low-intensity hotspots. Moreover, once the threshold value is sufficiently above the background level, the risk of including background is low; this occurs for threshold values well below 50% of the SUVmax of high-uptake lesions. Accordingly, threshold values can be capped at a plateau level above typical background intensities.

One reference for an uptake intensity level well above typical background uptake intensities is mean liver uptake. Depending on the actual background uptake intensity, other reference levels may be needed. Background uptake intensity differs among bone, lymph, and prostate, with bone having the lowest background uptake intensity and the prostate having the highest. Using the same thresholding approach regardless of tissue is advantageous/preferable, since it allows the same segmentation approach to be used regardless of the location and/or classification of a particular lesion. Accordingly, the study of this example evaluated threshold values using the same thresholding parameters for lesions in all three tissue types. The adaptive thresholding variants evaluated in this example included one variant reaching a plateau at liver uptake intensity, one variant reaching a plateau at a level estimated to be above aorta uptake, and several variants reaching a plateau at a level estimated to be above prostate uptake intensity.

Certain prior approaches have determined the level as a function of mediastinal blood pool uptake intensity, computed as the mean of the blood pool uptake intensity plus twice its standard deviation (e.g., [blood pool uptake mean] + 2 x SD). However, such approaches, relying on standard deviation estimates, can lead to undesirable error and noise sensitivity. In particular, estimating a standard deviation is far less robust than estimating a mean, and can be affected by noise, small segmentation errors, or PET/CT misalignment. A more robust way to estimate a level above blood uptake intensity uses a fixed factor multiplied by a mean, or reference, aorta value. To find a suitable factor, the distribution of uptake intensities in the aorta was studied and is described in this example. Normal prostate uptake intensities were also studied, in order to determine an appropriate factor that can be applied to a reference aorta uptake to compute a level generally above normal prostate intensities.

ii. Methods

Thresholding of manual annotations

This study used a subset of the data containing only lesions for which at least one other lesion of the same type was present in the same patient. This resulted in a dataset of 684 manually segmented lesion uptake volumes (278 in bone, 357 in lymph nodes, 49 in prostate) involving 92 patients. Automated refinement by thresholding was performed, and the output was compared with the original volumes. Performance was measured as a weighted average of the rank correlation, within each patient and tissue type, between the refined volumes and the original volumes, with weights given by the number of segmented hotspot volumes in that patient. This performance measure indicates whether the relative sizes of the segmented hotspot volumes are preserved, but ignores absolute size (which is subjectively defined, since uptake volumes do not have well-defined boundaries). However, for a particular patient and tissue type, the same nuclear medicine reader performed all annotations, and it can therefore be assumed that the annotations were performed in a systematic manner, in which smaller lesion annotations indeed reflect smaller uptake volumes than larger lesion annotations.

Thresholding of automatically detected lesions
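The weighted rank-correlation metric described above can be sketched as follows. The tie handling, the rank-correlation variant (Spearman), and all names are our assumptions; the study does not specify these implementation details.

```python
import numpy as np

# Sketch of the performance metric: a weighted average, over patients,
# of the rank correlation between refined and original hotspot volumes,
# weighted by the number of segmented hotspot volumes per patient.

def rankdata(x):
    order = np.argsort(x)
    ranks = np.empty(len(x))
    ranks[order] = np.arange(1, len(x) + 1)   # no tie handling (assumption)
    return ranks

def spearman(a, b):
    ra, rb = rankdata(a), rankdata(b)
    return np.corrcoef(ra, rb)[0, 1]

def weighted_rank_correlation(per_patient_volumes):
    """per_patient_volumes: list of (original_volumes, refined_volumes)."""
    corrs = [spearman(o, r) for o, r in per_patient_volumes]
    weights = [len(o) for o, _ in per_patient_volumes]
    return float(np.average(corrs, weights=weights))

# Perfectly preserved size order in both patients gives a score of 1.
data = [(np.array([1.0, 2.0, 3.0]), np.array([0.5, 1.5, 2.5])),
        (np.array([4.0, 1.0]), np.array([3.0, 0.8]))]
print(round(weighted_rank_correlation(data), 6))  # 1.0
```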

This study used a subset of the data that had not been used for training the machine learning modules used for hotspot detection and segmentation, resulting in a dataset of 285 manually segmented lesion uptake volumes (104 bone, 129 lymph, 52 prostate) involving 67 patients. Precision and recall were measured between refined (and unrefined) automatically detected volumes matching the manually segmented lesions (sensitivity was 90% to 91% for bone, 92% to 93% for lymph, and 94% to 98% for prostate). These performance measures quantify the similarity between the automatically detected, and possibly refined, hotspots and the manually annotated hotspots.

Blood uptake

For 242 patients, the thoracic portion of the aorta was segmented in the CT component using a deep learning pipeline. The segmented aorta volume was projected into PET space and eroded by 3 mm, in order to minimize the risk of the aorta volume containing regions outside the aorta or in the vessel wall, while preserving as much of the uptake inside the aorta as possible. For the remaining uptake intensities, the quotient q = (aortaMEAN + 2 x aortaSD) / aortaMEAN was computed in each patient.

Prostate uptake
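The per-patient quotient above can be sketched directly from the eroded aorta-volume intensities. The array below is a made-up illustration; the population-vs-sample standard deviation choice is our assumption.

```python
import numpy as np

# Sketch of the per-patient quotient q = (aortaMEAN + 2 x aortaSD) / aortaMEAN,
# computed over SUV intensities of voxels inside the eroded aorta volume.

def blood_pool_quotient(aorta_suv):
    m, sd = aorta_suv.mean(), aorta_suv.std()   # population SD (assumption)
    return (m + 2.0 * sd) / m

vals = np.array([1.4, 1.5, 1.6, 1.5])   # illustrative aorta voxel SUVs
print(round(blood_pool_quotient(vals), 3))  # 1.094
```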

Normal uptake in the prostate was studied in 29 patients. The study was performed using segmented prostate volumes determined via a machine learning module. Uptake intensities within manually annotated prostate lesions were excluded. The remaining uptake intensities, normalized relative to the aorta reference uptake intensity, were visualized by histograms and by maximum intensity projections in the axial, sagittal, and coronal planes; see the example in Figure 18G. The purpose of the maximum intensity projections was to find explanations for outlier intensities in the histograms, in particular intensities above the maximum uptake intensity in healthy tissue, which were related to bladder uptake.

Thresholding methods

Two baseline methods (a fixed threshold at SUV = 2.5 and a relative threshold at 50% of SUVmax) were compared with six variants of adaptive thresholding. The adaptive thresholds were defined using three thresholding functions, each associated with a particular range of SUVmax values. Specifically:
(1) Low-range thresholding function: the first thresholding function computes thresholds for SUVmax values in the low range. It computes the threshold as a fixed (high) percentage of SUVmax.
(2) Intermediate-range thresholding function: the second thresholding function computes thresholds for SUVmax values in the intermediate range. It computes the threshold as a linearly decreasing percentage of SUVmax, capped at a maximum threshold equal to the threshold at the upper end of the range.
(3) High-range thresholding function: the third thresholding function computes thresholds for SUVmax values in the high range. It sets the threshold either to a fixed maximum threshold (the saturation threshold) or to a fixed (low) percentage of SUVmax (the non-saturation threshold).
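A minimal sketch of this three-range scheme as a single piecewise function (the default parameter values correspond to the P9050-sat variant listed in Table 2; the function and argument names are illustrative, not from the source):

```python
def adaptive_threshold(suv_max, aorta, low_pct=0.90, high_pct=0.50, cap=2.0):
    """Piecewise adaptive threshold (P9050-sat-style parameters):
    low range  -> fixed high percentage of SUVmax,
    mid range  -> linearly interpolated percentage of SUVmax,
    high range -> saturated at cap * aorta."""
    suv_low = aorta / low_pct            # where low_pct * SUVmax == aorta
    suv_high = cap * aorta / high_pct    # where high_pct * SUVmax == cap * aorta
    if suv_max <= suv_low:
        return low_pct * suv_max
    if suv_max >= suv_high:
        return cap * aorta               # saturation threshold
    frac = (suv_max - suv_low) / (suv_high - suv_low)
    return (low_pct + (high_pct - low_pct) * frac) * suv_max
```

The function is continuous at both range boundaries: at the low/intermediate boundary it returns the aorta intensity, and at the intermediate/high boundary it returns cap x aorta.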

The exact parameters and ranges of the three thresholding functions above vary between the adaptive thresholding algorithms and are listed in Table 2 below.

Table 2: Ranges and parameters of the thresholding functions for the adaptive thresholding algorithms. (*) Interpolated percentage of SUVmax, capped at the threshold at the right end of the interval.

| Adaptive threshold | Low range | Intermediate range* | High range |
|---|---|---|---|
| P9050-sat | 90% SUVmax < aorta => 90% of SUVmax | 90% SUVmax = aorta, to 50% SUVmax = 2 x aorta => interp. perc. of SUVmax* | 50% SUVmax > 2 x aorta => 2 x aorta |
| P9040-sat | 90% SUVmax < aorta => 90% of SUVmax | 90% SUVmax = aorta, to 40% SUVmax = 2 x aorta => interp. perc. of SUVmax* | 40% SUVmax > 2 x aorta => 2 x aorta |
| P7540-sat | 75% SUVmax < aorta => 75% of SUVmax | 75% SUVmax = aorta, to 40% SUVmax = 2 x aorta => interp. perc. of SUVmax* | 40% SUVmax > 2 x aorta => 2 x aorta |
| P9050-non-sat | 90% SUVmax < aorta => 90% of SUVmax | 90% SUVmax = aorta, to 50% SUVmax = 2 x aorta => interp. perc. of SUVmax* | 50% SUVmax > 2 x aorta => 50% of SUVmax |
| A9050-sat | 90% SUVmax < aorta => 90% of SUVmax | 90% SUVmax = aorta, to 50% SUVmax = 1.6 x aorta => interp. perc. of SUVmax* | 50% SUVmax > 1.6 x aorta => 1.6 x aorta |
| L9050s-sat | 90% SUVmax < aorta => 90% of SUVmax | 90% SUVmax = aorta, to 50% SUVmax = liver => interp. perc. of SUVmax* | 50% SUVmax > liver => liver |

The interpolated percentage used in the intermediate SUVmax range is computed for P9050-sat as follows:

percentage = 90% + (50% - 90%) x (SUVmax - SUVlow) / (SUVhigh - SUVlow),

where SUVhigh is the value for which 50% of SUVhigh equals 2 x [aorta uptake intensity], and SUVlow is the value for which 90% of SUVlow equals the aorta uptake intensity.

The interpolated percentages used in the other adaptive thresholding algorithms are computed analogously. The threshold in the intermediate range is then:

threshold = percentage x SUVmax,

and similarly for the other adaptive thresholding algorithms.

iii. Results

Thresholding manually annotated lesions

The highest weighted rank correlation (0.81) was obtained with the P9050-sat and P9040-sat methods, with P7540-sat, A9050-sat, and L9050s-sat also giving high values. The relative 50% of SUVmax (0.37) and P9050-non-sat (0.61) gave the lowest weighted rank correlations. The fixed threshold at SUV = 2.5 gave a rank correlation (0.74) below that of most of the adaptive thresholding methods. The weighted rank correlation results for each of the thresholding methods are summarized in Table 3 below.

Table 3. Weighted average of the rank correlation for the evaluated thresholding methods.

| Thresholding strategy | Weighted average of rank correlation |
|---|---|
| Fixed, SUV = 2.5 | 0.74 |
| Relative 50% of SUVmax | 0.37 |
| P9050-sat | 0.81 |
| P9040-sat | 0.81 |
| P7540-sat | 0.79 |
| P9050-non-sat | 0.61 |
| A9050-sat | 0.80 |
| L9050s-sat | 0.78 |

Thresholding automatically detected lesions

Without refinement, automatic hotspot detection had low precision (0.31 to 0.47) but high recall (0.83 to 0.92), indicating over-segmentation. Refinement using the relative 50%-of-SUVmax thresholding algorithm improved precision (0.70 to 0.77) but reduced recall to roughly 50% (0.44 to 0.58). Refinement using P9050-sat improved precision (0.51 to 0.84) with a smaller drop in recall (0.61 to 0.89), indicating a balance with less over-segmentation but more under-segmentation. P9040-sat performed similarly to P9050-sat in these respects, while L9050-sat had the highest precision (0.85 to 0.95) but the lowest recall (0.31 to 0.56). Tables 4a through 4e show the complete precision and recall results.

Table 4a. Precision and recall values without analytical segmentation refinement.

| No refinement | Precision | Recall |
|---|---|---|
| Bone hotspots | 0.38 | 0.92 |
| Lymph node hotspots | 0.47 | 0.83 |
| Prostate hotspots | 0.31 | 0.93 |

Table 4b. Precision and recall values with refinement via the relative 50%-of-SUVmax thresholding method.

| Relative 50% of SUVmax | Precision | Recall |
|---|---|---|
| Bone hotspots | 0.74 | 0.58 |
| Lymph node hotspots | 0.77 | 0.44 |
| Prostate hotspots | 0.70 | 0.51 |

Table 4c. Precision and recall values with adaptive segmentation using the P9050-sat implementation.

| P9050-sat | Precision | Recall |
|---|---|---|
| Bone hotspots | 0.84 | 0.61 |
| Lymph node hotspots | 0.70 | 0.67 |
| Prostate hotspots | 0.51 | 0.89 |

Table 4d. Precision and recall values with adaptive segmentation using the P9040-sat implementation.

| P9040-sat | Precision | Recall |
|---|---|---|
| Bone hotspots | 0.84 | 0.59 |
| Lymph node hotspots | 0.71 | 0.66 |
| Prostate hotspots | 0.52 | 0.89 |

Table 4e. Precision and recall values with adaptive segmentation using the L9050-sat implementation.

| L9050-sat | Precision | Recall |
|---|---|---|
| Bone hotspots | 0.95 | 0.31 |
| Lymph node hotspots | 0.91 | 0.39 |
| Prostate hotspots | 0.85 | 0.56 |

Supporting the thresholding methods: blood uptake

For the resulting quotients, qMEAN + 2 x qSD was 1.54, so a factor of 1.6 was judged a good candidate for achieving a threshold level above most blood uptake intensity values. In the example study, only three patients had aortaMEAN + 2 x aortaSD above 1.6 x aortaMEAN. The three outlier patients had q = 1.64, 1.92, and 1.61; the patient with a factor of 1.92 had an erroneous aorta segmentation spilling over into the spleen, and the other two had quotients close to 1.6.

Supporting the thresholding methods: prostate uptake

Based on a manual review of the histograms of normal prostate intensities and of the projections in the axial, sagittal, and coronal planes, a value of 2.0 was taken as the appropriate scaling factor to apply to the aorta reference value to obtain a level above the typical uptake intensity in the prostate.

H. Example: AI-based hotspot segmentation compared with thresholding alone

In this example, hotspot detection and segmentation performed using the AI-based approach described herein, in which machine learning modules segment and classify hotspots, is compared with a conventional approach that uses threshold-based segmentation alone.

Figure 19A shows a conventional hotspot segmentation approach 1900 that does not use machine learning techniques. Instead, hotspot segmentation is performed by the user based on manual delineation of hotspots, followed by intensity-based (e.g., SUV-based) thresholding 1904. The user manually places 1922 a circular marker indicating a region of interest (ROI) 1924 within the image 1920. Once the ROI is placed, a fixed or relative thresholding method can be used to segment hotspots within the manually placed ROI 1926. In particular, the relative thresholding method sets the threshold for a given ROI to a fixed percentage of the maximum SUV within that ROI, and the SUV-based thresholding method is used to segment each user-identified hotspot, refining the boundary initially drawn by the user. Because this conventional approach relies on the user to manually identify and draw hotspot boundaries, it can be time-consuming; moreover, the segmentation results and downstream quantification 1906 (e.g., computation of hotspot metrics) can vary from user to user. Furthermore, as conceptually illustrated in images 1928 and 1930, different threshold values can produce different hotspot segmentations 1929, 1931. In addition, although the SUV threshold level can be tuned to detect early disease, doing so typically produces a large number of false positives that distract from the true positives. Finally, as explained herein, conventional fixed- or relative-SUV thresholding methods suffer from overestimation and/or underestimation of lesion size.

Referring to Figure 19B, an AI-based approach 1950 according to certain embodiments described herein uses one or more machine learning modules to automatically analyze CT 1954 and PET 1952 images (e.g., of a composite PET/CT) to detect, segment, and classify hotspots 1956, rather than relying on manual, user-based selection of ROIs containing hotspots combined with SUV-based thresholding. As described in further detail herein, machine-learning-based hotspot segmentation and classification can be used to create an initial 3D hotspot map, which can then be used as input to an analytical segmentation method 1958, such as the adaptive thresholding techniques described herein, for example in Sections F and G. Among other things, using machine learning methods reduces the subjectivity and the time required for a user (e.g., a medical practitioner such as a radiologist) to review images. Moreover, AI models can perform complex tasks and can identify early-stage lesions as well as high-burden metastatic disease while keeping the false-positive rate low. Hotspot segmentation improved in this manner increases the accuracy of downstream quantification 1960 of metrics that can be used to assess disease severity, prognosis, treatment response, and the like.
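A minimal sketch of the second stage of this pipeline (all names are assumptions, and the thresholding rule is passed in as a callable): given an initial 3D hotspot label map produced by a machine learning module, each hotspot is refined by keeping only the voxels whose intensity exceeds a threshold derived from that hotspot's intensities.

```python
import numpy as np

def refine_hotspots(pet, hotspot_labels, threshold_fn):
    """Refine an initial ML-produced hotspot label map by thresholding
    each hotspot individually; threshold_fn maps a hotspot's maximum
    intensity to a threshold (e.g., an adaptive thresholding rule)."""
    refined = np.zeros_like(hotspot_labels)
    for label in np.unique(hotspot_labels):
        if label == 0:
            continue                      # 0 = background
        mask = hotspot_labels == label
        threshold = threshold_fn(pet[mask].max())
        refined[mask & (pet > threshold)] = label
    return refined
```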

Figure 20 demonstrates the improved performance of the machine-learning-based segmentation approach compared with a conventional thresholding method. In the machine-learning-based approach, hotspots were detected and segmented by first using machine learning modules as described herein (e.g., in Section E), followed by refinement using an analytical model implementing a version of the adaptive thresholding techniques described in Sections F and G. The conventional thresholding method was performed using a fixed threshold, segmenting clusters of voxels with intensities above the fixed threshold value. As shown in Figure 20, whereas the conventional thresholding method produced false positives 2002a and 2002b due to radiopharmaceutical uptake in the urethra, the machine learning segmentation technique correctly ignored the urethral uptake and segmented only the prostate lesions 2004 and 2006.

Figures 21A through 21I compare hotspot segmentation results in the abdominal region produced by a conventional thresholding method (left-hand images) with those produced by a machine learning approach according to certain embodiments described herein (right-hand images). Figures 21A through 21I show a series of 2D slices of a 3D image, moving vertically through the abdominal region, with the hotspot regions identified by each method overlaid. The results shown in the figures illustrate that abdominal uptake is a problem for conventional thresholding methods, with large false-positive regions appearing in the left-hand images. This is likely caused by the large uptake in the kidneys and bladder. Conventional segmentation methods require sophisticated approaches to suppress this uptake and limit these false positives. In contrast, the machine learning model used to segment the images shown in Figures 21A through 21I does not rely on any such suppression and instead learns to ignore such uptake.

I. Example CAD device implementation

This section describes an example CAD device implementation according to certain embodiments described herein. The CAD device described in this example is referred to as "aPROMISE" and uses multiple machine learning modules to perform automated organ segmentation. The example CAD device implementation uses analytical models to perform hotspot detection and segmentation.

The aPROMISE (automated Prostate-Specific Membrane Antigen Imaging Segmentation) example implementation described in this example uses a cloud-based software platform with a web interface at which users can upload body scans in the form of PSMA PET/CT image data as DICOM files, review patient studies, and share study assessments within a team. The software complies with the Digital Imaging and Communications in Medicine (DICOM) 3 standard. Multiple scans can be uploaded for each patient, and the system provides separate review of each study. The software includes a GUI providing a review page that displays the study and allows the user to examine it in a four-panel view showing PET, CT, PET/CT fusion, and maximum intensity projection (MIP) simultaneously, with options to display each view separately. The device is used to review the entire patient study, with image visualization and analysis tools that enable the user to identify and mark regions of interest (ROIs). When reviewing the image data, the user can mark an ROI either by selecting from predefined hotspots, which are highlighted when hovering over a segmented region with the mouse pointer, or by manual drawing (i.e., selecting individual voxels in an image slice for inclusion in the hotspot). Quantitative analysis is performed automatically for selected or (manually) drawn hotspots. The user can review the results of this quantitative analysis and decide which hotspots should be reported as suspicious lesions. In aPROMISE, a region of interest (ROI) refers to a contiguous sub-portion of an image; a hotspot refers to an ROI with high local intensity (e.g., indicative of high uptake, e.g., relative to its surroundings); and a lesion refers to a user-defined or user-selected ROI regarded as suspicious for disease.

To create a report, the software of the example implementation requires the signing user to confirm quality control and to electronically sign a report preview. The signed report is saved in the device and can be exported as a JPG or DICOM file.

The aPROMISE device is implemented in a microservices architecture, as described in further detail herein and shown in Figures 29A and 29B.

i. Workflow

Figure 22 depicts the aPROMISE device workflow, from uploading DICOM files to exporting an electronically signed report. When logged in, a user can import DICOM files into aPROMISE. The imported DICOM files are uploaded to a patient list, where the user can click on a patient to display the corresponding studies available for review. The layout of the patient list is shown in Figure 23.

This view 2300 lists all patients within the team that have uploaded studies and displays the patient information (name, ID, and gender), the date of the most recent study upload, and the study status. The study status indicates, for each patient, studies that are ready for review (blue symbol, 2302), studies with errors (red symbol, 2304), studies being computed (orange symbol, 2306), and studies with available reports (black symbol, 2308). The number in the upper right corner of each status symbol indicates the number of studies with that status for the patient. Review of a study is initiated by clicking on the patient, selecting the study, and indicating whether the patient has undergone prostatectomy. The study data are then loaded and displayed in the review window.

Figure 24 shows the review window 2400, in which the user can examine the PET/CT image data. Lesions are manually marked and reported by the user, who selects either from predefined hotspots segmented by the software, or from user-defined hotspots created by using drawing tools to select voxels for inclusion as hotspots in the program. The predefined hotspots (regions of interest with high local intensity uptake) are automatically segmented using dedicated methods for soft tissue (prostate and lymph nodes) and for bone, and are highlighted when hovering over a segmented region with the mouse pointer. The user can choose to turn on a segmentation display option to visually present the segmentations of all predefined hotspots simultaneously. Selected or drawn hotspots are the subject of automatic quantitative analysis and are detailed in panels 2402, 2422, and 2442.

The collapsible panel 2402 on the left summarizes the patient and study information extracted from the DICOM data. Panel 2402 also displays and lists quantitative information about the hotspots selected by the user. Hotspot location and type are verified manually (T: localized in the primary tumor; N: regional metastatic disease; Ma/b/c: distant metastatic disease in lymph nodes, bone, and soft tissue). The device displays the automated quantitative analysis (SUVmax, SUVpeak, SUVmean, lesion volume, and lesion index (LI)) for the user-selected hotspots, allowing the user to review and decide which hotspots to report as lesions in the standardized report.

The middle panel 2422 contains a four-panel view of the DICOM image data. The upper left shows the CT image, the upper right the PET/CT fusion view, the lower left the PET image, and the lower right the MIP.

MIP is a visualization method for volumetric data that displays 2D projections of a 3D image volume from various viewing angles. MIP imaging is described in Wallis JW, Miller TR, Lerner CA, Kleerup EC. Three-dimensional display in nuclear medicine. IEEE Trans Med Imaging. 1989;8(4):297-303. doi: 10.1109/42.41482. PMID: 18230529.
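For axis-aligned viewing directions, a MIP reduces to a single maximum reduction along the viewing axis (a minimal sketch, assuming a (z, y, x) volume layout; names are illustrative):

```python
import numpy as np

def mip_views(volume):
    """Axis-aligned maximum intensity projections of a (z, y, x) volume."""
    return {
        "axial": volume.max(axis=0),     # view along z
        "coronal": volume.max(axis=1),   # view along y
        "sagittal": volume.max(axis=2),  # view along x
    }
```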

The collapsible right panel 2442 includes the following visualization controls for optimizing image review, along with shortcut keys for manipulating the images for review purposes:

Viewport:
●   show or hide the crosshair
●   fade option for the PET/CT fusion image
●   selection of which standard nuclear medicine color map is used to visualize the PET tracer uptake intensity.

SUV and CT windowing:
●   Windowing of the image, also referred to as contrast stretching, histogram modification, or contrast enhancement, in which the intensities of the image are manipulated to change the appearance of the picture so as to highlight particular structures.
●   In the SUV window, the windowing presets for SUV intensity can be adjusted with a slider or with shortcut keys.
●   In the CT window, windowing presets for Hounsfield intensity can be selected from a drop-down list using shortcut keys or by click-and-drag input, where the brightness of the image is adjusted via the window level and the contrast via the window width.

Segmentation:
●   Organ segmentation display options to turn on or off visualization of the reference organ segmentation or the whole-body segmentation.
●   The user can select which panel views display the organ segmentation.
●   Hotspot segmentation display options to turn on or off the presentation of predefined hotspots in a selected region: the pelvic region, in bone, or all hotspots.

Viewer gestures:
●   Shortcut keys and combinations for zooming and panning the CT window, changing slices, and hiding hotspots in the review window.

To create a report, the signing user clicks the create report button 2462. Before the report is created, the user must confirm the following quality control items:
●   The image quality is acceptable
●   The PET and CT images are correctly aligned
●   The patient study data are correct
●   The reference values (blood pool, liver) are acceptable
●   The study is not a super bone scan

After confirming the quality control items, a preview of the report is shown for the user to sign electronically. The report contains a patient overview, the total quantitative lesion burden, and a quantitative assessment of the individual lesions from the hotspots selected and confirmed by the user as lesions.

Figure 25 shows an example generated report 2500. The report 2500 contains three sections 2502, 2522, and 2542.

Section 2502 of the report 2500 provides an overview of the patient data obtained from the DICOM tags. It includes a patient overview (patient name, patient ID, age, and weight) and a study data overview (study date, injected dose at the time of injection, the radiopharmaceutical imaging tracer used and its half-life, and the time between tracer injection and acquisition of the image data).

Section 2522 of the report 2500 provides summarized quantitative information from the hotspots the user has selected for inclusion as lesions. The summarized quantitative information shows the total lesion burden per lesion type: primary prostate tumor (T), local/regional pelvic lymph nodes (N), and distant metastases in lymph nodes, bone, or soft-tissue organs (Ma/b/c). The summary section 2522 also shows the quantitative uptake (SUVmean) observed in the reference organs.

Section 2542 of the report 2500 provides the detailed quantitative assessment and location of each lesion from the selected hotspots confirmed by the user. When reviewing the report, the user must electronically sign his or her review of the patient study results, including the selected hotspots and their quantification as lesions. The report is then saved in the device and can be exported as a JPG or DICOM file.

ii. Image processing

Preprocessing DICOM input data

The image input data are presented in DICOM format, which is a rich data representation. DICOM data contain intensity data as well as metadata and communication structures. To optimize the data for use by aPROMISE, the data are passed through a microservice that re-encodes, compresses, and removes unnecessary or sensitive information. The intensity data are also collected from the individual DICOM series and encoded into a single lossless PNG file with an associated JSON metadata file.

Data processing of the PET image data includes estimating the SUV (standardized uptake value) factor, which is included in the JSON metadata file. The SUV factor is a scalar used to convert image intensities into SUV values. The SUV factor is computed according to the QIBA (Quantitative Imaging Biomarkers Alliance) guidelines.

Algorithmic image processing
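A hedged sketch of a body-weight SUV factor in the spirit of the QIBA guidance (the exact corrections applied by aPROMISE are not given here; the function name, arguments, and unit choices are assumptions): the injected dose is decay-corrected to the scan time, and the resulting scalar converts an activity concentration in Bq/mL to SUV.

```python
def suv_factor(body_weight_kg, injected_dose_bq, half_life_s, delay_s):
    """Scalar converting activity concentration (Bq/mL) to body-weight
    SUV: SUV = concentration * weight_g / decay-corrected dose."""
    decayed_dose_bq = injected_dose_bq * 2.0 ** (-delay_s / half_life_s)
    return body_weight_kg * 1000.0 / decayed_dose_bq

# suv_image = bq_per_ml_image * suv_factor(...)
```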

Figure 26 shows an example image processing workflow (process) 2600.

aPROMISE uses CNN (convolutional neural network) models to segment 2602 the patient's bones and selected organs. The organ segmentation 2602 enables automated computation of standardized uptake value (SUV) references 2604 in the patient's aorta and liver. The SUV references for the aorta and liver are then used as reference values when determining certain SUV-based quantitative indices, such as the lesion index (LI) and the intensity-weighted tissue lesion volume (ITLV). A detailed description of the quantitative indices is provided in Table 6 below.

Lesions are manually marked and reported 2608 by the user, who selects either from predefined hotspots segmented by the software 2608a, or from user-defined hotspots 2608b created by using drawing tools to select voxels for inclusion as hotspots within the GUI. The predefined hotspots (regions of interest with high local intensity uptake) are automatically segmented using dedicated methods for soft tissue (prostate and lymph nodes) and for bone (e.g., as shown in Figure 28, one specific segmentation method may be used for bone and another for soft-tissue regions). Based on the organ segmentation, the software determines the type and location of selected hotspots in the prostate, lymph node, or bone regions. The determined types and locations are displayed in the list of selected hotspots shown in panel 2402 of the viewer 2400. The types and locations of selected hotspots in other regions (e.g., not located in the prostate, lymph node, or bone regions) are added manually by the user. The user can add and edit the types and locations of all hotspots at any time during hotspot selection, where applicable. Hotspot types are determined using the miTNM system, a clinical standard and notation system for reporting the spread of cancer. In this approach, individual hotspots are assigned a type according to a letter-based code indicating particular physical features, as follows:
●   T indicates the primary tumor
●   N indicates nearby lymph nodes affected by the primary tumor
●   M indicates distant metastasis

For distant metastatic lesions, localization is grouped into an a/b/c system corresponding to extrapelvic lymph nodes (a), bone (b), and soft-tissue organs (c).

For all hotspots selected for inclusion as lesions, SUV values and indices are calculated 2610 and displayed in the report.

Organ Segmentation in CT

Organ segmentation 2602 is performed using the CT image as input. Starting with two coarse segmentations of the full image, smaller image sections are extracted, selected so as to contain a given set of organs. Fine segmentation of the organs is then performed on each image section. Finally, all segmented organs from all image sections are assembled into the complete image segmentation displayed in aPROMISE. A successfully completed segmentation identifies 52 distinct bones and 13 soft-tissue organs, as visualized in Figure 27 and presented in Table 5. Both the coarse-segmentation procedure and the fine-segmentation procedure consist of three steps:
1. preprocessing the CT image,
2. CNN segmentation, and
3. post-processing the segmentation.

Preprocessing the CT image prior to coarse segmentation comprises three steps: (1) removing image slices that represent only air (e.g., having <= 0 Hounsfield units); (2) resampling the image to a fixed size; and (3) normalizing the image based on the mean and standard deviation of the training data, as described below.
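The three preprocessing steps above can be sketched as follows. This is a minimal illustration rather than the aPROMISE implementation: the function name, the fixed target shape, and the nearest-neighbour index resampling are assumptions (a real pipeline would use proper interpolation).

```python
import numpy as np

def preprocess_ct(volume_hu, train_mean, train_std, target_shape=(64, 64, 64)):
    """Sketch of the coarse-segmentation preprocessing: air-slice removal,
    resampling to a fixed size, and training-statistics normalization."""
    # 1. Drop axial slices that contain only air (all voxels <= 0 HU).
    keep = [i for i in range(volume_hu.shape[0]) if (volume_hu[i] > 0).any()]
    trimmed = volume_hu[keep]

    # 2. Resample to a fixed grid via nearest-neighbour index lookup.
    idx = [np.linspace(0, s - 1, t).round().astype(int)
           for s, t in zip(trimmed.shape, target_shape)]
    resampled = trimmed[np.ix_(*idx)]

    # 3. Normalize with the training-set mean and standard deviation.
    return (resampled - train_mean) / train_std
```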

The CNN model performs semantic segmentation, in which each pixel of the input image is assigned a label corresponding to the background or to one of the segmented organs, resulting in a label map of the same size as the input data.

Post-processing is performed after segmentation and comprises the following steps:
-    Absorbing adjacent pixel clusters, one at a time, until no such clusters remain.
-    Removing all clusters that are not the largest for each label.
-    Discarding bone parts from the segmentation; some segmentation models segment bone parts as reference points when segmenting soft tissue, and the bone parts from these models are removed once segmentation is complete.
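The "keep only the largest cluster per label" step can be sketched with connected-component labeling. The function name and the use of scipy's `ndimage.label` are illustrative assumptions, not the aPROMISE implementation:

```python
import numpy as np
from scipy import ndimage

def keep_largest_cluster_per_label(label_map):
    """Post-processing sketch: for every organ label, keep only the largest
    connected cluster of voxels and clear the rest (set to background 0)."""
    cleaned = np.zeros_like(label_map)
    for organ in np.unique(label_map):
        if organ == 0:  # 0 = background
            continue
        clusters, n = ndimage.label(label_map == organ)
        if n == 0:
            continue
        sizes = np.bincount(clusters.ravel())[1:]  # voxel count per cluster
        largest = np.argmax(sizes) + 1             # cluster ids start at 1
        cleaned[clusters == largest] = organ
    return cleaned
```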

Two different coarse-segmentation neural networks and ten different fine-segmentation neural networks are used, including one for segmentation of the prostate. The prostate is not segmented if the patient underwent a prostatectomy prior to the examination (information provided by the user when verifying the patient's study background before opening the study for review). The combinations of coarse and fine segmentation, and the body parts each combination provides, are presented in Table 5.

Table 5: Overview of how the coarse- and fine-segmentation networks are combined to segment different body parts

| Organ / Bone | Coarse-segmentation network | Fine-segmentation network |
| Right lung | Coarse Segmentation-02 | Fine Segmentation-Right Lung-01 |
| Left lung | Coarse Segmentation-02 | Fine Segmentation-Left Lung-01 |
| Left/right femur; left/right gluteus maximus | Coarse Segmentation-04 | Fine Segmentation-Legs-01 |
| Left/right hip bone; sacrum and coccyx; urinary bladder | Coarse Segmentation-04 | Fine Segmentation-Pelvis-Non-Prostate-01 |
| Liver; left/right kidney; gallbladder; spleen | Coarse Segmentation-02 | Fine Segmentation-Abdomen-02 |
| Right ribs 1-12; right scapula; right clavicle | Coarse Segmentation-02 | Fine Segmentation-Right Upper Body Bones-02 |
| Left ribs 1-12; left scapula; left clavicle | Coarse Segmentation-02 | Fine Segmentation-Left Upper Body Bones-02 |
| Cervical vertebrae; thoracic vertebrae 1-12; lumbar vertebrae 1-5; sternum | Coarse Segmentation-02 | Fine Segmentation-Spine-02 |
| Aorta, thoracic part; aorta, abdominal part | Coarse Segmentation-02 | Fine Segmentation-Aorta-01 |
| Prostate* | Coarse Segmentation-02 | Fine Segmentation-Pelvic Region-Mixed |

*The additional segmentation network applies only to patients with a prostate.

Training the CNN models involves an iterative minimization problem, in which the training algorithm updates the model parameters to reduce the segmentation error. The segmentation error is defined as the deviation from perfect overlap between the manual segmentation and the CNN model segmentation. Each neural network used for organ segmentation is trained to configure optimal parameters and weights. As described above, the training data used to develop the neural networks for aPROMISE consist of low-dose CT images with manually segmented and labeled body parts. The CT images used to train the segmentation networks were collected as part of the NIMSA project (http://nimsa.se/) and during a Phase II clinical trial of the drug candidate 99mTc-MIP-1404 registered at clinicaltrials.gov (https://www.clinicaltrials.gov/ct2/show/NCT01667536?term=99mTc-MIP-1404&draw=2&rank=5). The NIMSA data consist of 184 patients and the 99mTc-MIP-1404 data consist of 62 patients.

Calculating Reference Values (SUV References) in PSMA PET

Reference values are used when assessing the physiological uptake of PSMA tracers. Current clinical practice uses the SUV intensity in an identified volume corresponding to the blood pool, the liver, or both tissues as a reference value. For PSMA tracer intensity, the blood pool is measured in the aortic volume.

In aPROMISE, the SUV intensities in the volumes corresponding to the thoracic part of the aorta and to the liver are used as reference values. The uptake shown in the PET image, together with the organ segmentations of the aortic and liver volumes, forms the basis for calculating the SUV reference in each respective organ.

Aorta. To ensure that the portions of the image corresponding to the vessel wall are not included in the volume used to calculate the SUV reference for the aortic region, the segmented aortic volume is eroded. The amount of erosion (3 mm) was chosen heuristically to balance the trade-off between retaining as much aortic volume as possible and excluding the vessel-wall region. The reference SUV for the blood pool is a robust mean of the SUVs of the pixels inside the eroded segmentation mask identifying the aortic volume. The robust mean is computed as the mean of the values within the interquartile range.
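The robust mean described above (the plain mean of the values that fall within the interquartile range) might be computed as follows; the function name is illustrative:

```python
import numpy as np

def robust_mean(suv_values):
    """Mean of the SUVs inside the interquartile range, which discards both
    low and high outliers (e.g., residual vessel-wall or noise voxels)."""
    q1, q3 = np.percentile(suv_values, [25, 75])
    inside = suv_values[(suv_values >= q1) & (suv_values <= q3)]
    return float(inside.mean())
```

Note how a single extreme value no longer drags the estimate, unlike an ordinary mean.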

Liver. When measuring the reference value in the liver volume, the segmentation is eroded along its edges to create a buffer that accommodates possible misalignment between the PET and CT images. The amount of erosion (9 mm) was determined heuristically by manual inspection of images with PET/CT misalignment.

Cysts or malignancies in the liver can produce regions of low tracer uptake in the liver. To reduce the influence of such local differences in tracer uptake on the calculation of the SUV reference, a two-component Gaussian mixture model approach according to the embodiment described above in Section B.iii with respect to Figure 2A is used. Specifically, a two-component Gaussian mixture model is fitted to the SUVs of the voxels inside the reference organ mask, and the major and minor components of the distribution are identified. The SUV reference for the liver volume is initially computed as the mean SUV of the major component of the Gaussian mixture model. If the minor component is determined to have a mean SUV greater than that of the major component, the liver reference organ mask remains unchanged, unless the weight of the minor component exceeds 0.33, in which case an error is thrown and the liver reference value is not calculated.

If the minor component has a mean SUV smaller than that of the major component, a separation threshold is computed, for example as shown in Figure 2A. The separation threshold is defined such that:
●   the probability of belonging to the major component for an SUV at the threshold or greater, and
●   the probability of belonging to the minor component for an SUV at the threshold or smaller,
are the same.
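One possible reading of this definition equates the weighted probability mass of the major component at or above the threshold with that of the minor component at or below it. Under that assumption, and assuming Gaussian components, the threshold can be found by bisection, since the imbalance between the two masses decreases monotonically with the threshold. All names and the exact balance criterion here are interpretive assumptions, not taken from the text:

```python
import math

def normal_cdf(x, mu, sigma):
    """Standard closed form for the Gaussian CDF via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def separation_threshold(w_major, mu_major, sd_major,
                         w_minor, mu_minor, sd_minor):
    """Bisection solve for the threshold t at which the weighted mass of the
    major component at or above t equals the weighted mass of the minor
    component at or below t (one reading of the definition in the text)."""
    def imbalance(t):
        above_major = w_major * (1.0 - normal_cdf(t, mu_major, sd_major))
        below_minor = w_minor * normal_cdf(t, mu_minor, sd_minor)
        return above_major - below_minor   # strictly decreasing in t
    lo = mu_minor - 10.0 * sd_minor
    hi = mu_major + 10.0 * sd_major
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if imbalance(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With equal weights and equal standard deviations the threshold falls, by symmetry, midway between the two means.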

The reference mask is then refined by removing the pixels below the separation threshold.

Predefinition of Hotspots in PSMA PET

Referring to Figure 28, in the aPROMISE implementation of this example, segmentation by aPROMISE of regions of high local intensity in the PSMA PET image (so-called predefined hotspots) is performed by an analysis model 2800 based on input from the PET image 2802 and an organ segmentation map 2804 determined from the CT image and projected into PET space. For the software to segment hotspots in bone, the original PET image 2802 is used, and to segment hotspots in lymph and prostate, the PET image is processed by suppressing normal PET tracer uptake 2806. A graphical overview of the analysis model used in this example implementation is presented in Figure 28. The analysis method, explained further below, is designed to find regions of high local uptake intensity that may represent ROIs, without excessive irrelevant regions or PET tracer background noise. The analysis method was developed from a labeled dataset comprising PSMA PET/CT images.

Suppression 2806 of normal PSMA tracer uptake intensity is performed in one high-uptake organ at a time. First, the uptake intensity in the kidneys is suppressed, then the liver, and finally the urinary bladder. Suppression is performed by applying an estimated suppression map to the high-intensity regions of the PET image. The suppression map is created using the organ map previously segmented in CT, projected onto and adjusted to the PET image, thereby creating a PET-adjusted organ mask.

The adjustment corrects small misalignments between the PET and CT images. Using the adjusted map, a background image is computed. This background image is subtracted from the original PET image, producing an uptake-estimate image. A suppression map is then estimated from the uptake-estimate image using an exponential function that depends on the Euclidean distance from the voxels outside the segmentation to the PET-adjusted organ mask. An exponential function is used because the uptake intensity decays exponentially with distance from the organ. Finally, the suppression map is subtracted from the original PET image, thereby suppressing the intensity associated with high normal uptake in the organs.
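A sketch of the exponential fall-off step using a Euclidean distance transform. The decay constant, the use of the in-mask mean as the organ uptake level, and the function name are all assumptions for illustration; the text does not give these values:

```python
import numpy as np
from scipy import ndimage

def suppression_map(uptake_estimate, organ_mask, decay_mm=10.0, spacing_mm=1.0):
    """Inside the PET-adjusted organ mask the estimated organ uptake is
    suppressed in full; outside it, the suppression falls off exponentially
    with the Euclidean distance to the mask."""
    # Distance (in mm) from every voxel to the nearest organ-mask voxel.
    dist = ndimage.distance_transform_edt(~organ_mask, sampling=spacing_mm)
    # Organ uptake level taken here as the mean estimated uptake in the mask.
    organ_level = float(uptake_estimate[organ_mask].mean())
    return organ_level * np.exp(-dist / decay_mm)
```

The resulting map would then be subtracted from the original PET image, as described above.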

After suppression of normal PSMA tracer uptake intensity, hotspots are segmented in the prostate and lymph 2812 using the organ segmentation mask 2804 and the suppressed PET image 2808 produced by the suppression step 2806. Prostate hotspots are not segmented for patients who have undergone a prostatectomy. Bone and lymph hotspot segmentation applies to all patients. Each hotspot is segmented using a fast marching method, in which the underlying PET image is used as a speed map and the volume of an input region determines the travel time. The input region is also used as an initial segmentation mask to identify the volume of interest for the fast marching method, and it is created differently depending on whether hotspot segmentation is performed in bone or in soft tissue. Bone hotspots are segmented using the fast marching method together with a Difference of Gaussians (DoG) filtering approach 2810, and lymph and, where applicable, prostate hotspots are segmented using the fast marching method together with a Laplacian of Gaussian (LoG) filtering approach 2812.

For detection and segmentation of bone hotspots, a skeletal region mask is created to identify the skeletal volumes in which bone hotspots can be detected. The skeletal region mask includes the following skeletal regions: thoracic vertebrae (1-12), lumbar vertebrae (1-5), clavicles (L+R), scapulae (L+R), sternum, ribs (L+R, 1-12), hip bones (L+R), femurs (L+R), sacrum, and coccyx. The masked image is normalized based on the mean intensity of healthy bone tissue in the PET image, which is performed by iteratively normalizing the image using DoG filtering. The filter sizes used in the DoG are 3 mm/spacing and 5 mm/spacing. The DoG filter acts as a band-pass filter on the image, attenuating signals farther from the center of the band, which emphasizes clusters of voxels with intensities higher than their surroundings. Thresholding the normalized image obtained in this way produces voxel clusters that are distinguishable from the background and thus segmented, creating a 3D segmentation map 2814 identifying hotspot volumes located in the bone regions.
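A minimal sketch of the DoG band-pass and thresholding step. The sigma parameterization follows the 3 mm / 5 mm sizes in the text, but the threshold value, function name, and omission of the iterative normalization and fast marching stages are illustrative simplifications, not the aPROMISE settings:

```python
import numpy as np
from scipy import ndimage

def detect_bone_hotspots(pet, bone_mask, sigma_small=3.0, sigma_large=5.0,
                         threshold=0.05):
    """Band-pass filter the bone-masked PET image with a Difference of
    Gaussians, threshold it, and label the resulting voxel clusters."""
    masked = np.where(bone_mask, pet, 0.0)
    dog = (ndimage.gaussian_filter(masked, sigma_small)
           - ndimage.gaussian_filter(masked, sigma_large))
    hot = (dog > threshold) & bone_mask
    # Connected components give a 3D segmentation map of candidate hotspots.
    labels, n_hotspots = ndimage.label(hot)
    return labels, n_hotspots
```

In aPROMISE the thresholded clusters would seed the fast marching segmentation; this sketch stops at labeling.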

For detection and segmentation of lymph hotspots, a lymph region mask is created in which hotspots corresponding to potential lymph nodes can be detected. The lymph region mask comprises the voxels within a bounding box enclosing all segmented bone and organ regions, but excludes the voxels within the segmented organs themselves (with the exception of the lung volumes, whose voxels are retained). Another mask, a prostate region mask, is created in which hotspots corresponding to potential prostate tumors can be detected. The prostate region mask is a one-voxel dilation of the prostate volume determined in the organ segmentation step described herein. Applying the lymph region mask to the PET image produces a masked image comprising the voxels within the lymph region (e.g., and excluding other voxels), and likewise, applying the prostate region mask to the PET image produces a masked image comprising the voxels within the prostate volume.
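The construction of the two soft-tissue search regions might be sketched as follows. The label conventions and the function name are assumptions; only the operations (bounding box minus segmented organs with lungs retained, and a one-voxel dilation of the prostate) come from the text:

```python
import numpy as np
from scipy import ndimage

def lymph_and_prostate_masks(organ_labels, lung_label, prostate_label):
    """Lymph mask: bounding box around all segmented structures minus the
    organs themselves (lung voxels kept). Prostate mask: one-voxel dilation
    of the segmented prostate."""
    segmented = organ_labels > 0
    bbox = np.zeros_like(segmented)
    if segmented.any():
        # Tight bounding box enclosing every segmented bone/organ voxel.
        slices = tuple(slice(idx.min(), idx.max() + 1)
                       for idx in np.nonzero(segmented))
        bbox[slices] = True
    keep_inside = (organ_labels == lung_label)   # lungs remain searchable
    lymph_mask = bbox & (~segmented | keep_inside)

    prostate_mask = ndimage.binary_dilation(organ_labels == prostate_label)
    return lymph_mask, prostate_mask
```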

Soft-tissue hotspots (i.e., lymph and prostate hotspots) are detected by applying LoG filters of three different sizes (one with 4 mm/spacing XYZ, one with 8 mm/spacing XYZ, and one with 12 mm/spacing XYZ) to the lymph- and/or prostate-masked images, respectively, producing three LoG-filtered images for each of the two soft-tissue types (prostate and lymph). For each soft-tissue type, the three corresponding LoG-filtered images are thresholded using a value of negative 70% of the aortic SUV reference, and local minima are found using a 3x3x3 minimum filter. This approach produces three filtered images, each comprising voxel clusters corresponding to hotspots. The three filtered images are combined by taking the union of the local minima from the three images to produce a hotspot region mask. Each component in the hotspot region mask is segmented using a level-set method to determine one or more hotspot volumes. This segmentation method is performed both for prostate and for lymph hotspots, thereby automatically segmenting hotspots in the prostate and lymph regions.

iii. Quantification

Table 6 identifies the values calculated by the software and displayed for each hotspot after user selection. ITLV is a summary value and is displayed only in the report. All calculations are based on SUV values from PSMA PET/CT.

Table 6. Values calculated by aPROMISE

Values reported for each selected hotspot and suspicious lesion:

SUVmax — The highest uptake in a single voxel of the hotspot. (equation image: Figure 02_image005)

SUVmean — Calculated as the mean uptake over all voxels representing the hotspot. (equation image: Figure 02_image007)

SUVpeak — Calculated as the mean of all voxels whose midpoints lie within 5 mm of the midpoint of the voxel in which SUVmax is located. (equation image: Figure 02_image009)

Volume — Calculated as the number of voxels multiplied by the voxel volume, displayed in milliliters (ml). (equation image: Figure 02_image011)

LI — Lesion Index, calculated from the SUV reference values of the aorta (also referred to as the blood pool) and the liver. The Lesion Index is a real number between 0 and 3 obtained by linear interpolation of the lesion's SUVmean within the associated spans. (equation image: Figure 02_image013) If the SUV reference for the liver or the aorta cannot be calculated, or if the aortic value is higher than the liver value, the Lesion Index is not calculated and is displayed as "-".

Values reported at the patient level for each lesion type:

ITLV — Intensity-Weighted Tissue Lesion Volume, calculated for each lesion type. ITLV is the weighted sum of the lesion volumes of a given type, where the weights are the lesion indices. (equation image: Figure 02_image015)
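The LI and ITLV computations in Table 6 might look as follows. The ITLV sum follows directly from the text; the exact knot placement for the LI interpolation (0, blood-pool reference, liver reference, twice the liver reference) is an assumption, since the equation images defining the spans are not reproduced here:

```python
def lesion_index(suv_mean, blood_ref, liver_ref):
    """Piecewise-linear map of a lesion's SUVmean onto [0, 3], using the
    aorta (blood-pool) and liver SUV references as knots. Knot placement
    is an illustrative assumption about the interpolation spans."""
    if blood_ref is None or liver_ref is None or blood_ref >= liver_ref:
        return None                      # displayed as "-" in the report
    knots = [0.0, blood_ref, liver_ref, 2.0 * liver_ref]
    if suv_mean >= knots[3]:
        return 3.0
    for i in range(3):
        lo, hi = knots[i], knots[i + 1]
        if suv_mean <= hi:
            return i + (suv_mean - lo) / (hi - lo)

def itlv(lesions):
    """ITLV: sum of lesion volumes weighted by their lesion indices.
    `lesions` is an iterable of (lesion_index, volume_ml) pairs."""
    return sum(li * volume for li, volume in lesions)
```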
iv. Web-Based Platform Architecture

aPROMISE uses a microservices architecture. Deployment to AWS is handled by the CloudFormation scripts found in the AWS code repository. The aPROMISE cloud architecture is provided in Figure 29A and a diagram of the microservice communication design is provided in Figure 29B.

J. Imaging Agents

i. PET Imaging Radionuclide-Labeled PSMA-Binding Agents

In certain embodiments, the radionuclide-labeled PSMA-binding agent is one suitable for PET imaging.

In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises [18F]DCFPyL (also referred to as PyL™; also referred to as DCFPyL-18F):

(chemical structure: Figure 02_image017)

[18F]DCFPyL, or a pharmaceutically acceptable salt thereof.

In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises [18F]DCFBC:

(chemical structure: Figure 02_image019)

[18F]DCFBC, or a pharmaceutically acceptable salt thereof.

In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises 68Ga-PSMA-HBED-CC (also referred to as 68Ga-PSMA-11):

(chemical structure: Figure 02_image021)

68Ga-PSMA-HBED-CC, or a pharmaceutically acceptable salt thereof.

In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises PSMA-617:

(chemical structure: Figure 02_image023)

PSMA-617, or a pharmaceutically acceptable salt thereof. In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises 68Ga-PSMA-617 (i.e., PSMA-617 labeled with 68Ga), or a pharmaceutically acceptable salt thereof. In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises 177Lu-PSMA-617 (i.e., PSMA-617 labeled with 177Lu), or a pharmaceutically acceptable salt thereof.

In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises PSMA-I&T:

(chemical structure: Figure 02_image025)

PSMA-I&T, or a pharmaceutically acceptable salt thereof. In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises 68Ga-PSMA-I&T (i.e., PSMA-I&T labeled with 68Ga), or a pharmaceutically acceptable salt thereof.

In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises PSMA-1007:

(chemical structure: Figure 02_image027)

PSMA-1007, or a pharmaceutically acceptable salt thereof. In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises 18F-PSMA-1007 (i.e., PSMA-1007 labeled with 18F), or a pharmaceutically acceptable salt thereof.

ii. SPECT Imaging Radionuclide-Labeled PSMA-Binding Agents

In certain embodiments, the radionuclide-labeled PSMA-binding agent is one suitable for SPECT imaging.

In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises 1404 (also referred to as MIP-1404):

(chemical structure: Figure 02_image029)

1404, or a pharmaceutically acceptable salt thereof.

In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises 1405 (also referred to as MIP-1405):

(chemical structure: Figure 02_image031)

1405, or a pharmaceutically acceptable salt thereof.

In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises 1427 (also referred to as MIP-1427):

(chemical structure: Figure 02_image033)

1427, or a pharmaceutically acceptable salt thereof.

In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises 1428 (also referred to as MIP-1428):

(chemical structure: Figure 02_image035)

1428, or a pharmaceutically acceptable salt thereof.

In certain embodiments, the PSMA-binding agent is labeled with a radionuclide by chelating a radioisotope of a metal to the agent [e.g., a radioisotope of technetium (Tc) (e.g., technetium-99m (99mTc)); e.g., a radioisotope of rhenium (Re) (e.g., rhenium-188 (188Re); e.g., rhenium-186 (186Re)); e.g., a radioisotope of yttrium (Y) (e.g., 90Y); e.g., a radioisotope of lutetium (Lu) (e.g., 177Lu); e.g., a radioisotope of gallium (Ga) (e.g., 68Ga; e.g., 67Ga); e.g., a radioisotope of indium (In) (e.g., 111In); e.g., a radioisotope of copper (Cu) (e.g., 67Cu)].

In certain embodiments, 1404 is labeled with a radionuclide (e.g., chelated to a radioisotope of a metal). In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises 99mTc-MIP-1404, which is 1404 labeled with (e.g., chelated to) 99mTc:

(chemical structure: Figure 02_image037)

99mTc-MIP-1404, or a pharmaceutically acceptable salt thereof. In certain embodiments, 1404 may be chelated to other metal radioisotopes [e.g., a radioisotope of rhenium (Re) (e.g., rhenium-188 (188Re); e.g., rhenium-186 (186Re)); e.g., a radioisotope of yttrium (Y) (e.g., 90Y); e.g., a radioisotope of lutetium (Lu) (e.g., 177Lu); e.g., a radioisotope of gallium (Ga) (e.g., 68Ga; e.g., 67Ga); e.g., a radioisotope of indium (In) (e.g., 111In); e.g., a radioisotope of copper (Cu) (e.g., 67Cu)] to form a compound having a structure similar to that shown above for 99mTc-MIP-1404, with the other metal radioisotope substituted for 99mTc.

In certain embodiments, 1405 is labeled with a radionuclide (e.g., chelated to a radioisotope of a metal). In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises 99mTc-MIP-1405, which is 1405 labeled with (e.g., chelated to) 99mTc:

(chemical structure: Figure 02_image039)

99mTc-MIP-1405, or a pharmaceutically acceptable salt thereof. In certain embodiments, 1405 may be chelated to other metal radioisotopes [e.g., a radioisotope of rhenium (Re) (e.g., rhenium-188 (188Re); e.g., rhenium-186 (186Re)); e.g., a radioisotope of yttrium (Y) (e.g., 90Y); e.g., a radioisotope of lutetium (Lu) (e.g., 177Lu); e.g., a radioisotope of gallium (Ga) (e.g., 68Ga; e.g., 67Ga); e.g., a radioisotope of indium (In) (e.g., 111In); e.g., a radioisotope of copper (Cu) (e.g., 67Cu)] to form a compound having a structure similar to that shown above for 99mTc-MIP-1405, with the other metal radioisotope substituted for 99mTc.

In certain embodiments, 1427 is labeled with (e.g., chelated to) a radioisotope of a metal to form a compound according to the following formula:

(chemical structure: Figure 02_image041)

1427 chelated to a metal, or a pharmaceutically acceptable salt thereof, wherein M is the metal radioisotope with which 1427 is labeled [e.g., a radioisotope of technetium (Tc) (e.g., technetium-99m (99mTc)); e.g., a radioisotope of rhenium (Re) (e.g., rhenium-188 (188Re); e.g., rhenium-186 (186Re)); e.g., a radioisotope of yttrium (Y) (e.g., 90Y); e.g., a radioisotope of lutetium (Lu) (e.g., 177Lu); e.g., a radioisotope of gallium (Ga) (e.g., 68Ga; e.g., 67Ga); e.g., a radioisotope of indium (In) (e.g., 111In); e.g., a radioisotope of copper (Cu) (e.g., 67Cu)].

In certain embodiments, 1428 is labeled with (e.g., chelated to) a radioisotope of a metal to form a compound according to the following formula:

(chemical structure: Figure 02_image043)

1428 chelated to a metal, or a pharmaceutically acceptable salt thereof, wherein M is the metal radioisotope with which 1428 is labeled [e.g., a radioisotope of technetium (Tc) (e.g., technetium-99m (99mTc)); e.g., a radioisotope of rhenium (Re) (e.g., rhenium-188 (188Re); e.g., rhenium-186 (186Re)); e.g., a radioisotope of yttrium (Y) (e.g., 90Y); e.g., a radioisotope of lutetium (Lu) (e.g., 177Lu); e.g., a radioisotope of gallium (Ga) (e.g., 68Ga; e.g., 67Ga); e.g., a radioisotope of indium (In) (e.g., 111In); e.g., a radioisotope of copper (Cu) (e.g., 67Cu)].

In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises PSMA I&S:

(chemical structure: Figure 02_image045)

PSMA I&S, or a pharmaceutically acceptable salt thereof. In certain embodiments, the radionuclide-labeled PSMA-binding agent comprises 99mTc-PSMA I&S (i.e., PSMA I&S labeled with 99mTc), or a pharmaceutically acceptable salt thereof.

K. Computer Systems and Network Architectures

As shown in Figure 30, an implementation of a network environment 3000 for providing the systems, methods, and architectures described herein is shown and described. In brief overview, referring now to Figure 30, a block diagram of an exemplary cloud computing environment 3000 is shown and described. The cloud computing environment 3000 may include one or more resource providers 3002a, 3002b, 3002c (collectively, 3002). Each resource provider 3002 may include computing resources. In some implementations, computing resources may include any hardware and/or software used to process data. For example, computing resources may include hardware and/or software capable of executing algorithms, computer programs, and/or computer applications. In some implementations, exemplary computing resources may include application servers and/or databases with storage and retrieval capabilities. Each resource provider 3002 may be connected to any other resource provider 3002 in the cloud computing environment 3000. In some implementations, the resource providers 3002 may be connected over a computer network 3008. Each resource provider 3002 may be connected to one or more computing devices 3004a, 3004b, 3004c (collectively, 3004) over the computer network 3008.

The cloud computing environment 3000 may include a resource manager 3006. The resource manager 3006 may be connected to the resource providers 3002 and the computing devices 3004 over the computer network 3008. In some implementations, the resource manager 3006 may facilitate the provision of computing resources by one or more resource providers 3002 to one or more computing devices 3004. The resource manager 3006 may receive a request for a computing resource from a particular computing device 3004. The resource manager 3006 may identify one or more resource providers 3002 capable of providing the computing resource requested by the computing device 3004. The resource manager 3006 may select a resource provider 3002 to provide the computing resource. The resource manager 3006 may facilitate a connection between the resource provider 3002 and the particular computing device 3004. In some implementations, the resource manager 3006 may establish a connection between a particular resource provider 3002 and a particular computing device 3004. In some implementations, the resource manager 3006 may redirect a particular computing device 3004 to a particular resource provider 3002 with the requested computing resource.
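The resource manager's brokering flow described above (receive a request, identify capable providers, select one, and facilitate or redirect the connection) can be sketched as follows. This is only an illustrative sketch; the class and method names (ResourceProvider, ResourceManager, handle_request, etc.) are hypothetical and are not part of the disclosed system:

```python
# Hypothetical sketch of the resource-manager brokering flow; names are
# illustrative, not part of the patent's disclosed implementation.

class ResourceProvider:
    def __init__(self, name, resources):
        self.name = name
        self.resources = set(resources)  # e.g. {"app-server", "database"}

    def can_provide(self, requested):
        return requested in self.resources


class ResourceManager:
    def __init__(self, providers):
        self.providers = list(providers)

    def handle_request(self, requested):
        # Identify providers capable of supplying the requested resource.
        capable = [p for p in self.providers if p.can_provide(requested)]
        if not capable:
            return None
        # Select a provider (here, simply the first capable one) and
        # "redirect" the requesting device by returning the provider's name.
        return capable[0].name


manager = ResourceManager([
    ResourceProvider("provider-a", ["app-server"]),
    ResourceProvider("provider-b", ["database", "app-server"]),
])
print(manager.handle_request("database"))  # provider-b
```

In practice the selection step could weigh load, cost, or locality; the first-match rule above is only the simplest placeholder.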

FIG. 31 shows an example of a computing device 3100 and a mobile computing device 3150 that can be used to implement the techniques described in this disclosure. The computing device 3100 is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The mobile computing device 3150 is intended to represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, and other similar computing devices. The components shown here, their connections and relationships, and their functions are meant to be examples only and are not meant to be limiting.

The computing device 3100 includes a processor 3102, a memory 3104, a storage device 3106, a high-speed interface 3108 connecting to the memory 3104 and multiple high-speed expansion ports 3110, and a low-speed interface 3112 connecting to a low-speed expansion port 3114 and the storage device 3106. Each of the processor 3102, the memory 3104, the storage device 3106, the high-speed interface 3108, the high-speed expansion ports 3110, and the low-speed interface 3112 is interconnected using various buses, and may be mounted on a common motherboard or in other manners as appropriate. The processor 3102 can process instructions for execution within the computing device 3100, including instructions stored in the memory 3104 or on the storage device 3106, to display graphical information for a GUI on an external input/output device, such as a display 3116 coupled to the high-speed interface 3108. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system). Thus, as the term is used herein, where a plurality of functions are described as being performed by a "processor", this encompasses embodiments wherein the plurality of functions are performed by any number of processors (one or more) of any number of computing devices (one or more). Furthermore, where a function is described as being performed by a "processor", this encompasses embodiments wherein the function is performed by any number of processors (one or more) of any number of computing devices (one or more) (e.g., in a distributed computing system).

The memory 3104 stores information within the computing device 3100. In some implementations, the memory 3104 is a volatile memory unit or units. In some implementations, the memory 3104 is a non-volatile memory unit or units. The memory 3104 may also be another form of computer-readable medium, such as a magnetic or optical disk.

The storage device 3106 is capable of providing mass storage for the computing device 3100. In some implementations, the storage device 3106 may be or contain a computer-readable medium, such as a floppy disk device, a hard disk device, an optical disk device, a tape device, a flash memory or other similar solid-state memory device, or an array of devices, including devices in a storage area network or other configurations. Instructions can be stored in an information carrier. The instructions, when executed by one or more processing devices (for example, the processor 3102), perform one or more methods, such as those described above. The instructions can also be stored by one or more storage devices, such as computer- or machine-readable mediums (for example, the memory 3104, the storage device 3106, or memory on the processor 3102).

The high-speed interface 3108 manages bandwidth-intensive operations for the computing device 3100, while the low-speed interface 3112 manages lower-bandwidth-intensive operations. Such allocation of functions is an example only. In some implementations, the high-speed interface 3108 is coupled to the memory 3104, the display 3116 (e.g., through a graphics processor or accelerator), and to the high-speed expansion ports 3110, which may accept various expansion cards (not shown). In the implementation, the low-speed interface 3112 is coupled to the storage device 3106 and the low-speed expansion port 3114. The low-speed expansion port 3114, which may include various communication ports (e.g., USB, Bluetooth®, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, or a scanner, or to a networking device such as a switch or router, e.g., through a network adapter.

The computing device 3100 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 3120, or multiple times in a group of such servers. In addition, it may be implemented in a personal computer, such as a laptop computer 3122. It may also be implemented as part of a rack server system 3124. Alternatively, components from the computing device 3100 may be combined with other components (not shown) in a mobile device, such as the mobile computing device 3150. Each of such devices may contain one or more of the computing device 3100 and the mobile computing device 3150, and an entire system may be made up of multiple computing devices communicating with each other.

The mobile computing device 3150 includes a processor 3152, a memory 3164, an input/output device such as a display 3154, a communication interface 3166, and a transceiver 3168, among other components. The mobile computing device 3150 may also be provided with a storage device, such as a micro-drive or other device, to provide additional storage. Each of the processor 3152, the memory 3164, the display 3154, the communication interface 3166, and the transceiver 3168 is interconnected using various buses, and several of the components may be mounted on a common motherboard or in other manners as appropriate.

The processor 3152 can execute instructions within the mobile computing device 3150, including instructions stored in the memory 3164. The processor 3152 may be implemented as a chipset of chips that include separate and multiple analog and digital processors. The processor 3152 may provide, for example, for coordination of the other components of the mobile computing device 3150, such as control of user interfaces, applications run by the mobile computing device 3150, and wireless communication by the mobile computing device 3150.

The processor 3152 may communicate with a user through a control interface 3158 and a display interface 3156 coupled to the display 3154. The display 3154 may be, for example, a TFT (Thin-Film-Transistor Liquid Crystal Display) display or an OLED (Organic Light Emitting Diode) display, or other appropriate display technology. The display interface 3156 may comprise appropriate circuitry for driving the display 3154 to present graphical and other information to a user. The control interface 3158 may receive commands from a user and convert them for submission to the processor 3152. In addition, an external interface 3162 may provide communication with the processor 3152, so as to enable near area communication of the mobile computing device 3150 with other devices. The external interface 3162 may provide, for example, for wired communication in some implementations, or for wireless communication in other implementations, and multiple interfaces may also be used.

The memory 3164 stores information within the mobile computing device 3150. The memory 3164 can be implemented as one or more of a computer-readable medium or media, a volatile memory unit or units, or a non-volatile memory unit or units. An expansion memory 3174 may also be provided and connected to the mobile computing device 3150 through an expansion interface 3172, which may include, for example, a SIMM (Single In-Line Memory Module) card interface. The expansion memory 3174 may provide extra storage space for the mobile computing device 3150, or may also store applications or other information for the mobile computing device 3150. Specifically, the expansion memory 3174 may include instructions to carry out or supplement the processes described above, and may also include secure information. Thus, for example, the expansion memory 3174 may be provided as a security module for the mobile computing device 3150, and may be programmed with instructions that permit secure use of the mobile computing device 3150. In addition, secure applications may be provided via the SIMM cards, along with additional information, such as placing identifying information on the SIMM card in a non-hackable manner.

The memory may include, for example, flash memory and/or NVRAM memory (non-volatile random access memory), as discussed below. In some implementations, instructions are stored in an information carrier and, when executed by one or more processing devices (for example, the processor 3152), perform one or more methods, such as those described above. The instructions can also be stored by one or more storage devices, such as one or more computer- or machine-readable mediums (for example, the memory 3164, the expansion memory 3174, or memory on the processor 3152). In some implementations, the instructions can be received in a propagated signal, for example, over the transceiver 3168 or the external interface 3162.

The mobile computing device 3150 may communicate wirelessly through the communication interface 3166, which may include digital signal processing circuitry where necessary. The communication interface 3166 may provide for communications under various modes or protocols, such as GSM voice calls (Global System for Mobile communications), SMS (Short Message Service), EMS (Enhanced Messaging Service), MMS messaging (Multimedia Messaging Service), CDMA (code division multiple access), TDMA (time division multiple access), PDC (Personal Digital Cellular), WCDMA (Wideband Code Division Multiple Access), CDMA2000, or GPRS (General Packet Radio Service), among others. Such communication may occur, for example, through the transceiver 3168 using a radio frequency. In addition, short-range communication may occur, such as using a Bluetooth®, Wi-Fi™, or other such transceiver (not shown). In addition, a GPS (Global Positioning System) receiver module 3170 may provide additional navigation- and location-related wireless data to the mobile computing device 3150, which may be used as appropriate by applications running on the mobile computing device 3150.

The mobile computing device 3150 may also communicate audibly using an audio codec 3160, which may receive spoken information from a user and convert it to usable digital information. The audio codec 3160 may likewise generate audible sound for a user, such as through a speaker, e.g., in a handset of the mobile computing device 3150. Such sound may include sound from voice telephone calls, may include recorded sound (e.g., voice messages, music files, etc.), and may also include sound generated by applications operating on the mobile computing device 3150.

The mobile computing device 3150 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a cellular telephone 3180. It may also be implemented as part of a smartphone 3182, a personal digital assistant, or other similar mobile device.

Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, specially designed ASICs (application-specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.

These computer programs (also known as programs, software, software applications, or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms machine-readable medium and computer-readable medium refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term machine-readable signal refers to any signal used to provide machine instructions and/or data to a programmable processor.

To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user, and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.

The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a client computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), and the Internet.

The computing system can include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
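A minimal, self-contained sketch of the client-server relationship described above (two programs interacting over a communication network) is shown below. This is a generic illustration, not the patent's implementation; for compactness a background thread stands in for the remote server program, and the echo protocol is an invented placeholder:

```python
import socket
import threading

def run_server(sock):
    # The "server" program: accept one connection, read a request, reply.
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"echo:" + data)

# In practice the server runs on a separate, remote computer; here a
# thread on localhost keeps the sketch self-contained and runnable.
server_sock = socket.socket()
server_sock.bind(("127.0.0.1", 0))   # port 0: let the OS choose a free port
server_sock.listen(1)
port = server_sock.getsockname()[1]
t = threading.Thread(target=run_server, args=(server_sock,))
t.start()

# The "client" program: connect over the network, send a request,
# receive the server's response.
client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
t.join()
server_sock.close()
print(reply.decode())  # echo:hello
```

The client-server relationship here arises purely from the two programs' roles (one listens and responds, the other connects and requests), mirroring the description above.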

In some implementations, the various modules described herein can be separated, combined, or incorporated into single or combined modules. The modules depicted in the figures are not intended to limit the systems described herein to the software architectures shown therein.

Elements of the different implementations described herein may be combined to form other implementations not specifically set forth above. Elements may be left out of the processes, computer programs, databases, etc. described herein without adversely affecting their operation. In addition, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. Various separate elements may be combined into one or more individual elements to perform the functions described herein.

Throughout the description, where apparatus and systems are described as having, including, or comprising specific components, or where processes and methods are described as having, including, or comprising specific steps, it is contemplated that, additionally, there are apparatus and systems of the present invention that consist essentially of, or consist of, the recited components, and that there are processes and methods according to the present invention that consist essentially of, or consist of, the recited processing steps.

It should be understood that the order of steps or order for performing certain actions is immaterial so long as the invention remains operable. Moreover, two or more steps or actions may be conducted simultaneously.

While the invention has been particularly shown and described with reference to specific preferred embodiments, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

100a: Procedure; 100b: Procedure; 100c: Procedure; 102: 3D functional image; 102a: Radiopharmaceutical; 102b: Positron emission tomography (PET) image; 104: 3D anatomical image; 106: Step; 108: Step; 110: Machine learning module; 112: Machine learning module; 114: First machine learning module; 116: Second machine learning module; 120: Step; 122: Step; 124: Step; 126: Step; 130: Hotspot list; 132: Hotspot map; 140: Step; 200: Procedure; 202: 3D functional image; 208: Step; 210: Step; 212: Step; 214: Step; 216: Step; 218: Step; 242: Separation threshold; 244: Major component; 246: Minor component; 252a: Contour; 252b: Contour; 300: Procedure; 304: Step; 306: Step; 308: Step; 310: Step; 312: Step; 314: Step; 316: Step; 400: Procedure; 404: Step; 406: Step; 408: Step; 410: Step; 500: Procedure; 504: Step; 506: Step; 508: Step; 510: Step; 512: Step; 520: Workflow; 522: Step; 524: Step; 526: Step; 526a: Step; 526b: Step; 528: Step; 530: Step; 532: Step; 902: Functional image; 904: Anatomical image / computed tomography (CT) image; 906: Segmentation map / 3D segmentation map; 908: Machine learning module; 910: 3D hotspot map; 1000: Post-processing; 1100: Procedure; 1102: Positron emission tomography (PET) image; 1108a: Prostate module / machine learning module; 1108b: Whole-body module / machine learning module; 1110a: Result; 1110b: Result; 1112: Final hotspot list and/or hotspot map; 1114: 3D hotspot map; 1120: Procedure; 1122: Analytical segmentation method / analytical segmentation; 1124: Analytically segmented 3D hotspot map; 1200: Procedure; 1202: Positron emission tomography (PET) image; 1204: Computed tomography (CT) image; 1206: Segmentation map; 1208: Machine learning module; 1210: 3D hotspot map; 1212: Analytical segmentation model; 1214: 3D hotspot map; 1402: Hotspot; 1502: Hotspot volume; 1610: Graphical user interface (GUI) viewer; 1612: Control panel graphical widget set; 1614: Display of various features/quantitative measures of a particular lesion; 1620: Report; 1622: Lesion volume; 1700: Procedure; 1702: 3D positron emission tomography (PET) image; 1704: 3D computed tomography (CT) image; 1706: First organ segmentation machine learning module / organ segmentation machine learning module; 1708: 3D segmentation map; 1712: First, single-class hotspot segmentation module; 1714: Second, multi-class hotspot segmentation module; 1716: First, single-class 3D hotspot map; 1718: Second, multi-class 3D hotspot map; 1722: Merged hotspot map / hotspot merging step; 1724: Segmented and classified hotspots / segmented and classified hotspot volumes; 1738: 3D segmentation map / 3D organ segmentation map; 1800: Procedure; 1802: Initial 3D hotspot map; 1804: Positron emission tomography (PET) image; 1806: 3D organ segmentation map; 1808: Step; 1810: Blood pool reference value; 1832: Threshold; 1840: Graph; 1842: Hotspot intensity; 1844: Hotspot threshold; 1846: Linear dimension; 1848: Large solid lesion; 1850: Graph; 1852: Hotspot intensity / peak hotspot intensity / maximum hotspot intensity; 1854: Hotspot threshold; 1856: Linear dimension; 1858: Small solid lesion; 1860: Plot; 1864: Threshold; 1866: Lesion; 1900: Conventional hotspot segmentation method; 1904: Intensity-based thresholding; 1906: Downstream quantification; 1920: Image; 1922: Step; 1924: Region of interest (ROI); 1926: Step; 1928: Image; 1929: Hotspot segmentation; 1930: Image; 1931: Hotspot segmentation; 1950: Artificial intelligence (AI)-based method; 1952: Positron emission tomography (PET) image; 1954: Computed tomography (CT) image; 1956: Hotspot detection, segmentation, and classification; 1958: Analytical segmentation method; 1960: Downstream quantification; 2002a: False positive; 2002b: False positive; 2004: Prostate lesion; 2006: Prostate lesion; 2300: View; 2302: Blue symbol; 2304: Red symbol; 2306: Orange symbol; 2308: Black symbol; 2400: Viewing window / viewer; 2402: Panel / collapsible panel; 2422: Panel / middle panel; 2442: Panel / collapsible right panel; 2462: Create-report button; 2500: Report; 2502: Section / panel; 2522: Section / overview section; 2542: Section; 2600: Image processing workflow; 2602: Step / organ segmentation; 2604: Step; 2608: Step; 2608a: Step; 2608b: Step; 2610: Step; 2800: Analytical model; 2802: Positron emission tomography (PET) image; 2804: Organ segmentation map / organ segmentation mask; 2806: Suppression step; 2808: Suppressed positron emission tomography (PET) image; 2810: Difference-of-Gaussians (DoG) filtering method; 2812: Step / Laplacian-of-Gaussian (LoG) filtering method; 2814: 3D segmentation map; 3000: Network environment / cloud computing environment; 3002a: Resource provider; 3002b: Resource provider; 3002c: Resource provider; 3004a: Computing device; 3004b: Computing device; 3004c: Computing device; 3006: Resource manager; 3008: Computer network; 3100: Computing device; 3102: Processor; 3104: Memory; 3106: Storage device; 3108: High-speed interface; 3110: High-speed expansion port; 3112: Low-speed interface; 3114: Low-speed expansion port; 3116: Display; 3120: Standard server; 3124: Rack server system; 3150: Mobile computing device; 3152: Processor; 3154: Display; 3156: Display interface; 3158: Control interface; 3160: Audio codec; 3162: External interface; 3164: Memory; 3166: Communication interface; 3168: Transceiver; 3170: Global Positioning System (GPS) receiver module; 3172: Expansion interface; 3174: Expansion memory; 3180: Cellular phone; 3182: Smartphone

The foregoing and other objects, aspects, features, and advantages of the present invention will become more apparent and better understood by reference to the following description taken in conjunction with the accompanying drawings, in which:

FIG. 1A is a block flow diagram of an example procedure for artificial intelligence (AI)-based lesion detection, according to an illustrative embodiment.

FIG. 1B is a block flow diagram of an example procedure for AI-based lesion detection, according to an illustrative embodiment.

FIG. 1C is a block flow diagram of an example procedure for AI-based lesion detection, according to an illustrative embodiment.

FIG. 2A is a graph showing a histogram of liver SUV values overlaid with a two-component Gaussian mixture model, according to an illustrative embodiment.

圖2B係根據闡釋性實施例之疊對於CT影像上之PET影像,其展示用於計算肝臟參考值之肝臟體積之部分。2B is a PET image superimposed on a CT image showing the portion of the liver volume used to calculate the liver reference value, according to an illustrative embodiment.

圖2C係根據闡釋性實施例之用於運算參考強度值之避免/減少來自與低放射性藥物攝取相關聯的組織區域之影響之實例性程序的方塊流程圖。2C is a block flow diagram of an example procedure for calculating reference intensity values to avoid/reduce effects from tissue regions associated with low radiopharmaceutical uptake, according to an illustrative embodiment.

圖3係根據闡釋性實施例之用於校正來自與高放射性藥物攝取相關聯的一或多個組織區域之強度滲出之實例性程序的方塊流程圖。3 is a block flow diagram of an example procedure for correcting for intensity exudation from one or more tissue regions associated with high radiopharmaceutical uptake, according to an illustrative embodiment.

圖4係根據闡釋性實施例之用於自動標記對應於經偵測病變之熱點之實例性程序的方塊流程圖。4 is a block flow diagram of an example procedure for automatically marking hot spots corresponding to detected lesions, according to an illustrative embodiment.

圖5A係根據闡釋性實施例之用於互動式病變偵測之容許經由圖形使用者介面(GUI)之使用者回饋及查看之實例性程序的方塊流程圖。5A is a block flow diagram of an example process for interactive lesion detection that allows user feedback and viewing via a graphical user interface (GUI), according to an illustrative embodiment.

圖5B係根據闡釋性實施例之用於經自動偵測之病變之使用者查看、品質控制及報告的實例性程序。5B is an example procedure for user viewing, quality control, and reporting of automatically detected lesions, according to an illustrative embodiment.

FIG. 6A is a screenshot of a GUI for confirming accurate segmentation of a liver reference volume, according to an illustrative embodiment.

FIG. 6B is a screenshot of a GUI for confirming accurate segmentation of an aorta portion (blood pool) reference volume, according to an illustrative embodiment.

FIG. 6C is a screenshot of a GUI for user selection and/or verification of automatically segmented hotspots corresponding to detected lesions within a subject, according to an illustrative embodiment.

FIG. 6D is a screenshot of a portion of a GUI that allows a user to manually identify lesions within an image, according to an illustrative embodiment.

FIG. 6E is a screenshot of another portion of a GUI that allows a user to manually identify lesions within an image, according to an illustrative embodiment.

FIG. 7 is a screenshot of a portion of a GUI showing a quality control checklist, according to an illustrative embodiment.

FIG. 8 is a screenshot of a report generated by a user using an embodiment of the automated lesion detection tools described herein, according to an illustrative embodiment.

FIG. 9 is a block flow diagram showing an example architecture for hotspot (lesion) segmentation via a machine learning module that receives a 3D anatomical image, a 3D functional image, and a 3D segmentation map as input, according to an illustrative embodiment.

FIG. 10A is a block flow diagram showing an example process in which lesion type mapping is performed following hotspot segmentation, according to an illustrative embodiment.

FIG. 10B is another block flow diagram showing an example process in which lesion type mapping is performed following hotspot segmentation, illustrating the use of a 3D segmentation map, according to an illustrative embodiment.

FIG. 11A is a block flow diagram showing a process for detecting and/or segmenting hotspots representing lesions using a whole-body network and a prostate-specific network, according to an illustrative embodiment.

FIG. 11B is a block flow diagram showing a process for detecting and/or segmenting hotspots representing lesions using a whole-body network and a prostate-specific network, according to an illustrative embodiment.

FIG. 12 is a block flow diagram showing the use of an analytical segmentation step following AI-based hotspot segmentation, according to an illustrative embodiment.

FIG. 13A is a block diagram showing an example U-net architecture for hotspot segmentation, according to an illustrative embodiment.

FIGS. 13B and 13C are block diagrams showing an example FPN architecture for hotspot segmentation, according to an illustrative embodiment.

FIGS. 14A, 14B, and 14C show example images demonstrating hotspot segmentation using a U-net architecture, according to an illustrative embodiment.

FIGS. 15A and 15B show example images demonstrating hotspot segmentation using an FPN architecture, according to an illustrative embodiment.

FIGS. 16A, 16B, 16C, 16D, and 16E are screenshots of example GUIs for uploading and analyzing medical image data and generating reports therefrom, according to an illustrative embodiment.

FIGS. 17A and 17B are block flow diagrams of an example process for segmenting and classifying hotspots using two parallel machine learning modules, according to an illustrative embodiment.

FIG. 17C is a block flow diagram illustrating the interactions and data flow between various software modules (e.g., APIs) in an example implementation of a process for segmenting and classifying hotspots using two parallel machine learning modules, according to an illustrative embodiment.

FIG. 18A is a block flow diagram of an example process for segmenting hotspots via an analytical model using an adaptive thresholding method, according to an illustrative embodiment.

FIGS. 18B and 18C are graphs showing the variation of the hotspot-specific threshold used in an adaptive thresholding method as a function of hotspot intensity (SUVmax), according to an illustrative embodiment.
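The adaptive thresholding summarized in FIGS. 18A-18C — a hotspot-specific threshold that varies with the hotspot's peak intensity (SUVmax) — can be sketched as follows. This is a minimal illustration rather than the embodiments' actual rule: the function names, the 50% relative fraction, and the absolute cap are all assumed constants for demonstration only.

```python
import numpy as np

def hotspot_threshold(suv_max, rel_fraction=0.5, abs_cap=10.0):
    # Relative threshold (a fraction of the hotspot's peak SUV) for dim
    # hotspots, saturating at an absolute cap for very bright ones, so the
    # threshold varies with SUVmax as in the FIG. 18B/18C curves.
    # Both constants are illustrative assumptions.
    return min(rel_fraction * suv_max, abs_cap)

def segment_hotspot(pet_values, suv_max):
    # Voxels at or above the hotspot-specific threshold form the segment.
    return pet_values >= hotspot_threshold(suv_max)
```

For example, a hotspot with SUVmax 4.0 would be segmented at a threshold of 2.0, while a very bright hotspot with SUVmax 40.0 would use the cap of 10.0 rather than 20.0.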

FIGS. 18D, 18E, and 18F are diagrams illustrating certain thresholding techniques, according to an illustrative embodiment.

FIG. 18G is a diagram showing histograms of prostate voxel intensities along axial, sagittal, and coronal planes, together with illustrative settings of prostate voxel intensity values and threshold scaling factors, according to an illustrative embodiment.

FIG. 19A is a block flow diagram illustrating hotspot segmentation using conventional manual region-of-interest (ROI) definition and conventional fixed and/or relative thresholding, according to an illustrative embodiment.

FIG. 19B is a block flow diagram illustrating hotspot segmentation using an AI-based approach in combination with an adaptive thresholding method, according to an illustrative embodiment.

FIG. 20 is a set of images comparing example segmentation results obtained by thresholding alone with segmentation results obtained via an AI-based approach in combination with an adaptive thresholding method, according to an illustrative embodiment.

FIGS. 21A, 21B, 21C, 21D, 21E, 21F, 21G, 21H, and 21I show a sequence of 2D slices of a 3D PET image, moving in the vertical direction through the abdominal region. The images compare hotspot segmentation results within the abdominal region obtained by a thresholding method alone (left-hand images) with those obtained by a machine learning approach according to certain embodiments described herein (right-hand images), showing the hotspot regions identified by each approach overlaid on the PET image slices.

FIG. 22 is a block flow diagram of a process for uploading and analyzing PET/CT image data using a CAD device that provides automated image analysis, according to certain embodiments described herein.

FIG. 23 is a screenshot of an example GUI that allows a user to upload image data for review and analysis via a CAD device that provides automated image analysis, according to certain embodiments described herein.

FIG. 24 is a screenshot of an example GUI viewer that allows a user to view and analyze medical image data (e.g., 3D PET/CT images) and the results of automated image analysis, according to an illustrative embodiment.

FIG. 25 is a screenshot of an automatically generated report, according to an illustrative embodiment.

FIG. 26 is a block flow diagram of an example workflow for analyzing medical image data that provides automated analysis along with user input and review, according to an illustrative embodiment.

FIG. 27 shows three views of a CT image with overlaid segmented bone and soft-tissue volumes, according to an illustrative embodiment.

FIG. 28 is a block flow diagram of an analytical model for segmenting hotspots, according to an illustrative embodiment.

FIG. 29A is a block diagram of a cloud computing architecture used in certain embodiments.

FIG. 29B is a block diagram of an example microservice communication flow used in certain embodiments.

FIG. 30 is a block diagram of an exemplary cloud computing environment used in certain embodiments.

FIG. 31 is a block diagram of an example computing device and an example mobile computing device used in certain embodiments.

The features and advantages of the present invention will become more apparent from the Detailed Description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements.

100a: Process

102: 3D functional image

102a: Radiopharmaceutical

102b: Positron emission tomography (PET) image

106: Step

110: Machine learning module

120: Step

130: Hotspot list

132: Hotspot map

140: Step

Claims (148)

1. A method for automated processing of 3D images of a subject to identify and/or characterize cancerous lesions within the subject, the method comprising:
(a) receiving, by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality;
(b) automatically detecting, by the processor, using a machine learning module, one or more hotspots within the 3D functional image, each hotspot corresponding to a local region of elevated intensity relative to its surroundings and representing a potential cancerous lesion within the subject, thereby creating one or both of (i) and (ii) as follows: (i) a hotspot list identifying, for each hotspot, a location of the hotspot, and (ii) a 3D hotspot map identifying, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image; and
(c) storing and/or providing the hotspot list and/or the 3D hotspot map for display and/or further processing.

2. The method of claim 1, wherein the machine learning module receives at least a portion of the 3D functional image as input and automatically detects the one or more hotspots based at least in part on intensities of voxels of the received portion of the 3D functional image.
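As a rough, non-learned stand-in for the detection step above — each hotspot being a local region of elevated intensity relative to its surroundings, emitted as a hotspot list of locations — a minimal sketch might look like the following. The function name, intensity floor, and 3×3×3 local-maximum rule are illustrative assumptions, not the claimed machine learning module:

```python
import numpy as np

def detect_hotspots(volume, min_intensity=2.0):
    # Return a hotspot list [(z, y, x, intensity), ...] of strict local
    # maxima above min_intensity in a 3D intensity volume. Border voxels
    # are skipped for simplicity.
    hotspots = []
    Z, Y, X = volume.shape
    for z in range(1, Z - 1):
        for y in range(1, Y - 1):
            for x in range(1, X - 1):
                v = volume[z, y, x]
                if v < min_intensity:
                    continue
                patch = volume[z - 1:z + 2, y - 1:y + 2, x - 1:x + 2]
                # Keep only voxels that are the unique maximum of their
                # 3x3x3 neighborhood (a "local region of elevated intensity").
                if v >= patch.max() and (patch == v).sum() == 1:
                    hotspots.append((z, y, x, float(v)))
    return hotspots
```

A single bright voxel surrounded by dimmer neighbors would yield a one-entry hotspot list; such a list could then feed a downstream segmentation or classification stage.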
3. The method of claim 1 or 2, wherein the machine learning module receives as input a 3D segmentation map that identifies, within the 3D functional image, one or more volumes of interest (VOIs), each VOI corresponding to a particular target tissue region and/or particular anatomical region within the subject.

4. The method of any one of the preceding claims, comprising receiving, by the processor, a 3D anatomical image of the subject obtained using an anatomical imaging modality, wherein the 3D anatomical image includes a graphical representation of tissue within the subject, and wherein the machine learning module receives at least two input channels, the input channels comprising a first input channel corresponding to at least a portion of the 3D anatomical image and a second input channel corresponding to at least a portion of the 3D functional image.

5. The method of claim 4, wherein the machine learning module receives as input a 3D segmentation map that identifies, within the 3D functional image and/or the 3D anatomical image, one or more volumes of interest (VOIs), each VOI corresponding to a particular target tissue region and/or particular anatomical region.

6. The method of claim 5, comprising automatically segmenting, by the processor, the 3D anatomical image, thereby creating the 3D segmentation map.
7. The method of any one of the preceding claims, wherein the machine learning module is a region-specific machine learning module that receives as input a specific portion of the 3D functional image corresponding to one or more particular tissue regions and/or anatomical regions of the subject.

8. The method of any one of the preceding claims, wherein the machine learning module generates the hotspot list as output.

9. The method of any one of the preceding claims, wherein the machine learning module generates the 3D hotspot map as output.

10. The method of any one of the preceding claims, comprising:
(d) determining, by the processor, for each hotspot of at least a portion of the hotspots, a lesion likelihood classification corresponding to a likelihood that the hotspot represents a lesion within the subject.

11. The method of claim 10, wherein step (d) comprises using the machine learning module to determine the lesion likelihood classification for each hotspot of the portion.

12. The method of claim 10, wherein step (d) comprises using a second machine learning module to determine the lesion likelihood classification for each hotspot.

13. The method of claim 12, comprising determining, by the processor, for each hotspot, a set of one or more hotspot features, and using the set of one or more hotspot features as input to the second machine learning module.
14. The method of any one of claims 10 to 13, comprising:
(e) selecting, by the processor, based at least in part on the lesion likelihood classifications of the hotspots, a subset of the one or more hotspots corresponding to hotspots having a high likelihood of corresponding to cancerous lesions.

15. The method of any one of the preceding claims, comprising:
(f) adjusting, by the processor, intensities of voxels of the 3D functional image to correct for intensity bleed from one or more high-intensity volumes of the 3D functional image, each of the one or more high-intensity volumes corresponding to a high-uptake tissue region within the subject associated with high radiopharmaceutical uptake under normal circumstances.

16. The method of claim 15, wherein step (f) comprises correcting for intensity bleed from a plurality of high-intensity volumes one at a time, in a sequential fashion.

17. The method of claim 15 or 16, wherein the one or more high-intensity volumes correspond to one or more high-uptake tissue regions selected from the group consisting of a kidney, a liver, and a bladder.

18. The method of any one of the preceding claims, comprising:
(g) determining, by the processor, for each of at least a portion of the one or more hotspots, a corresponding lesion index indicative of a level of radiopharmaceutical uptake within, and/or a size of, the potential lesion to which the hotspot corresponds.

19. The method of claim 18, wherein step (g) comprises comparing the intensity(ies) of one or more voxels associated with the hotspot with one or more reference values, each reference value associated with a particular reference tissue region and corresponding to a reference volume of that reference tissue region.

20. The method of claim 19, wherein the one or more reference values comprise one or more members selected from the group consisting of an aorta reference value associated with an aorta portion of the subject and a liver reference value associated with a liver of the subject.

21. The method of claim 19 or 20, wherein, for at least one particular reference value associated with a particular reference tissue region, determining the particular reference value comprises fitting intensities of voxels within a particular reference volume corresponding to the particular reference tissue region to a multi-component mixture model.

22. The method of any one of claims 18 to 21, comprising using the determined lesion index values to compute an overall risk index for the subject indicative of a cancer status and/or risk of the subject.

23. The method of any one of the preceding claims, comprising determining, by the processor, for each hotspot, an anatomical classification corresponding to a particular anatomical region and/or group of anatomical regions within the subject in which the potential cancerous lesion that the hotspot represents is determined to be located.
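A lesion index of the kind recited above — grading a hotspot's uptake against blood-pool (aorta) and liver reference values — might be sketched as follows. The piecewise scaling scheme and the function name are assumptions for illustration, not the patented formula:

```python
def lesion_index(hotspot_suv_max, blood_pool_ref, liver_ref):
    # Illustrative lesion index: 0 at or below the blood-pool reference,
    # rising linearly to 1 at the liver reference, and exceeding 1 for
    # uptake above the liver reference.
    if hotspot_suv_max <= blood_pool_ref:
        return 0.0
    if hotspot_suv_max <= liver_ref:
        return (hotspot_suv_max - blood_pool_ref) / (liver_ref - blood_pool_ref)
    return 1.0 + (hotspot_suv_max - liver_ref) / liver_ref
```

With a blood-pool reference of 1.5 and a liver reference of 4.0, a hotspot with SUVmax 2.75 would score 0.5, while one with SUVmax 8.0 would score 2.0. Per-hotspot indices of this sort could then be aggregated into an overall risk index.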
24. The method of any one of the preceding claims, comprising:
(h) causing, by the processor, rendering of a graphical representation of at least a portion of the one or more hotspots for display within a graphical user interface (GUI) for review by a user.

25. The method of claim 24, comprising:
(i) receiving, by the processor, via the GUI, a user selection of a subset of the one or more hotspots confirmed, upon review by the user, as likely to represent potential cancerous lesions within the subject.

26. The method of any one of the preceding claims, wherein the 3D functional image comprises a PET or SPECT image obtained following administration of an agent to the subject.

27. The method of claim 26, wherein the agent comprises a PSMA binding agent.

28. The method of claim 26 or 27, wherein the agent comprises 18F.

29. The method of claim 27 or 28, wherein the agent comprises [18F]DCFPyL.

30. The method of claim 27 or 28, wherein the agent comprises PSMA-11.

31. The method of claim 26 or 27, wherein the agent comprises one or more members selected from the group consisting of 99mTc, 68Ga, 177Lu, 225Ac, 111In, 123I, 124I, and 131I.

32. The method of any one of the preceding claims, wherein the machine learning module implements a neural network.

33. The method of any one of the preceding claims, wherein the processor is a processor of a cloud-based system.
34. A method for automated processing of 3D images of a subject to identify and/or characterize cancerous lesions within the subject, the method comprising:
(a) receiving, by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality;
(b) receiving, by the processor, a 3D anatomical image of the subject obtained using an anatomical imaging modality, wherein the 3D anatomical image includes a graphical representation of tissue within the subject;
(c) automatically detecting, by the processor, using a machine learning module, one or more hotspots within the 3D functional image, each hotspot corresponding to a local region of elevated intensity relative to its surroundings and representing a potential cancerous lesion within the subject, thereby creating one or both of (i) and (ii) as follows: (i) a hotspot list identifying, for each hotspot, a location of the hotspot, and (ii) a 3D hotspot map identifying, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image, wherein the machine learning module receives at least two input channels, the input channels comprising a first input channel corresponding to at least a portion of the 3D anatomical image and a second input channel corresponding to at least a portion of the 3D functional image, and/or anatomical information derived therefrom; and
(d) storing and/or providing the hotspot list and/or the 3D hotspot map for display and/or further processing.
35. A method for automated processing of 3D images of a subject to identify and/or characterize cancerous lesions within the subject, the method comprising:
(a) receiving, by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality;
(b) automatically detecting, by the processor, using a first machine learning module, one or more hotspots within the 3D functional image, each hotspot corresponding to a local region of elevated intensity relative to its surroundings and representing a potential cancerous lesion within the subject, thereby creating a hotspot list identifying, for each hotspot, a location of the hotspot;
(c) automatically determining, by the processor, using a second machine learning module and the hotspot list, for each of the one or more hotspots, a corresponding 3D hotspot volume within the 3D functional image, thereby creating a 3D hotspot map; and
(d) storing and/or providing the hotspot list and/or the 3D hotspot map for display and/or further processing.

36. The method of claim 35, comprising:
(e) determining, by the processor, for each hotspot of at least a portion of the hotspots, a lesion likelihood classification corresponding to a likelihood that the hotspot represents a lesion within the subject.

37. The method of claim 36, wherein step (e) comprises using a third machine learning module to determine the lesion likelihood classification for each hotspot.
38. The method of any one of claims 35 to 37, comprising:
(f) selecting, by the processor, based at least in part on the lesion likelihood classifications of the hotspots, a subset of the one or more hotspots corresponding to hotspots having a high likelihood of corresponding to cancerous lesions.

39. A method of measuring intensity values within a reference volume corresponding to a reference tissue region so as to avoid influence from tissue regions associated with low radiopharmaceutical uptake, the method comprising:
(a) receiving, by a processor of a computing device, a 3D functional image of a subject, the 3D functional image obtained using a functional imaging modality;
(b) identifying, by the processor, the reference volume within the 3D functional image;
(c) fitting, by the processor, a multi-component mixture model to intensities of voxels within the reference volume;
(d) identifying, by the processor, a dominant mode of the multi-component model;
(e) determining, by the processor, a measure of intensity corresponding to the dominant mode, thereby determining a reference intensity value corresponding to a measure of intensities of voxels that are (i) within the reference tissue volume and (ii) associated with the dominant mode;
(f) detecting, by the processor, one or more hotspots within the functional image corresponding to potential cancerous lesions; and
(g) determining, by the processor, for each hotspot of at least a portion of the detected hotspots, a lesion index value using at least the reference intensity value.
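The reference-value computation above — fitting a multi-component mixture model to reference-volume intensities and taking a measure of the dominant mode — can be sketched in one dimension with a two-component Gaussian mixture fit by expectation-maximization. The EM details (quartile-based initialization, fixed iteration count, use of the component mean as the intensity measure) are assumptions for illustration:

```python
import numpy as np

def dominant_mode_reference(values, n_iter=200):
    # Fit a two-component 1D Gaussian mixture by EM and return the mean
    # of the dominant (highest-weight) component as the reference
    # intensity, so low-uptake voxels in a minority mode are ignored.
    x = np.asarray(values, dtype=float)
    # Initialization (an assumption): place the components at the quartiles.
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    var = np.array([x.var(), x.var()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: per-voxel responsibilities under each component.
        pdf = (w / np.sqrt(2.0 * np.pi * var)) * np.exp(
            -((x[:, None] - mu) ** 2) / (2.0 * var))
        resp = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
    return float(mu[int(np.argmax(w))])
```

On a reference volume whose voxels are mostly normal-uptake (e.g., centered near SUV 5) with a minority of low-uptake voxels (e.g., near SUV 1), the dominant component tracks the normal-uptake population, so the returned reference is largely unaffected by the low-uptake minority.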
40. A method of correcting for intensity bleed from a high-uptake tissue region within a subject associated with high radiopharmaceutical uptake under normal circumstances, the method comprising:
(a) receiving, by a processor of a computing device, a 3D functional image of the subject, the 3D functional image obtained using a functional imaging modality;
(b) identifying, by the processor, a high-intensity volume within the 3D functional image, the high-intensity volume corresponding to a particular high-uptake tissue region in which high radiopharmaceutical uptake occurs under normal circumstances;
(c) identifying, by the processor, based on the identified high-intensity volume, a suppression volume within the 3D functional image, the suppression volume corresponding to a volume lying outside a boundary of the identified high-intensity volume and within a predetermined decay distance from the boundary of the identified high-intensity volume;
(d) determining, by the processor, a background image corresponding to the 3D functional image in which intensities of voxels within the high-intensity volume are replaced with interpolated values determined based on intensities of voxels of the 3D functional image within the suppression volume;
(e) determining, by the processor, an estimation image by subtracting intensities of voxels of the background image from intensities of voxels of the 3D functional image;
(f) determining, by the processor, a suppression map by: extrapolating intensities of voxels of the estimation image corresponding to the high-intensity volume to locations of voxels within the suppression volume, to determine intensities of voxels of the suppression map corresponding to the suppression volume, and setting intensities of voxels of the suppression map corresponding to locations outside the suppression volume to zero; and
(g) adjusting, by the processor, intensities of voxels of the 3D functional image based on the suppression map, thereby correcting for intensity bleed from the high-intensity volume.

41. The method of claim 40, comprising performing steps (b) through (g) for each of a plurality of high-intensity volumes in a sequential fashion, thereby correcting for intensity bleed from each of the plurality of high-intensity volumes.

42. The method of claim 41, wherein the plurality of high-intensity volumes comprise one or more members selected from the group consisting of a kidney, a liver, and a bladder.
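The suppression-map correction described above can be illustrated on a 1D intensity profile, following steps (b) through (g): interpolate a background across the high-intensity (organ) volume, estimate the organ-only signal, decay that estimate into a margin around the organ (the suppression volume), and subtract. The linear decay, mean-based extrapolation, and clipping at zero are simplified assumptions, not the claimed interpolation/extrapolation schemes:

```python
import numpy as np

def correct_bleed_1d(signal, organ_mask, margin=3):
    # 1D sketch of suppression-map bleed correction.
    x = np.arange(len(signal))
    organ = organ_mask.astype(bool)
    # (c) suppression volume: within `margin` samples of the organ boundary.
    dist = np.minimum.reduce([np.abs(x - b) for b in x[organ]])
    suppress = (~organ) & (dist <= margin)
    # (d) background: interpolate across the organ from surrounding samples.
    background = signal.copy()
    background[organ] = np.interp(x[organ], x[~organ], signal[~organ])
    # (e) estimation image: organ-only signal above background.
    estimate = signal - background
    # (f) suppression map: decay the mean organ estimate linearly into the
    #     margin; zero everywhere outside the suppression volume.
    suppression = np.zeros_like(signal, dtype=float)
    edge_value = estimate[organ].mean()
    suppression[suppress] = edge_value * (1.0 - dist[suppress] / (margin + 1))
    # (g) corrected image (clipped at zero to avoid negative intensities).
    return np.maximum(signal - suppression, 0.0)
```

On a flat profile with a bright organ segment and elevated shoulders next to it, the organ voxels themselves are left unchanged (the suppression map is zero there) while the shoulder intensities just outside the organ are reduced.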
43. A method for automated processing of 3D images of a subject to identify and/or characterize cancerous lesions within the subject, the method comprising:
(a) receiving, by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality;
(b) automatically detecting, by the processor, one or more hotspots within the 3D functional image, each hotspot corresponding to a local region of elevated intensity relative to its surroundings and representing a potential cancerous lesion within the subject;
(c) causing, by the processor, rendering of a graphical representation of the one or more hotspots for display within an interactive graphical user interface (GUI);
(d) receiving, by the processor, via the interactive GUI, a user selection of a final hotspot set comprising at least a portion of the one or more automatically detected hotspots; and
(e) storing and/or providing the final hotspot set for display and/or further processing.

44. The method of claim 43, comprising:
(f) receiving, by the processor, via the GUI, a user selection of one or more additional, user-identified hotspots for inclusion in the final hotspot set; and
(g) updating, by the processor, the final hotspot set to include the one or more additional user-identified hotspots.

45. The method of claim 43 or 44, wherein step (b) comprises using one or more machine learning modules.
A method for automated processing of 3D images of a subject to identify and/or characterize cancerous lesions within the subject, the method comprising: (a) receiving, by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality; (b) automatically detecting, by the processor, one or more hotspots within the 3D functional image, each hotspot corresponding to a local region of elevated intensity relative to its surroundings and representing a potential cancerous lesion within the subject; (c) automatically determining, by the processor, for each of at least a portion of the one or more hotspots, an anatomical classification corresponding to a particular anatomical region and/or group of anatomical regions within the subject in which the potential cancerous lesion represented by the hotspot is determined to be located; and (d) storing and/or providing an identification of the one or more hotspots and, for each hotspot, the anatomical classification corresponding to the hotspot, for display and/or further processing.
The method of claim 46, wherein step (b) comprises using one or more machine learning modules.
A system for automated processing of 3D images of a subject to identify and/or characterize cancerous lesions within the subject, the system comprising: a processor of a computing device; and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to: (a) receive a 3D functional image of the subject obtained using a functional imaging modality; (b) automatically detect, using a machine learning module, one or more hotspots within the 3D functional image, each hotspot corresponding to a local region of elevated intensity relative to its surroundings and representing a potential cancerous lesion within the subject, thereby creating one or both of (i) and (ii) as follows: (i) a hotspot list identifying, for each hotspot, a location of the hotspot, and (ii) a 3D hotspot map identifying, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image; and (c) store and/or provide the hotspot list and/or the 3D hotspot map for display and/or further processing.
The system of claim 48, wherein the machine learning module receives at least a portion of the 3D functional image as input and automatically detects the one or more hotspots based at least in part on intensities of voxels of the received portion of the 3D functional image.
The system of claim 48 or 49, wherein the machine learning module receives as input a 3D segmentation map identifying one or more volumes of interest (VOIs) within the 3D functional image, each VOI corresponding to a particular target tissue region and/or particular anatomical region within the subject.
The system of any one of claims 48 to 50, wherein the instructions cause the processor to: receive a 3D anatomical image of the subject obtained using an anatomical imaging modality, wherein the 3D anatomical image comprises a graphical representation of tissue within the subject, and wherein the machine learning module receives at least two input channels, the input channels comprising a first input channel corresponding to at least a portion of the 3D anatomical image and a second input channel corresponding to at least a portion of the 3D functional image.
The system of claim 51, wherein the machine learning module receives as input a 3D segmentation map identifying one or more volumes of interest (VOIs) within the 3D functional image and/or the 3D anatomical image, each VOI corresponding to a particular target tissue region and/or particular anatomical region.
The system of claim 52, wherein the instructions cause the processor to automatically segment the 3D anatomical image, thereby creating the 3D segmentation map.
The system of any one of claims 48 to 53, wherein the machine learning module is a region-specific machine learning module that receives, as input, a particular portion of the 3D functional image corresponding to one or more particular tissue regions and/or anatomical regions of the subject.
The system of any one of claims 48 to 54, wherein the machine learning module generates the hotspot list as output.
The system of any one of claims 48 to 55, wherein the machine learning module generates the 3D hotspot map as output.
The system of any one of claims 48 to 56, wherein the instructions cause the processor to: (d) determine, for each hotspot of at least a portion of the hotspots, a lesion likelihood classification corresponding to a likelihood that the hotspot represents a lesion within the subject.
The system of claim 57, wherein, at step (d), the instructions cause the processor to use the machine learning module to determine the lesion likelihood classification for each hotspot of the portion.
The system of claim 57, wherein, at step (d), the instructions cause the processor to use a second machine learning module to determine the lesion likelihood classification for each hotspot.
The system of claim 59, wherein the instructions cause the processor to determine, for each hotspot, a set of one or more hotspot features, and to use the set of one or more hotspot features as input to the second machine learning module.
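The two-stage arrangement of claims 59 and 60 — summarize each detected hotspot into a feature set, then feed that set to a second classification module — can be sketched as below. The particular features (peak, mean, volume) and the rule-based stand-in for the second machine learning module are hypothetical choices for illustration only.

```python
# Sketch of claims 59-60: per-hotspot features feeding a second-stage
# lesion-likelihood classifier. Feature choices and cutoffs are hypothetical.

def hotspot_features(voxel_intensities, voxel_volume_ml=0.1):
    """Summarize a hotspot's voxel intensities into a small feature set."""
    n = len(voxel_intensities)
    return {
        "peak": max(voxel_intensities),
        "mean": sum(voxel_intensities) / n,
        "volume_ml": n * voxel_volume_ml,
    }

def classify_lesion_likelihood(features, peak_cutoff=5.0, volume_cutoff=0.5):
    """Rule-based stand-in for the second machine learning module: maps a
    feature set to a lesion-likelihood label."""
    score = 0.0
    if features["peak"] >= peak_cutoff:
        score += 0.6
    if features["volume_ml"] >= volume_cutoff:
        score += 0.4
    return "high" if score >= 0.6 else "low"

feats = hotspot_features([4.2, 6.1, 5.8, 4.9, 7.3, 5.1])
label = classify_lesion_likelihood(feats)
```

In a real system the stand-in classifier would be replaced by a trained model that consumes the same feature vector; the point of the sketch is only the data flow from detection output to second-stage input.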
The system of any one of claims 57 to 60, wherein the instructions cause the processor to: (e) select, based at least in part on the lesion likelihood classifications of the hotspots, a subset of the one or more hotspots corresponding to hotspots having a high likelihood of corresponding to cancerous lesions.
The system of any one of claims 48 to 61, wherein the instructions cause the processor to: (f) adjust, by the processor, intensities of voxels of the 3D functional image to correct for intensity bleed from one or more high-intensity volumes of the 3D functional image, each of the one or more high-intensity volumes corresponding to a high-uptake tissue region within the subject associated with high radiopharmaceutical uptake under normal circumstances.
The system of claim 62, wherein, at step (f), the instructions cause the processor to correct for intensity bleed from a plurality of high-intensity volumes one at a time, in a sequential manner.
The system of claim 62 or 63, wherein the one or more high-intensity volumes correspond to one or more high-uptake tissue regions selected from the group consisting of a kidney, a liver, and a bladder.
The system of any one of claims 48 to 64, wherein the instructions cause the processor to: (g) determine, for each of at least a portion of the one or more hotspots, a corresponding lesion index indicative of a level of radiopharmaceutical uptake within, and/or a size of, the potential lesion to which the hotspot corresponds.
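Grading a hotspot's uptake against organ reference values, as in the lesion index of claim 65 and the reference comparison of claims 66 and 67, can be sketched with a simple ordinal scale. The four-level binning below (below blood pool, between blood pool and liver, up to twice liver, above) is a hypothetical illustration of such a scale, not the patented formula.

```python
# Sketch of a lesion index (claims 65-67): grade a hotspot's intensity
# against aorta (blood-pool) and liver reference values. The bin boundaries
# are hypothetical.

def lesion_index(hotspot_intensity, aorta_ref, liver_ref):
    """Ordinal 0-3 grade of hotspot uptake relative to reference regions."""
    if hotspot_intensity < aorta_ref:
        return 0            # below blood-pool uptake
    if hotspot_intensity < liver_ref:
        return 1            # between blood pool and liver
    if hotspot_intensity < 2 * liver_ref:
        return 2            # up to twice liver uptake
    return 3                # markedly above liver uptake

grades = [lesion_index(v, 1.5, 6.0) for v in (1.0, 3.0, 8.0, 15.0)]
```

A per-patient overall risk index, as in claim 69, could then be derived by aggregating these per-hotspot grades (for example, their maximum or a weighted sum).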
The system of claim 65, wherein, at step (g), the instructions cause the processor to compare an intensity or intensities of one or more voxels associated with the hotspot with one or more reference values, each reference value associated with a particular reference tissue region within the subject and determined based on intensities of a reference volume corresponding to the reference tissue region.
The system of claim 66, wherein the one or more reference values comprise one or more members selected from the group consisting of an aorta reference value associated with an aorta portion of the subject and a liver reference value associated with a liver of the subject.
The system of claim 66 or 67, wherein, for at least one particular reference value associated with a particular reference tissue region, the instructions cause the processor to determine the particular reference value by fitting intensities of voxels within a particular reference volume corresponding to the particular reference tissue region to a multi-component mixture model.
The system of any one of claims 65 to 68, wherein the instructions cause the processor to use the determined lesion index values to compute an overall risk index for the subject indicative of a cancer status and/or cancer risk of the subject.
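The multi-component mixture fit of claim 68 (expanded in claim 86) serves to make the reference value robust: taking the mean of the dominant mode, rather than the mean of all voxels, keeps low-uptake regions inside the reference volume (e.g. a cyst within the liver) from dragging the reference down. A minimal sketch, assuming a two-component 1D Gaussian mixture fitted by expectation-maximization — the component count, initialization, and variance floor are illustrative choices, not the patented implementation:

```python
import math

def fit_two_component_mixture(intensities, iters=200):
    """Minimal EM fit of a two-component 1D Gaussian mixture model."""
    mu = [min(intensities), max(intensities)]
    var = [1.0, 1.0]
    weight = [0.5, 0.5]
    for _ in range(iters):
        # E-step: per-voxel responsibilities of each component
        resp = []
        for x in intensities:
            p = [weight[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-((x - mu[k]) ** 2) / (2 * var[k])) for k in range(2)]
            s = max(p[0] + p[1], 1e-300)
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, variances (variance floored
        # to keep the fit numerically stable on tight clusters)
        for k in range(2):
            nk = sum(r[k] for r in resp)
            weight[k] = nk / len(intensities)
            mu[k] = sum(r[k] * x for r, x in zip(resp, intensities)) / nk
            var[k] = max(0.01, sum(r[k] * (x - mu[k]) ** 2
                                   for r, x in zip(resp, intensities)) / nk)
    return weight, mu

def reference_intensity(intensities):
    """Reference value: mean of the dominant (highest-weight) mode."""
    weight, mu = fit_two_component_mixture(intensities)
    return mu[0] if weight[0] >= weight[1] else mu[1]

# liver-like voxels (~6.0) contaminated by a small low-uptake region (~1.0)
liver_voxels = [5.8, 6.1, 5.9, 6.2, 6.0, 5.7, 6.3, 6.0, 5.9, 6.1, 0.9, 1.1, 1.0]
ref = reference_intensity(liver_voxels)
```

Here the plain mean of `liver_voxels` is pulled well below 6 by the low-uptake voxels, while the dominant-mode reference stays near the true liver level.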
The system of any one of claims 48 to 69, wherein the instructions cause the processor to determine, for each hotspot, an anatomical classification corresponding to a particular anatomical region and/or group of anatomical regions within the subject in which the potential cancerous lesion represented by the hotspot is determined to be located.
The system of any one of claims 48 to 70, wherein the instructions cause the processor to: (h) cause rendering of a graphical representation of at least a portion of the one or more hotspots for display within a graphical user interface (GUI) for review by a user.
The system of claim 71, wherein the instructions cause the processor to: (i) receive, via the GUI, a user selection of a subset of the one or more hotspots confirmed, via user review, as likely to represent potential cancerous lesions within the subject.
The system of any one of claims 48 to 72, wherein the 3D functional image comprises a PET or SPECT image obtained following administration of an agent to the subject.
The system of claim 73, wherein the agent comprises a PSMA binding agent.
The system of claim 73 or 74, wherein the agent comprises 18F.
The system of claim 74, wherein the agent comprises [18F]DCFPyL.
The system of claim 74 or 75, wherein the agent comprises PSMA-11.
The system of claim 73 or 74, wherein the agent comprises one or more members selected from the group consisting of 99mTc, 68Ga, 177Lu, 225Ac, 111In, 123I, 124I, and 131I.
The system of any one of claims 48 to 78, wherein the machine learning module implements a neural network.
The system of any one of claims 48 to 79, wherein the processor is a processor of a cloud-based system.
A system for automated processing of 3D images of a subject to identify and/or characterize cancerous lesions within the subject, the system comprising: a processor of a computing device; and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to: (a) receive a 3D functional image of the subject obtained using a functional imaging modality; (b) receive a 3D anatomical image of the subject obtained using an anatomical imaging modality, wherein the 3D anatomical image comprises a graphical representation of tissue within the subject; (c) automatically detect, using a machine learning module, one or more hotspots within the 3D functional image, each hotspot corresponding to a local region of elevated intensity relative to its surroundings and representing a potential cancerous lesion within the subject, thereby creating one or both of (i) and (ii) as follows: (i) a hotspot list identifying, for each hotspot, a location of the hotspot, and (ii) a 3D hotspot map identifying, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image, wherein the machine learning module receives at least two input channels, the input channels comprising a first input channel corresponding to at least a portion of the 3D anatomical image and a second input channel corresponding to at least a portion of the 3D functional image, and/or anatomical information derived therefrom; and (d) store and/or provide the hotspot list and/or the 3D hotspot map for display and/or further processing.
A system for automated processing of 3D images of a subject to identify and/or characterize cancerous lesions within the subject, the system comprising: a processor of a computing device; and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to: (a) receive a 3D functional image of the subject obtained using a functional imaging modality; (b) automatically detect, using a first machine learning module, one or more hotspots within the 3D functional image, each hotspot corresponding to a local region of elevated intensity relative to its surroundings and representing a potential cancerous lesion within the subject, thereby creating a hotspot list identifying, for each hotspot, a location of the hotspot; (c) automatically determine, using a second machine learning module and the hotspot list, for each of the one or more hotspots, a corresponding 3D hotspot volume within the 3D functional image, thereby creating a 3D hotspot map; and (d) store and/or provide the hotspot list and/or the 3D hotspot map for display and/or further processing.
The system of claim 82, wherein the instructions cause the processor to: (e) determine, for each hotspot of at least a portion of the hotspots, a lesion likelihood classification corresponding to a likelihood that the hotspot represents a lesion within the subject.
The system of claim 83, wherein, at step (e), the instructions cause the processor to use a third machine learning module to determine the lesion likelihood classification for each hotspot.
The system of any one of claims 82 to 84, wherein the instructions cause the processor to: (f) select, based at least in part on the lesion likelihood classifications of the hotspots, a subset of the one or more hotspots corresponding to hotspots having a high likelihood of corresponding to cancerous lesions.
A system for measuring intensity values within a reference volume corresponding to a reference tissue region so as to avoid influence from tissue regions associated with low radiopharmaceutical uptake, the system comprising: a processor of a computing device; and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to: (a) receive a 3D functional image of a subject, the 3D functional image obtained using a functional imaging modality; (b) identify the reference volume within the 3D functional image; (c) fit a multi-component mixture model to intensities of voxels within the reference volume; (d) identify a dominant mode of the multi-component model; (e) determine a measure of intensity corresponding to the dominant mode, thereby determining a reference intensity value corresponding to a measure of intensities of voxels that are (i) within the reference tissue volume and (ii) associated with the dominant mode; (f) detect, within the 3D functional image, one or more hotspots corresponding to potential cancerous lesions; and (g) determine, for each of at least a portion of the detected hotspots, a lesion index value using at least the reference intensity value.
A system for correcting for intensity bleed originating from high-uptake tissue regions within a subject associated with high radiopharmaceutical uptake under normal circumstances, the system comprising: a processor of a computing device; and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to: (a) receive a 3D functional image of the subject, the 3D functional image obtained using a functional imaging modality; (b) identify a high-intensity volume within the 3D functional image, the high-intensity volume corresponding to a particular high-uptake tissue region in which high radiopharmaceutical uptake occurs under normal circumstances; (c) identify, based on the identified high-intensity volume, a suppression volume within the 3D functional image, the suppression volume corresponding to a volume lying outside a boundary of the identified high-intensity volume and within a predetermined decay distance from the boundary of the identified high-intensity volume; (d) determine a background image corresponding to the 3D functional image, wherein intensities of voxels within the high-intensity volume are replaced with interpolated values determined based on intensities of voxels of the 3D functional image within the suppression volume; (e) determine an estimated image by subtracting intensities of voxels of the background image from intensities of voxels of the 3D functional image; (f) determine a suppression map by: extrapolating the intensities of the voxels of the estimated image corresponding to the high-intensity volume to positions of voxels within the suppression volume, thereby determining intensities of voxels of the suppression map corresponding to the suppression volume; and setting intensities of voxels of the suppression map corresponding to locations outside the suppression volume to zero; and (g) adjust intensities of voxels of the 3D functional image based on the suppression map, thereby correcting for intensity bleed from the high-intensity volume.
The system of claim 87, wherein the instructions cause the processor to perform steps (b) through (g) for each of a plurality of high-intensity volumes in a sequential manner, thereby correcting for intensity bleed from each of the plurality of high-intensity volumes.
The system of claim 88, wherein the plurality of high-intensity volumes comprise one or more members selected from the group consisting of a kidney, a liver, and a bladder.
A system for automated processing of 3D images of a subject to identify and/or characterize cancerous lesions within the subject, the system comprising: a processor of a computing device; and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to: (a) receive a 3D functional image of the subject obtained using a functional imaging modality; (b) automatically detect one or more hotspots within the 3D functional image, each hotspot corresponding to a local region of elevated intensity relative to its surroundings and representing a potential cancerous lesion within the subject; (c) cause rendering of a graphical representation of the one or more hotspots for display within an interactive graphical user interface (GUI); (d) receive, via the interactive GUI, a user selection of a final hotspot set comprising at least a portion of the one or more automatically detected hotspots; and (e) store and/or provide the final hotspot set for display and/or further processing.
The system of claim 90, wherein the instructions cause the processor to: (f) receive, via the GUI, a user selection of one or more additional, user-identified hotspots for inclusion in the final hotspot set; and (g) update the final hotspot set to include the one or more additional user-identified hotspots.
The system of claim 90 or 91, wherein, at step (b), the instructions cause the processor to use one or more machine learning modules.
A system for automated processing of 3D images of a subject to identify and/or characterize cancerous lesions within the subject, the system comprising: a processor of a computing device; and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to: (a) receive a 3D functional image of the subject obtained using a functional imaging modality; (b) automatically detect one or more hotspots within the 3D functional image, each hotspot corresponding to a local region of elevated intensity relative to its surroundings and representing a potential cancerous lesion within the subject; (c) automatically determine, for each of at least a portion of the one or more hotspots, an anatomical classification corresponding to a particular anatomical region and/or group of anatomical regions within the subject in which the potential cancerous lesion represented by the hotspot is determined to be located; and (d) store and/or provide an identification of the one or more hotspots and, for each hotspot, the anatomical classification corresponding to the hotspot, for display and/or further processing.
The system of claim 93, wherein the instructions cause the processor to perform step (b) using one or more machine learning modules.
A method for automated processing of 3D images of a subject to identify and/or characterize cancerous lesions within the subject, the method comprising: (a) receiving, by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality; (b) receiving, by the processor, a 3D anatomical image of the subject obtained using an anatomical imaging modality; (c) receiving, by the processor, a 3D segmentation map identifying one or more particular tissue regions or groups of tissue regions within the 3D functional image and/or within the 3D anatomical image; (d) automatically detecting and/or segmenting, by the processor, using one or more machine learning modules, a set of one or more hotspots within the 3D functional image, each hotspot corresponding to a local region of elevated intensity relative to its surroundings and representing a potential cancerous lesion within the subject, thereby creating one or both of (i) and (ii) as follows: (i) a hotspot list identifying, for each hotspot, a location of the hotspot, and (ii) a 3D hotspot map identifying, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image, wherein at least one of the one or more machine learning modules receives, as input, (i) the 3D functional image, (ii) the 3D anatomical image, and (iii) the 3D segmentation map; and (e) storing and/or providing the hotspot list and/or the 3D hotspot map for display and/or further processing.
The method of claim 95, comprising: receiving, by the processor, an initial 3D segmentation map identifying one or more particular tissue regions within the 3D anatomical image and/or the 3D functional image; identifying, by the processor, at least a portion of the one or more particular tissue regions as belonging to particular ones of one or more tissue groupings, and updating, by the processor, the 3D segmentation map to indicate the identified particular regions as belonging to the particular tissue groupings; and using, by the processor, the updated 3D segmentation map as input to at least one of the one or more machine learning modules.
The method of claim 96, wherein the one or more tissue groupings comprise a soft-tissue grouping, such that particular tissue regions representing soft tissue are identified as belonging to the soft-tissue grouping.
The method of claim 96 or 97, wherein the one or more tissue groupings comprise a bone-tissue grouping, such that particular tissue regions representing bone are identified as belonging to the bone-tissue grouping.
The method of any one of claims 96 to 98, wherein the one or more tissue groupings comprise a high-uptake organ grouping, such that one or more organs associated with high radiopharmaceutical uptake are identified as belonging to the high-uptake organ grouping.
The method of any one of claims 95 to 99, comprising determining, by the processor, for each detected and/or segmented hotspot, a classification of the hotspot.
The method of claim 100, comprising using at least one of the one or more machine learning modules to determine the classification of the hotspot for each detected and/or segmented lesion.
The method of any one of claims 95 to 101, wherein the one or more machine learning modules comprise: (A) a whole-body lesion detection module that detects and/or segments hotspots throughout the entire body; and (B) a prostate lesion module that detects and/or segments hotspots within the prostate.
The method of claim 102, comprising using each of (A) and (B) to generate a hotspot list and/or map, and merging the results.
The method of any one of claims 95 to 103, wherein: step (d) comprises: segmenting and classifying the set of one or more hotspots, thereby creating a labeled 3D hotspot map that identifies, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image and in which each hotspot volume is labeled as belonging to a particular hotspot class of a plurality of hotspot classes, by: segmenting a first initial set of one or more hotspots within the 3D functional image using a first machine learning module, thereby creating a first initial 3D hotspot map identifying a first initial set of hotspot volumes, wherein the first machine learning module segments hotspots of the 3D functional image according to a single hotspot class; segmenting a second initial set of one or more hotspots within the 3D functional image using a second machine learning module, thereby creating a second initial 3D hotspot map identifying a second initial set of hotspot volumes, wherein the second machine learning module segments the 3D functional image according to the plurality of different hotspot classes, such that the second initial 3D hotspot map is a multi-class 3D hotspot map in which each hotspot volume is labeled as belonging to a particular one of the plurality of different hotspot classes; and merging, by the processor, the first initial 3D hotspot map and the second initial 3D hotspot map by, for at least a portion of the hotspot volumes identified by the first initial 3D hotspot map: identifying a matching hotspot volume of the second initial 3D hotspot map, the matching hotspot volume having been labeled as belonging to a particular hotspot class of the plurality of different hotspot classes; and labeling the particular hotspot volume of the first initial 3D hotspot map as belonging to the particular hotspot class, thereby creating a merged 3D hotspot map comprising the segmented hotspot volumes of the first 3D hotspot map, labeled according to the classes to which the matching hotspot volumes of the second 3D hotspot map were identified as belonging; and step (e) comprises storing and/or providing the merged 3D hotspot map for display and/or further processing.
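The merging step of claim 104 — single-class hotspot volumes from the first module inheriting class labels from matching volumes of the multi-class second module — can be sketched as below. Hotspot volumes are represented as sets of voxel indices, and "matching" is taken to be the largest voxel-overlap; that matching rule is a hypothetical choice for illustration, not necessarily the patented one.

```python
# Sketch of the map-merging of claim 104: volumes from a single-class
# segmentation inherit the class label of the best-overlapping volume in a
# multi-class segmentation. Overlap-based matching is a hypothetical rule.

def merge_hotspot_maps(first_volumes, second_labeled):
    """first_volumes: list of voxel-index sets (single-class segmentation).
    second_labeled: list of (voxel-index set, class label) pairs.
    Returns the first map's volumes with inherited labels (None if no
    overlapping volume exists in the second map)."""
    merged = []
    for vol in first_volumes:
        best_label, best_overlap = None, 0
        for vol2, label in second_labeled:
            overlap = len(vol & vol2)
            if overlap > best_overlap:
                best_label, best_overlap = label, overlap
        merged.append((vol, best_label))
    return merged

first = [{1, 2, 3, 4}, {10, 11}, {20, 21, 22}]
second = [({2, 3, 4, 5}, "bone"), ({11, 12}, "lymph")]
merged = merge_hotspot_maps(first, second)
```

The merged output keeps the first module's (often finer) segmentation boundaries while carrying the second module's bone/lymph/prostate class labels, which is the stated purpose of the merge.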
The method of claim 104, wherein the plurality of different hotspot classes include one or more members selected from the group consisting of:
(i) bone hotspots, determined to represent lesions located in bone;
(ii) lymph hotspots, determined to represent lesions located in lymph nodes; and
(iii) prostate hotspots, determined to represent lesions located in the prostate.

The method of any one of claims 95 to 105, further comprising:
(f) receiving and/or accessing the hotspot list; and
(g) for each hotspot in the hotspot list, segmenting the hotspot using an analytical model.

The method of any one of claims 95 to 105, further comprising:
(h) receiving and/or accessing the hotspot map; and
(i) for each hotspot in the hotspot map, segmenting the hotspot using an analytical model.
The method of claim 107, wherein the analytical model is an adaptive thresholding method, and step (i) comprises:
determining one or more reference values, each based on a measure of intensity of voxels of the 3D functional image located within a particular reference volume corresponding to a particular reference tissue region; and
for each particular hotspot volume of the 3D hotspot map:
determining, by the processor, a corresponding hotspot intensity based on intensities of voxels within the particular hotspot volume; and
determining, by the processor, for the particular hotspot, a hotspot-specific threshold based on (i) the corresponding hotspot intensity and (ii) at least one of the one or more reference values.

The method of claim 108, wherein the hotspot-specific threshold is determined using a particular thresholding function selected from a plurality of thresholding functions, the particular thresholding function being selected based on a comparison of the corresponding hotspot intensity with the at least one reference value.

The method of claim 108 or 109, wherein the hotspot-specific threshold is determined as a variable percentage of the corresponding hotspot intensity, the variable percentage decreasing as hotspot intensity increases.
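A "variable percentage that decreases as hotspot intensity increases" can be illustrated with a simple interpolation scheme. The 90 %→50 % range, the linear interpolation, and the use of a single reference value as the comparison point are illustrative assumptions, not values from the patent:

```python
def hotspot_threshold(hotspot_intensity, reference_value,
                      high_pct=0.9, low_pct=0.5):
    """Return a hotspot-specific threshold as a variable percentage of the
    hotspot intensity.

    The percentage falls linearly from high_pct (for hotspots at or below
    the reference value) to low_pct (for hotspots at or above twice the
    reference value), so brighter hotspots are thresholded at a smaller
    fraction of their peak intensity.
    """
    ratio = hotspot_intensity / reference_value
    # Clamp the interpolation variable so t is 0 at ratio<=1, 1 at ratio>=2.
    t = min(max(ratio, 1.0), 2.0) - 1.0
    pct = high_pct + (low_pct - high_pct) * t  # decreases with intensity
    return pct * hotspot_intensity
```

Selecting among several such functions based on how the hotspot intensity compares with the reference value, as in claim 109, would amount to branching on `ratio` before computing `pct`.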
A method for automated processing of 3D images of a subject to identify and/or characterize cancerous lesions within the subject, the method comprising:
(a) receiving, by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality;
(b) automatically segmenting, by the processor, using a first machine learning module, a first initial set of one or more hotspots within the 3D functional image, thereby creating a first initial 3D hotspot map identifying a first initial set of hotspot volumes, wherein the first machine learning module segments hotspots of the 3D functional image according to a single hotspot class;
(c) automatically segmenting, by the processor, using a second machine learning module, a second initial set of one or more hotspots within the 3D functional image, thereby creating a second initial 3D hotspot map identifying a second initial set of hotspot volumes, wherein the second machine learning module segments the 3D functional image according to a plurality of different hotspot classes, such that the second initial 3D hotspot map is a multi-class 3D hotspot map in which each hotspot volume is labeled as belonging to a particular one of the plurality of different hotspot classes;
(d) merging, by the processor, the first initial 3D hotspot map with the second initial 3D hotspot map by performing the following for each particular hotspot volume of at least a portion of the first initial set of hotspot volumes identified by the first initial 3D hotspot map:
identifying a matching hotspot volume of the second initial 3D hotspot map, the matching hotspot volume having been labeled as belonging to a particular hotspot class of the plurality of different hotspot classes; and
labeling the particular hotspot volume of the first initial 3D hotspot map as belonging to that particular hotspot class,
thereby creating a merged 3D hotspot map comprising the segmented hotspot volumes of the first 3D hotspot map, labeled according to the classes identified for the matching hotspots of the second 3D hotspot map; and
(e) storing and/or providing the merged 3D hotspot map for display and/or further processing.

The method of claim 111, wherein the plurality of different hotspot classes include one or more members selected from the group consisting of:
(i) bone hotspots, determined to represent lesions located in bone;
(ii) lymph hotspots, determined to represent lesions located in lymph nodes; and
(iii) prostate hotspots, determined to represent lesions located in the prostate.
A method for automated processing of 3D images of a subject, via an adaptive thresholding method, to identify and/or characterize cancerous lesions within the subject, the method comprising:
(a) receiving, by a processor of a computing device, a 3D functional image of the subject obtained using a functional imaging modality;
(b) receiving, by the processor, a preliminary 3D hotspot map identifying one or more preliminary hotspot volumes within the 3D functional image;
(c) determining, by the processor, one or more reference values, each based on a measure of intensity of voxels of the 3D functional image located within a particular reference volume corresponding to a particular reference tissue region;
(d) creating, by the processor, a refined 3D hotspot map based on the preliminary hotspot volumes and using adaptive-thresholding-based segmentation, by performing the following for each particular preliminary hotspot volume of at least a portion of the one or more preliminary hotspot volumes identified by the preliminary 3D hotspot map:
determining a corresponding hotspot intensity based on intensities of voxels within the particular preliminary hotspot volume;
determining, for the particular preliminary hotspot volume, a hotspot-specific threshold based on (i) the corresponding hotspot intensity and (ii) at least one of the one or more reference values;
segmenting at least a portion of the 3D functional image using a threshold-based segmentation algorithm that performs image segmentation using the hotspot-specific threshold determined for the particular preliminary hotspot volume, thereby determining a refined, analytically segmented hotspot volume corresponding to the particular preliminary hotspot volume; and
including the refined hotspot volume in the refined 3D hotspot map; and
(e) storing and/or providing the refined 3D hotspot map for display and/or further processing.

The method of claim 113, wherein the hotspot-specific threshold is determined using a particular thresholding function selected from a plurality of thresholding functions, the particular thresholding function being selected based on a comparison of the corresponding hotspot intensity with the at least one reference value.

The method of claim 113 or 114, wherein the hotspot-specific threshold is determined as a variable percentage of the corresponding hotspot intensity, the variable percentage decreasing as hotspot intensity increases.
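The refinement loop of step (d) — thresholding each preliminary volume at its own hotspot-specific level — might look like the following NumPy sketch. Two simplifications are assumptions: the refined volume is searched only inside the preliminary volume (no dilation or region growing outward), and the "corresponding hotspot intensity" is taken to be the peak voxel intensity.

```python
import numpy as np

def refine_hotspots(functional_img, preliminary_map, threshold_fn):
    """Refine preliminary hotspot volumes by threshold-based segmentation.

    functional_img  : 3D float array of voxel intensities.
    preliminary_map : 3D int array; 0 = background, 1..N = preliminary
                      hotspot volume ids.
    threshold_fn    : callable mapping a hotspot intensity to a
                      hotspot-specific threshold.
    Returns a refined 3D hotspot map carrying the same volume ids.
    """
    refined = np.zeros_like(preliminary_map)
    for vol_id in np.unique(preliminary_map):
        if vol_id == 0:
            continue
        mask = preliminary_map == vol_id
        # Corresponding hotspot intensity: here, the peak voxel intensity.
        hotspot_intensity = functional_img[mask].max()
        thr = threshold_fn(hotspot_intensity)
        # Keep only voxels of the preliminary volume above the
        # hotspot-specific threshold.
        refined[mask & (functional_img > thr)] = vol_id
    return refined
```

With `threshold_fn` set to a variable-percentage rule of the kind claimed, each refined volume shrinks to the region above that hotspot's own threshold.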
A method for automated processing of 3D images of a subject to identify and/or characterize cancerous lesions within the subject, the method comprising:
(a) receiving, by a processor of a computing device, a 3D anatomical image of the subject obtained using an anatomical imaging modality, wherein the 3D anatomical image includes a graphical representation of tissue within the subject;
(b) automatically segmenting, by the processor, the 3D anatomical image to create a 3D segmentation map identifying a plurality of volumes of interest (VOIs) in the 3D anatomical image, including a liver volume corresponding to the subject's liver and an aorta volume corresponding to a portion of the aorta;
(c) receiving, by the processor, a 3D functional image of the subject obtained using a functional imaging modality;
(d) automatically segmenting, by the processor, one or more hotspots within the 3D functional image, each segmented hotspot corresponding to a local region of elevated intensity relative to its surroundings and representing a potential cancerous lesion within the subject, thereby identifying one or more automatically segmented hotspot volumes;
(e) causing, by the processor, rendering of a graphical representation of the one or more automatically segmented hotspot volumes for display within an interactive graphical user interface (GUI);
(f) receiving, by the processor, via the interactive GUI, a user selection of a final hotspot set comprising at least a portion of the one or more automatically segmented hotspot volumes;
(g) determining, by the processor, for each hotspot volume of the final set, a lesion index value based on (i) intensities of voxels of the functional image corresponding to the hotspot volume and (ii) one or more reference values determined using intensities of voxels of the functional image corresponding to the liver volume and the aorta volume; and
(e) storing and/or providing the final hotspot set and/or the lesion index values for display and/or further processing.

The method of claim 116, wherein:
step (b) comprises segmenting the anatomical image such that the 3D segmentation map identifies one or more bone volumes corresponding to one or more bones of the subject, and
step (d) comprises identifying, within the functional image, a skeletal volume using the one or more bone volumes, and segmenting one or more bone hotspot volumes located within the skeletal volume.

The method of claim 116 or 117, wherein:
step (b) comprises segmenting the anatomical image such that the 3D segmentation map identifies one or more organ volumes corresponding to soft-tissue organs of the subject, and
step (d) comprises identifying, within the functional image, one or more soft-tissue volumes using the one or more segmented organ volumes, and segmenting one or more lymph and/or prostate hotspot volumes located within the soft-tissue volumes.
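Step (g)'s lesion index — a hotspot intensity graded against blood-pool (aorta) and liver reference levels — can be illustrated with a piecewise-linear scale. The 0–3 index range and the interpolation anchors below are assumptions for illustration only, not the scale defined in the patent:

```python
def lesion_index(hotspot_intensity, aorta_ref, liver_ref):
    """Map a hotspot intensity onto a 0..3 scale anchored at the blood-pool
    (aorta) and liver reference values: 0 at zero uptake, 1 at the aorta
    reference, 2 at the liver reference, 3 at twice the liver reference,
    with linear interpolation between anchors."""
    anchors = [0.0, aorta_ref, liver_ref, 2.0 * liver_ref]
    if hotspot_intensity >= anchors[-1]:
        return 3.0
    for i in range(3):
        lo, hi = anchors[i], anchors[i + 1]
        if hotspot_intensity < hi:
            # Linear interpolation within the current segment.
            return i + (hotspot_intensity - lo) / (hi - lo)
    return 3.0
```

Anchoring the index to in-image reference regions rather than absolute intensities makes it comparable across scans with different tracer doses and acquisition settings.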
The method of claim 118, wherein step (d) further comprises, prior to segmenting the one or more lymph and/or prostate hotspot volumes, adjusting intensities of the functional image to suppress intensity from one or more high-uptake tissue regions.

The method of any one of claims 116 to 119, wherein step (g) comprises determining a liver reference value using intensities of voxels of the functional image corresponding to the liver volume.

The method of claim 120, comprising fitting a two-component Gaussian mixture model to a histogram of intensities of functional image voxels corresponding to the liver volume, using the two-component Gaussian mixture model fit to identify and exclude from the liver volume voxels having intensities associated with regions of abnormally low uptake, and determining the liver reference value using intensities of the remaining voxels.
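The two-component mixture fit above can be sketched with a plain 1-D expectation-maximization loop. This is an illustrative implementation, not the patented one: the percentile-based initialization, iteration count, and use of the mean of the retained voxels as the reference value are all assumptions.

```python
import numpy as np

def liver_reference(liver_intensities, n_iter=200):
    """Fit a two-component 1-D Gaussian mixture to liver voxel intensities
    with EM, drop voxels assigned to the lower-mean component (abnormally
    low uptake), and return the mean intensity of the remaining voxels."""
    x = np.asarray(liver_intensities, dtype=float)
    # Initialize the two component means from low/high percentiles.
    mu = np.array([np.percentile(x, 10), np.percentile(x, 90)])
    var = np.full(2, x.var() + 1e-9)
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component per voxel.
        pdf = (pi / np.sqrt(2 * np.pi * var)
               * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var))
        resp = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: update mixture weights, means, and variances.
        nk = resp.sum(axis=0)
        pi = nk / x.size
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
    # Keep voxels assigned to the higher-mean (normal uptake) component.
    keep = resp.argmax(axis=1) == mu.argmax()
    return x[keep].mean()
```

A production implementation could equally use a library mixture-model fitter; the point is that the reference value is computed after the low-uptake mode is excluded.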
A system for automated processing of 3D images of a subject to identify and/or characterize cancerous lesions within the subject, the system comprising:
a processor of a computing device; and
a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to:
(a) receive a 3D functional image of the subject obtained using a functional imaging modality;
(b) receive a 3D anatomical image of the subject obtained using an anatomical imaging modality;
(c) receive a 3D segmentation map identifying one or more particular tissue regions or groups of tissue regions within the 3D functional image and/or the 3D anatomical image;
(d) automatically detect and/or segment, using one or more machine learning modules, a set of one or more hotspots within the 3D functional image, each hotspot corresponding to a local region of elevated intensity relative to its surroundings and representing a potential cancerous lesion within the subject, thereby creating one or both of (i) and (ii): (i) a hotspot list identifying, for each hotspot, a location of the hotspot, and (ii) a 3D hotspot map identifying, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image,
wherein at least one of the one or more machine learning modules receives as input (i) the 3D functional image, (ii) the 3D anatomical image, and (iii) the 3D segmentation map; and
(e) store and/or provide the hotspot list and/or the 3D hotspot map for display and/or further processing.

The system of claim 122, wherein the instructions cause the processor to:
receive an initial 3D segmentation map identifying one or more particular tissue regions within the 3D anatomical image and/or the 3D functional image;
identify at least a portion of the one or more particular tissue regions as belonging to particular ones of one or more tissue groups, and update the 3D segmentation map to indicate the identified particular regions as belonging to the particular tissue groups; and
use the updated 3D segmentation map as input to at least one of the one or more machine learning modules.

The system of claim 123, wherein the one or more tissue groups include a soft-tissue group, such that particular tissue regions representing soft tissue are identified as belonging to the soft-tissue group.

The system of claim 123 or 124, wherein the one or more tissue groups include a bone tissue group, such that particular tissue regions representing bone are identified as belonging to the bone tissue group.

The system of any one of claims 123 to 125, wherein the one or more tissue groups include a high-uptake organ group, such that one or more organs associated with high radiopharmaceutical uptake are identified as belonging to the high-uptake group.

The system of any one of claims 122 to 126, wherein the instructions cause the processor to determine, for each detected and/or segmented hotspot, a classification of the hotspot.
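The tissue-group update described in the claims above — collapsing individual regions of an initial segmentation map into soft-tissue, bone, and high-uptake groups — can be sketched as a relabeling pass. The specific organs, region ids, and group ids below are illustrative assumptions; only the three group names come from the claims.

```python
import numpy as np

# Hypothetical mapping from named regions of an initial segmentation map
# to the tissue groups named in the claims.
REGION_TO_GROUP = {
    "prostate": "soft_tissue",
    "lymph_nodes": "soft_tissue",
    "pelvis": "bone",
    "spine": "bone",
    "liver": "high_uptake",
    "kidneys": "high_uptake",
}

def update_segmentation(seg_map, region_ids, group_ids):
    """Relabel an initial 3D segmentation map so each voxel carries a
    tissue-group id instead of its individual region id.

    seg_map    : 3D int array of per-voxel region ids (0 = background).
    region_ids : dict mapping region name -> region id in seg_map.
    group_ids  : dict mapping group name -> output group id.
    """
    grouped = np.zeros_like(seg_map)
    for name, rid in region_ids.items():
        group = REGION_TO_GROUP.get(name)
        if group is not None:
            grouped[seg_map == rid] = group_ids[group]
    return grouped
```

The grouped map, rather than the fine-grained one, would then be passed as the segmentation-map input to a machine learning module.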
The system of claim 127, wherein the instructions cause the processor to use at least one of the one or more machine learning modules to determine, for each detected and/or segmented hotspot, the classification of the hotspot.

The system of any one of claims 122 to 128, wherein the one or more machine learning modules comprise:
(A) a whole-body lesion detection module that detects and/or segments hotspots throughout the entire body; and
(B) a prostate lesion module that detects and/or segments hotspots within the prostate.

The system of claim 129, wherein the instructions cause the processor to generate the hotspot list and/or the maps using each of (A) and (B), and to merge the results.

The system of any one of claims 122 to 130, wherein:
at step (d), the instructions cause the processor to segment and classify the set of one or more hotspots to create a labeled 3D hotspot map that identifies, for each hotspot, a corresponding 3D hotspot volume within the 3D functional image, wherein each hotspot is labeled as belonging to a particular hotspot class of a plurality of hotspot classes, by:
segmenting a first initial set of one or more hotspots within the 3D functional image using a first machine learning module, thereby creating a first initial 3D hotspot map identifying a first initial set of hotspot volumes, wherein the first machine learning module segments hotspots of the 3D functional image according to a single hotspot class;
segmenting a second initial set of one or more hotspots within the 3D functional image using a second machine learning module, thereby creating a second initial 3D hotspot map identifying a second initial set of hotspot volumes, wherein the second machine learning module segments the 3D functional image according to the plurality of different hotspot classes, such that the second initial 3D hotspot map is a multi-class 3D hotspot map in which each hotspot volume is labeled as belonging to a particular one of the plurality of different hotspot classes; and
merging the first initial 3D hotspot map with the second initial 3D hotspot map by performing the following for at least a portion of the hotspot volumes identified by the first initial 3D hotspot map:
identifying a matching hotspot volume of the second initial 3D hotspot map, the matching hotspot volume having been labeled as belonging to a particular hotspot class of the plurality of different hotspot classes; and
labeling the particular hotspot volume of the first initial 3D hotspot map as belonging to that particular hotspot class,
thereby creating a merged 3D hotspot map comprising the segmented hotspot volumes of the first 3D hotspot map, labeled according to the classes identified for the matching hotspots of the second 3D hotspot map; and
at step (e), the instructions cause the processor to store and/or provide the merged 3D hotspot map for display and/or further processing.
The system of claim 131, wherein the plurality of different hotspot classes include one or more members selected from the group consisting of:
(i) bone hotspots, determined to represent lesions located in bone;
(ii) lymph hotspots, determined to represent lesions located in lymph nodes; and
(iii) prostate hotspots, determined to represent lesions located in the prostate.

The system of any one of claims 122 to 132, wherein the instructions further cause the processor to:
(f) receive and/or access the hotspot list; and
(g) for each hotspot in the hotspot list, segment the hotspot using an analytical model.

The system of any one of claims 122 to 133, wherein the instructions further cause the processor to:
(h) receive and/or access the hotspot map; and
(i) for each hotspot in the hotspot map, segment the hotspot using an analytical model.

The system of claim 134, wherein the analytical model is an adaptive thresholding method, and at step (i) the instructions cause the processor to:
determine one or more reference values, each based on a measure of intensity of voxels of the 3D functional image located within a particular reference volume corresponding to a particular reference tissue region; and
for each particular hotspot volume of the 3D hotspot map:
determine a corresponding hotspot intensity based on intensities of voxels within the particular hotspot volume; and
determine, for the particular hotspot, a hotspot-specific threshold based on (i) the corresponding hotspot intensity and (ii) at least one of the one or more reference values.

The system of claim 135, wherein the hotspot-specific threshold is determined using a particular thresholding function selected from a plurality of thresholding functions, the particular thresholding function being selected based on a comparison of the corresponding hotspot intensity with the at least one reference value.

The system of claim 135 or 136, wherein the hotspot-specific threshold is determined as a variable percentage of the corresponding hotspot intensity, the variable percentage decreasing as hotspot intensity increases.
A system for automated processing of 3D images of a subject to identify and/or characterize cancerous lesions within the subject, the system comprising:
a processor of a computing device; and
a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to:
(a) receive a 3D functional image of the subject obtained using a functional imaging modality;
(b) automatically segment, using a first machine learning module, a first initial set of one or more hotspots within the 3D functional image, thereby creating a first initial 3D hotspot map identifying a first initial set of hotspot volumes, the corresponding 3D hotspot volumes within the 3D functional image, wherein the first machine learning module segments hotspots of the 3D functional image according to a single hotspot class;
(c) automatically segment, using a second machine learning module, a second initial set of one or more hotspots within the 3D functional image, thereby creating a second initial 3D hotspot map identifying a second initial set of hotspot volumes, wherein the second machine learning module segments the 3D functional image according to a plurality of different hotspot classes, such that the second initial 3D hotspot map is a multi-class 3D hotspot map in which each hotspot volume is labeled as belonging to a particular one of the plurality of different hotspot classes;
(d) merge the first initial 3D hotspot map with the second initial 3D hotspot map by performing the following for each particular hotspot volume of at least a portion of the first initial set of hotspot volumes identified by the first initial 3D hotspot map:
identifying a matching hotspot volume of the second initial 3D hotspot map, the matching hotspot volume having been labeled as belonging to a particular hotspot class of the plurality of different hotspot classes; and
labeling the particular hotspot volume of the first initial 3D hotspot map as belonging to that particular hotspot class,
thereby creating a merged 3D hotspot map comprising the segmented hotspot volumes of the first 3D hotspot map, labeled according to the classes identified for the matching hotspots of the second 3D hotspot map; and
(e) store and/or provide the merged 3D hotspot map for display and/or further processing.

The system of claim 138, wherein the plurality of different hotspot classes include one or more members selected from the group consisting of:
(i) bone hotspots, determined to represent lesions located in bone;
(ii) lymph hotspots, determined to represent lesions located in lymph nodes; and
(iii) prostate hotspots, determined to represent lesions located in the prostate.
A system for automated processing of 3D images of a subject, via an adaptive thresholding method, to identify and/or characterize cancerous lesions within the subject, the system comprising:
a processor of a computing device; and
a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to:
(a) receive a 3D functional image of the subject obtained using a functional imaging modality;
(b) receive a preliminary 3D hotspot map identifying one or more preliminary hotspot volumes within the 3D functional image;
(c) determine one or more reference values, each based on a measure of intensity of voxels of the 3D functional image located within a particular reference volume corresponding to a particular reference tissue region;
(d) create a refined 3D hotspot map based on the preliminary hotspot volumes and using adaptive-thresholding-based segmentation, by performing the following for each particular preliminary hotspot volume of at least a portion of the one or more preliminary hotspot volumes identified by the preliminary 3D hotspot map:
determining a corresponding hotspot intensity based on intensities of voxels within the particular preliminary hotspot volume;
determining, for the particular preliminary hotspot volume, a hotspot-specific threshold based on (i) the corresponding hotspot intensity and (ii) at least one of the one or more reference values;
segmenting at least a portion of the 3D functional image using a threshold-based segmentation algorithm that performs image segmentation using the hotspot-specific threshold determined for the particular preliminary hotspot, thereby determining a refined, analytically segmented hotspot volume corresponding to the particular preliminary hotspot volume; and
including the refined hotspot volume in the refined 3D hotspot map; and
(e) store and/or provide the refined 3D hotspot map for display and/or further processing.

The system of claim 140, wherein the hotspot-specific threshold is determined using a particular thresholding function selected from a plurality of thresholding functions, the particular thresholding function being selected based on a comparison of the corresponding hotspot intensity with the at least one reference value.

The system of claim 140 or 141, wherein the hotspot-specific threshold is determined as a variable percentage of the corresponding hotspot intensity, the variable percentage decreasing as hotspot intensity increases.
143. A system for automated processing of 3D images of a subject to identify and/or characterize cancerous lesions within the subject, the system comprising:

a processor of a computing device; and

a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to:

(a) receive a 3D anatomical image of the subject obtained using an anatomical imaging modality, wherein the 3D anatomical image comprises a graphical representation of tissue within the subject;

(b) automatically segment the 3D anatomical image to create a 3D segmentation map that identifies a plurality of volumes of interest (VOIs) within the 3D anatomical image, including a liver volume corresponding to the subject's liver and an aorta volume corresponding to a portion of the aorta;

(c) receive a 3D functional image of the subject obtained using a functional imaging modality;

(d) automatically segment one or more hotspots within the 3D functional image, each segmented hotspot corresponding to a localized region of elevated intensity relative to its surroundings and representing a potential cancerous lesion within the subject, thereby identifying one or more automatically segmented hotspot volumes;

(e) cause rendering of a graphical representation of the one or more automatically segmented hotspot volumes for display within an interactive graphical user interface (GUI);

(f) receive, via the interactive GUI, a user selection of a final hotspot set comprising at least a portion of the one or more automatically segmented hotspot volumes;

(g) for each hotspot volume of the final set, determine a lesion index value based on (i) intensities of voxels of the functional image corresponding to the hotspot volume and (ii) one or more reference values determined using intensities of voxels of the functional image corresponding to the liver volume and the aorta volume; and

(h) store and/or provide the final hotspot set and/or the lesion index values for display and/or further processing.

144. The system of claim 143, wherein:

in step (b), the instructions cause the processor to segment the anatomical image such that the 3D segmentation map identifies one or more bone volumes corresponding to one or more bones of the subject, and

in step (d), the instructions cause the processor to identify a skeletal volume within the functional image using the one or more bone volumes, and to segment one or more bone hotspot volumes located within the skeletal volume.

145. The system of claim 143 or 144, wherein:

in step (b), the instructions cause the processor to segment the anatomical image such that the 3D segmentation map identifies one or more organ volumes corresponding to soft-tissue organs of the subject, and

in step (d), the instructions cause the processor to identify a soft-tissue volume within the functional image using the one or more segmented organ volumes, and to segment one or more lymph and/or prostate hotspot volumes located within the soft-tissue volume.
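Step (g) of the first claim above determines a lesion index from the hotspot's voxel intensities together with liver and aorta (blood-pool) reference values. The claims do not fix a particular formula; the sketch below is a hypothetical piecewise-linear scheme on an illustrative 0–3 scale (the function names, the use of plain means as reference values, and the scale itself are assumptions, not the patented method).

```python
def reference_values(liver_voxels, aorta_voxels):
    """Reference intensities from the segmented reference organs (plain means)."""
    liver_ref = sum(liver_voxels) / len(liver_voxels)
    aorta_ref = sum(aorta_voxels) / len(aorta_voxels)
    return liver_ref, aorta_ref


def lesion_index(hotspot_voxels, liver_ref, aorta_ref):
    """Hypothetical piecewise-linear index on a 0-3 scale:
    0 at zero uptake, 1 at the aorta (blood-pool) level, 2 at the liver
    level, saturating at 3 for twice the liver level and above."""
    peak = max(hotspot_voxels)
    if peak <= aorta_ref:
        return peak / aorta_ref
    if peak <= liver_ref:
        return 1.0 + (peak - aorta_ref) / (liver_ref - aorta_ref)
    return min(3.0, 2.0 + (peak - liver_ref) / liver_ref)
```

A hotspot whose peak intensity sits between the blood-pool and liver references thus maps to an index between 1 and 2, mirroring how the claimed reference values anchor the scale.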
146. The system of claim 145, wherein in step (d), the instructions cause the processor, prior to segmenting the one or more lymph and/or prostate hotspot volumes, to adjust intensities of the functional image so as to suppress intensity originating from one or more high-uptake tissue regions.

147. The system of any one of claims 143 to 146, wherein in step (g), the instructions cause the processor to determine a liver reference value using intensities of voxels of the functional image corresponding to the liver volume.

148. The system of claim 147, wherein the instructions cause the processor to:

fit a two-component Gaussian mixture model to a histogram of the intensities of the functional-image voxels corresponding to the liver volume;

use the two-component Gaussian mixture model fit to identify, and exclude from the liver volume, voxels having intensities associated with regions of abnormally low uptake; and

determine the liver reference value using the intensities of the remaining voxels.
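Claim 148 fits a two-component Gaussian mixture to the liver-voxel intensities so that voxels belonging to an abnormally low-uptake mode (for example, a tumor-involved region within the liver) can be excluded before computing the liver reference. A minimal 1D sketch of that idea follows; the plain EM implementation, the min/max initialization, and the "keep voxels better explained by the higher-mean component" rule are illustrative assumptions, not details taken from the claims.

```python
import math


def _gauss(x, mu, var):
    """1D Gaussian density."""
    return math.exp(-((x - mu) ** 2) / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)


def fit_gmm2(xs, iters=200):
    """EM fit of a two-component 1D Gaussian mixture.

    Returns (weights, means, variances). Components are initialized at the
    minimum and maximum intensity, each with the overall variance.
    """
    n = len(xs)
    mean = sum(xs) / n
    var0 = max(1e-6, sum((x - mean) ** 2 for x in xs) / n)
    mu = [min(xs), max(xs)]
    var = [var0, var0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each voxel intensity
        resp = []
        for x in xs:
            p = [w[k] * _gauss(x, mu[k], var[k]) for k in range(2)]
            s = p[0] + p[1]
            if s > 0.0:
                resp.append([p[0] / s, p[1] / s])
            else:  # numerical underflow far from both components
                resp.append([0.5, 0.5])
        # M-step: re-estimate weights, means, and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / n
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(1e-6, sum(r[k] * (x - mu[k]) ** 2
                                   for r, x in zip(resp, xs)) / nk)
    return w, mu, var


def liver_reference(liver_intensities):
    """Drop voxels better explained by the low-mean (abnormally low uptake)
    component, then take the mean of the remaining healthy-liver voxels."""
    w, mu, var = fit_gmm2(liver_intensities)
    hi = 0 if mu[0] >= mu[1] else 1
    lo = 1 - hi
    kept = [x for x in liver_intensities
            if w[hi] * _gauss(x, mu[hi], var[hi]) >= w[lo] * _gauss(x, mu[lo], var[lo])]
    return sum(kept) / len(kept)
```

With well-separated modes, the reference computed this way lands near the healthy-liver mean rather than being dragged down by the low-uptake voxels, which is the point of the exclusion step in claim 148.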
TW110124481A 2020-07-06 2021-07-02 Systems and methods for artificial intelligence-based image analysis for detection and characterization of lesions TW202207241A (en)

Applications Claiming Priority (8)

Application Number Priority Date Filing Date Title
US202063048436P 2020-07-06 2020-07-06
US63/048,436 2020-07-06
US17/008,411 2020-08-31
US17/008,411 US11721428B2 (en) 2020-07-06 2020-08-31 Systems and methods for artificial intelligence-based image analysis for detection and characterization of lesions
US202063127666P 2020-12-18 2020-12-18
US63/127,666 2020-12-18
US202163209317P 2021-06-10 2021-06-10
US63/209,317 2021-06-10

Publications (1)

Publication Number Publication Date
TW202207241A (en) 2022-02-16

Family

ID=79552821

Family Applications (1)

Application Number Title Priority Date Filing Date
TW110124481A TW202207241A (en) 2020-07-06 2021-07-02 Systems and methods for artificial intelligence-based image analysis for detection and characterization of lesions

Country Status (10)

Country Link
EP (1) EP4176377A1 (en)
JP (1) JP2023532761A (en)
KR (1) KR20230050319A (en)
CN (1) CN116134479A (en)
AU (1) AU2021305935A1 (en)
BR (1) BR112022026642A2 (en)
CA (1) CA3163190A1 (en)
MX (1) MX2022016373A (en)
TW (1) TW202207241A (en)
WO (1) WO2022008374A1 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109844865B (en) 2016-10-27 2021-03-30 普罗热尼奇制药公司 Network, decision support system and related Graphical User Interface (GUI) application for medical image analysis
EP3646240A4 (en) 2017-06-26 2021-03-17 The Research Foundation for The State University of New York System, method, and computer-accessible medium for virtual pancreatography
CN113272859A (en) 2019-01-07 2021-08-17 西尼诊断公司 System and method for platform neutral whole body image segmentation
CN113748443A (en) 2019-04-24 2021-12-03 普罗热尼奇制药公司 System and method for interactively adjusting intensity window settings in nuclear medicine images
TW202105407A (en) 2019-04-24 2021-02-01 美商普吉尼製藥公司 Systems and methods for automated and interactive analysis of bone scan images for detection of metastases
US11900597B2 (en) 2019-09-27 2024-02-13 Progenics Pharmaceuticals, Inc. Systems and methods for artificial intelligence-based image analysis for cancer assessment
US11721428B2 (en) 2020-07-06 2023-08-08 Exini Diagnostics Ab Systems and methods for artificial intelligence-based image analysis for detection and characterization of lesions
US20230115732A1 (en) 2021-10-08 2023-04-13 Exini Diagnostics Ab Systems and methods for automated identification and classification of lesions in local lymph and distant metastases
CN114767268B (en) * 2022-03-31 2023-09-22 复旦大学附属眼耳鼻喉科医院 Anatomical structure tracking method and device suitable for endoscope navigation system
WO2023232067A1 (en) * 2022-05-31 2023-12-07 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for lesion region identification
US20230410985A1 (en) 2022-06-08 2023-12-21 Exini Diagnostics Ab Systems and methods for assessing disease burden and progression
CN116309585B (en) * 2023-05-22 2023-08-22 山东大学 Method and system for identifying breast ultrasound image target area based on multitask learning
CN117274244B (en) * 2023-11-17 2024-02-20 艾迪普科技股份有限公司 Medical imaging inspection method, system and medium based on three-dimensional image recognition processing

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7876938B2 (en) * 2005-10-06 2011-01-25 Siemens Medical Solutions Usa, Inc. System and method for whole body landmark detection, segmentation and change quantification in digital images
US8562945B2 (en) 2008-01-09 2013-10-22 Molecular Insight Pharmaceuticals, Inc. Technetium- and rhenium-bis(heteroaryl) complexes and methods of use thereof
PL3222615T3 (en) 2008-08-01 2022-09-26 The Johns Hopkins University Psma-binding agents and uses thereof
PL2389361T3 (en) 2008-12-05 2017-02-28 Molecular Insight Pharmaceuticals, Inc. Technetium- and rhenium-bis(heteroaryl) complexes and methods of use thereof for inhibiting psma
CN109844865B (en) 2016-10-27 2021-03-30 普罗热尼奇制药公司 Network, decision support system and related Graphical User Interface (GUI) application for medical image analysis
CA3085441A1 (en) 2018-01-08 2019-07-11 Progenics Pharmaceuticals, Inc. Systems and methods for rapid neural network-based image segmentation and radiopharmaceutical uptake determination
CN113272859A (en) 2019-01-07 2021-08-17 西尼诊断公司 System and method for platform neutral whole body image segmentation

Also Published As

Publication number Publication date
CA3163190A1 (en) 2022-01-13
MX2022016373A (en) 2023-03-06
WO2022008374A1 (en) 2022-01-13
AU2021305935A1 (en) 2023-02-02
CN116134479A (en) 2023-05-16
BR112022026642A2 (en) 2023-01-24
EP4176377A1 (en) 2023-05-10
KR20230050319A (en) 2023-04-14
JP2023532761A (en) 2023-07-31

Similar Documents

Publication Publication Date Title
TW202207241A (en) Systems and methods for artificial intelligence-based image analysis for detection and characterization of lesions
US11941817B2 (en) Systems and methods for platform agnostic whole body image segmentation
US11721428B2 (en) Systems and methods for artificial intelligence-based image analysis for detection and characterization of lesions
US11937962B2 (en) Systems and methods for automated and interactive analysis of bone scan images for detection of metastases
US20210093249A1 (en) Systems and methods for artificial intelligence-based image analysis for cancer assessment
US20190209116A1 (en) Systems and methods for rapid neural network-based image segmentation and radiopharmaceutical uptake determination
CN111602174A (en) System and method for rapidly segmenting images and determining radiopharmaceutical uptake based on neural network
US20210335480A1 (en) Systems and methods for deep-learning-based segmentation of composite images
US11900597B2 (en) Systems and methods for artificial intelligence-based image analysis for cancer assessment
WO2023057411A1 (en) Systems and methods for automated identification and classification of lesions in local lymph and distant metastases
US20230351586A1 (en) Systems and methods for artificial intelligence-based image analysis for detection and characterization of lesions
TWI835768B (en) Systems and methods for rapid neural network-based image segmentation and radiopharmaceutical uptake determination
US20240127437A1 (en) Systems and methods for artificial intelligence-based image analysis for cancer assessment