WO2023228085A1 - System and method for determining pulmonary parenchyma baseline value and enhance pulmonary parenchyma lesions - Google Patents


Info

Publication number
WO2023228085A1
WO2023228085A1 · PCT/IB2023/055303
Authority
WO
WIPO (PCT)
Prior art keywords
image
voxel
organ
segmentation
computerized
Application number
PCT/IB2023/055303
Other languages
French (fr)
Inventor
Xin Gao
Longxi ZHOU
Original Assignee
King Abdullah University Of Science And Technology
Application filed by King Abdullah University Of Science And Technology
Publication of WO2023228085A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30061 Lung
    • G06T2207/30096 Tumor; Lesion


Abstract

A method for enhancing a computerized image (102) of an organ of a patient to detect non-visible lesions. The method includes receiving (1600) the computerized image (102) of the organ; segmenting (1602) the computerized image (102) with a neural network (1500), based on a voxel-wise weighted loss function, to generate segmentation masks (118), wherein the voxel-wise weighted loss function sums all voxel loss in the computerized image (102), and the voxel-wise weighted loss function is a function of (1) a weight w for a given voxel, (2) a predicted probability p that the given voxel is positive, and (3) a ground truth probability p' that the given voxel is positive; removing (1604) organ characteristics from the computerized image (102), based on the segmentation masks (118), to obtain a cleaned organ image (120); and generating (1606) an enhanced image (128) of the cleaned organ image (120) based on a baseline value and a variation σ of the baseline value.

Description

SYSTEM AND METHOD FOR DETERMINING PULMONARY PARENCHYMA BASELINE VALUE AND ENHANCE PULMONARY PARENCHYMA LESIONS CROSS-REFERENCE TO RELATED APPLICATIONS [0001] This application claims priority to U.S. Provisional Patent Application No.63/344,849, filed on May 23, 2022, entitled “METHOD TO DETERMINE PULMONARY PARENCHYMA BASELINE VALUE AND ENHANCE PULMONARY PARENCHYMA LESIONS,” the disclosure of which is incorporated herein by reference in its entirety. BACKGROUND OF THE INVENTION TECHNICAL FIELD [0002] Embodiments of the subject matter disclosed herein generally relate to a system and method for detecting and quantifying pulmonary parenchyma lesions on chest computer tomography (CT) images or other computerized images, and more particularly, to a Deep-LungParenchyma-Enhancing (DLPE) computer-aided detection (CAD) method which removes irrelevant tissues from the perspective of pulmonary parenchyma, and calculates the scan-level optimal window, which considerably enhances parenchyma lesions relative to the lung window. DISCUSSION OF THE BACKGROUND [0003] Some pulmonary related diseases, like COVID-19, often cause pulmonary parenchyma lesions months after discharge, such as ground glass opacities, consolidations and long-term fibrosis. Past studies quantified lesions in computerized tomography (CT) scans of COVID-19 inpatients and found that CT lesions are predictive indicators for COVID-19 inpatients’ symptoms and short-term prognosis. However, inconsistencies between respiratory sequelae and their follow- up CT scans were found among COVID-19 survivors discharged from hospitals. It was observed that survivors who had severe symptoms have much worse six- months follow-up lung function than the mild-symptom patients, whereas their six- month follow-up CT scans are very similar from almost all aspects. Further, it was observed that a large portion of COVID-19 survivors have respiratory sequelae six months after discharge. However, experienced radiologists and state-of-the-art (SOTA) artificial intelligence (AI) systems fail to detect any CT lesion for about half of the survivors, and can only detect negligible lesions (average volume < 5 cm3) on the remaining patients. [0004] Such inconsistencies raise a key question towards understanding the prognosis and rehabilitation of COVID-19 patients, which is very relevant for the post-pandemic era: are these respiratory sequelae caused by pulmonary lesions that are visually indiscernible on chest CT under the lung window, or are they caused by other reasons, such as neurological impairments and muscle weakness, whereas the patients’ lungs are mostly recovered? [0005] Artificial intelligence has shown the potential to solve the aforementioned question, as it has rich capabilities in mining subvisual image features [1-3]. To this end, existing methods train classifiers to distinguish the labelled classes (for example, CT scans from fully recovered survivors versus those from survivors with sequelae), and then extract image features that contribute to the classification performance, such as indiscernible low-level textures, image intensity distributions, grey-level co-occurrence matrix, or local image patterns that correspond to filters in convolutional neural networks (CNNs). However, such subvisual features extracted by the existing approaches have poor medical interpretability and are prone to false discoveries due to data bias. 
These limitations, consequently, lead to difficulties in gaining pathological insights, understanding mechanisms, developing better treatments and driving scientific discoveries, which, unfortunately, are some of the most common criticisms of AI-based computer-aided detection methods. [0006] Thus, there is a need for a new approach/method that is capable of visualizing CT lesions that could previously not be seen, to help radiologists improve the diagnosis and treatment of lung-affecting diseases.
SUMMARY OF THE INVENTION [0007] According to an embodiment, there is a method for enhancing a computerized image of an organ of a patient to detect non-visible lesions, and the method includes receiving the computerized image of the organ, segmenting the computerized image with a neural network, based on a voxel-wise weighted loss function, to generate segmentation masks, wherein the voxel-wise weighted loss function sums all voxel loss in the computerized image, and the voxel-wise weighted loss function is a function of (1) a weight w for a given voxel, (2) a predicted probability p that the given voxel is positive, and (3) a ground truth probability p’ that the given voxel is positive, removing organ characteristics from the computerized image, based on the segmentation masks, to obtain a cleaned organ image, generating an enhanced image of the cleaned organ image based on a baseline value and a variation σ of the baseline value. [0008] According to another embodiment, there is a computing device configured to enhance a computerized image of an organ of a patient to detect non- visible lesions. The computing device includes an interface configured to receive the computerized image of the organ and a processor connected to the interface. The processor is configured to segment the computerized image with a neural network, based on a voxel-wise weighted loss function, to generate segmentation masks, wherein the voxel-wise weighted loss function sums all voxel loss in the computerized image, and the voxel-wise weighted loss function is a function of (1) a weight w for a given voxel, (2) a predicted probability p that the given voxel is positive, and (3) a ground truth probability p’ that the given voxel is positive, remove organ characteristics from the computerized image, based on the segmentation masks, to obtain a cleaned organ image, and generate an enhanced image of the cleaned organ image based on a baseline value and a variation σ of the baseline value. [0009] According to yet another embodiment, there is a method for enhancing a computer tomography, CT, image of a lung of a patient to detect non-visible lesions, and the method includes receiving the CT image of the lung, segmenting the CT image with a neural network, based on a voxel-wise weighted loss function, to generate segmentation masks, wherein the voxel-wise weighted loss function sums all voxel loss in the computerized image, and the voxel-wise weighted loss function is a function of (1) a weight w for a given voxel, (2) a predicted probability p that the given voxel is positive, and (3) a ground truth probability p’ that the given voxel is positive, removing airways and blood vessels from the CT image, based on the segmentation masks, to obtain a cleaned lung image, and generating an enhanced image of the cleaned lung image based on a baseline CT value and a variation σ of the CT value.
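For orientation only, the claimed steps can be pictured as the minimal sketch below. This is a toy composition, not the disclosed implementation: the segmentation network is a placeholder callable, and the 3σ window and the [0, 1] output scaling are choices borrowed from the detailed description and from common display conventions.

import numpy as np

def enhance_organ_image(ct_volume, segment_fn):
    """Toy version of the claimed pipeline: segment, remove, enhance."""
    # segment_fn stands in for the trained neural network; it is assumed to
    # return a boolean mask of the voxels to remove (airways, vessels, known lesions).
    remove_mask = segment_fn(ct_volume)
    parenchyma = ct_volume[~remove_mask]              # cleaned organ voxels
    baseline = np.median(parenchyma)                  # baseline value
    sigma = parenchyma.std()                          # variation of the baseline value
    # Map [baseline - 3*sigma, baseline + 3*sigma] onto [0, 1] for display.
    enhanced = np.clip(ct_volume, baseline - 3 * sigma, baseline + 3 * sigma)
    return (enhanced - (baseline - 3 * sigma)) / (6 * sigma + 1e-6)

# Toy usage with a dummy "network" that removes nothing:
volume = np.random.normal(-750.0, 50.0, size=(64, 64, 64))
print(enhance_organ_image(volume, lambda v: np.zeros(v.shape, dtype=bool)).shape)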
BRIEF DESCRIPTION OF THE DRAWINGS [0010] For a more complete understanding of the present invention, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which: [0011] FIG.1A is a schematic diagram illustrating a workflow for generating a lung map from CT images, FIG.1B illustrates the various characteristics of a lung, and FIG.1C illustrates the enhanced image of the lung and associated lesions that are not visible in the CT images; [0012] FIG.2 is a flow chart of a method for enhancing lesions in a CT image; [0013] FIG.3 is a schematic diagram of a two-stage segmentation protocol that uses a feature-enhanced loss for enhancing lesions in the CT images; [0014] FIG.4A illustrates a CT scan, FIG.4B illustrates a high recall mask calculated with a novel method based on the CT scan, and FIG.4C illustrates a high precision mask, also calculated with the novel method; [0015] FIG.5 schematically illustrates a segmentation process that takes into account the volume of the organ to be studied; [0016] FIG.6A shows a chest CT scan in the lung window, FIG.6B illustrates the ground-truth annotation for airways for the chest CT scan of FIG.6A, FIG.6C illustrates an enhanced image obtained with the novel segmentation model achieving an average dice score of 0.75 on the test set, and FIG.6D illustrates the results of Zheng and colleagues' model achieving an average dice score of 0.32; [0017] FIG.7A illustrates the ground-truth annotation for blood vessels for the chest CT scan of FIG.6A, FIG.7B illustrates the image generated by the novel segmentation model, achieving an average dice score of 0.88 on the test set, and FIG.7C illustrates the results of Nam, J. G. and colleagues' model achieving an average dice score of 0.39; [0018] FIG.8 illustrates the segmentation performance for airways and pulmonary blood vessels as a scan-level average dice score on the test set of 189 CT scans from healthy people, where BV refers to pulmonary blood vessels and asterisks indicate segmentation for tiny structures (branching level > 5); [0019] FIG.9 illustrates segmentation results of the method of FIG.2 on the test set; [0020] FIG.10A illustrates a chest CT scan from a COVID-19 survivor with a St George's Respiratory Questionnaire (SGRQ) score of 27, FIG.10B illustrates the segmentation of lesions from the CT image of FIG.10A, using a traditional method, FIG.10C illustrates an enhanced counterpart of the CT scan using the method illustrated in FIG.2, and FIG.10D illustrates the visible and enhanced lesions obtained with the method of FIG.2; [0021] FIG.11 is a scatter plot showing the predicted SGRQ score by using radiomics quantified by the method of FIG.2 versus the true SGRQ score; [0022] FIG.12 illustrates the results of an ablation study showing the prediction performance when only using visible radiomics, using radiomics extracted by the method of FIG.2 but without using R1 (the median signal difference between lesions and baseline), and without using R2 (the ratio between the lesion volume and the lung volume) to predict the SGRQ score; [0023] FIG.13 illustrates comparisons between the median lesion severity (the radiomic feature R1) and the scan-level bias for the baseline (a scan-level bias removed by the method of FIG.2) on a follow-up cohort; [0024] FIG.14 illustrates segmentation performances for subvisual lesions without and with the method of FIG.2; [0025] FIG.15 schematically illustrates a computing device in which the methods and/or the neural network
discussed above may be implemented; and [0026] FIG.16 is a flow chart of a method for detecting lesions in a computer tomography, CT, image of an organ of a patient.
DETAILED DESCRIPTION OF THE INVENTION [0027] The following description of the embodiments refers to the accompanying drawings. The same reference numbers in different drawings identify the same or similar elements. The following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims. The following embodiments are discussed, for simplicity, with regard to detecting lung lesions associated with COVID-19. However, the embodiments to be discussed next are not limited to only this disease, but may be applied to other lung affecting diseases, as, for example, cancer, pneumonia, tuberculosis, etc. [0028] Reference throughout the specification to “one embodiment” or “an embodiment” means that a particular feature, structure or characteristic described in connection with an embodiment is included in at least one embodiment of the subject matter disclosed. Thus, the appearance of the phrases “in one embodiment” or “in an embodiment” in various places throughout the specification is not necessarily referring to the same embodiment. Further, the particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments. [0029] According to an embodiment, the DLPE method uses a novel (a) segmentation for pulmonary airways and blood vessels, (b) calculation of the scan- level optimal window for observing pulmonary parenchyma lesions, and (c) removal of scan-level bias for pulmonary parenchyma to better analyze lesions. Each of these three components are discussed in more detail. [0030] According to an embodiment, to extract interpretable and predictive subvisual features from CT scans of patients in general, and COVID-19 inpatients and survivors in particular, the proposed novel DLPE method follows a different logic: instead of forcing AI models to extract features that have the best discriminative power on a given dataset, the DLPE method tries to help radiologists see the unseen by enhancing the previously visually indiscernible features to a discernible level. Radiologists can thus analyze the morphologies and the origins of previously invisible lesions, and can then provide good annotations for such lesions, which become the ground truth for further training automatic segmentation and quantification models. To this end, the DLPE method first implements novel, accurate segmentation models to exclude irrelevant tissues (such as airways and blood vessels) from the lung CT. The term “segmentation” is used herein to mean assigning labels to pixels in 2D images or to voxels in 3D images, to define regions (or segmentation regions) that share a common characteristic (for example, a lesion in a 3D image). Next, the method calculates the scan-specific optimal window for observing pulmonary parenchyma, which removes patient–patient variation and system-specific bias, and substantially enhances parenchyma abnormalities compared with the lung window. With the enhanced images, the radiologists can examine the detailed morphology for subvisual lesions and provide annotations. The method then customizes the previously proposed SOTA deep learning model [4], belonging to the authors of this application, to quantify interpretable radiomics for subvisual lesions, such as the lesion volume and the lesion severity. 
The predictive power of these DLPE-detected features is then used for quantifying clinical metrics and sequelae of COVID-19 patients, based on which it is possible to further infer the pathological insights of these novel lesions. [0031] FIGs.1A to 1C schematically show the overall DLPE workflow while FIG.2 is a flow chart of the same workflow. In step 200, CT scans or images 102 (see FIG.1B) are received for analysis. The CT scans 102 may be any CT scan of a patient. In one application, the CT scans may be replaced or augmented with X-ray data, magnetic resonance imaging (MRI) data, positron emission tomography (PET) data, in essence any imaging data (i.e., computerized image) that includes information related to the lung or to another organ. The CT scan 102 shown in FIG.1B illustrates the entire lung 104, i.e., the blood vessels 106, airways 108, and parenchyma 110 (the actual lung tissue). In step 202, the received CT scan 102 is normalized, to obtain a normalized CT scan 112, as illustrated in FIG.1B. In step 204, a lung tissue dataset 114 is received, and this dataset may be used to train the neural network system. The lung tissue dataset is labeled so that the ground truth is known for the training step. In step 206, automatic segmentations of lungs, airways and blood vessels are performed based on the normalized CT scans 112. The segmentation models 116, which are used by a neural network for training, are discussed later with regard to FIG.3. The neural network is trained over the dataset 114, which in this embodiment includes 3,644 CT scans collected from patients from five different hospitals. The backbone of the segmentation model is customized over the SOTA 2.5D-based segmentation model discussed in [4], which combines the three-dimensional information of multiview two-dimensional models and thus achieves an effective tradeoff between the segmentation accuracy and model complexity (see details of this method in [5], which is incorporated herein by reference). Based on the characteristics of airways and blood vessels found in the lung, corresponding masks 118 (see FIG.1A) of these elements are calculated in step 208, and applied to the input data to remove certain characteristics, for example, the blood vessels and airways, in step 210, to generate a cleaned lung image 120. The generation of the clean lung image 120 with no blood vessels and airways (even the known lesions are removed from this image) completes the training of the neural network. Note that the cleaned lung image 120 may still include lesions, but they are not visible at this stage. The trained neural network, which is discussed next, generates an enhanced image 128, which reveals those initially invisible lesions 130. [0032] In step 212, the trained neural network is initiated for recognizing lesions in raw data, i.e., to estimate the parenchyma 122 of the lung. In step 214, the trained neural network is provided with the raw data (e.g., CT scans with no labels, from various patients) and the method uses in step 216 a feature-enhanced loss and a two-stage segmentation protocol 124, which achieve fast, robust and human-level segmentation, and make the segmentation of airways and blood vessels at different branching levels possible, as discussed later in more detail. In step 218, the DLPE method 126 generates a feature map 128 (enhanced image) and/or radiomic candidates 130 of the lungs, as illustrated in FIG.1C. The feature map 128 shows visible and previously invisible lesions 130 and enhanced parenchyma 132.
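As a rough sketch of the normalization in step 202 (its parameters are detailed in paragraph [0038] below), the resampling and intensity rescaling could look as follows. Note that scipy's zoom uses spline interpolation rather than the Lanczos interpolation named in the description, and the lung-window bounds used here are assumed example values, not figures taken from the disclosure.

import numpy as np
from scipy.ndimage import zoom

TARGET_SHAPE = (512, 512, 512)                 # standard embedding space
TARGET_SPACING = (334 / 512, 334 / 512, 1.0)   # mm per voxel after resampling

def normalize_ct(volume_hu, spacing_mm, window=(-1000.0, 0.0)):
    """Resample to the standard grid and rescale the lung window to [-0.5, 0.5]."""
    # Spatial normalization: rescale so every voxel has the standard spacing.
    factors = [s / t for s, t in zip(spacing_mm, TARGET_SPACING)]
    resampled = zoom(volume_hu, factors, order=3)
    # Pad (or crop) each axis to the 512 x 512 x 512 standard shape.
    out = np.full(TARGET_SHAPE, window[0], dtype=np.float32)
    slices = tuple(slice(0, min(a, b)) for a, b in zip(resampled.shape, TARGET_SHAPE))
    out[slices] = resampled[slices]
    # Signal normalization: linearly map the lung window onto [-0.5, 0.5].
    lo, hi = window
    return (np.clip(out, lo, hi) - lo) / (hi - lo) - 0.5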
[0033] The method discussed above advantageously removes tissues other than pulmonary parenchyma and parenchyma enhancement. Remaining tissues such as bronchiole, mediastinum and lymph glands are negligible in volume, thus inside lungs, the method only needs to remove airways and blood vessels. Parenchyma enhancement needs an accurate estimation of the baseline CT value as well as the deviation σ of the CT values for healthy parenchyma, as parenchyma voxels with outlier CT values imply abnormality. To this end, the method is configured to first remove the known lesions using the previously proposed COVID- 19 lesion segmentation model discussed in [4] and [5], and then calculate the scan- level baseline CT and the deviation σ for the healthy parenchyma (here, healthy means that the parenchyma has no visible lesions). With these scan-specific statistics, the method can then considerably enhance the parenchyma lesions compared with the lung window 102, which thus makes the previous visually indiscernible lesions visible. [0034] In one embodiment, discovery and quantification of novel subvisual lesions is achieved by this method. During the discovery of the subvisual lesions, radiologists compare the parenchyma-enhanced images 128 from COVID-19 survivors with the normal CT scans 102 of the healthy people (as control), and mark the regions that look different from healthy people. With these ground truths, the method uses the segmentation model 116/124 to gain pixel-level segmentations for the subvisual lesions, which is trained and tested over the dataset 123 containing 1,412 COVID-19 chest CT scans (1,193 inpatient scans and 219 survivor scans). Based on the segmentation, the DLPE method quantifies in step 126 several interpretable radiomics by incorporating the knowledge of radiologists, which are then evaluated in terms of their predictive power for key COVID-19 clinical metrics and sequelae. [0035] Aspects of the method discussed with regard to FIGs.1A to 2 is now discussed in more detail. The training dataset 114 used to train the DLPE method includes in this embodiment 3,644 CT scans, for lung segmentation, airway segmentation, blood vessel segmentation, heart segmentation and parenchyma baseline CT value estimation. A slice thickness ranges from 1.0 mm to 5.0 mm and all of these CT scans were not acquired from COVID-19 inpatients or survivors. [0036] After the neural network underlying the DLPE method was trained, a COVID-19 cohort 123 was analyzed. The DLPE method was applied to 69 COVID- 19 survivors and inpatients, who were under the severe or critical condition during their inpatient period (that is, they were placed in intensive care unit). For each participant, the inpatient clinical metrics, inpatient CT scans, follow-up CT scans, follow-up lung functions and follow-up laboratory tests were recorded. These survivors provided 219 CT scans collected by one of the two commercial CT scanners: Philips iCT 256 and UIH uCT 528. The slice thicknesses range from 1.0 mm to 2.5 mm. The inpatient cohort 123 contains 1,193 COVID-19 inpatient CT scans (from 633 patients) from five hospitals. The slice thickness ranges from 1.0 mm to 2.5 mm. [0037] The DLPE method is an interpretable and powerful method, which removes irrelevant tissues from the perspective of pulmonary parenchyma and intensifies the parenchyma lesions considerably compared to the lung window 102. The DLPE method can thus help radiologists discover and quantify interpretable subvisual lesions. 
This ability is based on precise three-dimensional segmentations of airways, blood vessels, lungs and known COVID-19 lesions, as these segmentations provide landmarks when DLPE samples healthy parenchyma, and reduces noise during lesion quantification. [0038] DLPE develops and integrates plural SOTA methods and uses multiple datasets. Four features of the DLPE method are now discussed: (1) CT data normalization, (2) segmentation models, (3) parenchyma enhancement, and (4) quantification of lesions. The CT data normalization was performed in step 202. Chest CT data from different scanners have different pixel spacing, slice thickness and optimal lung windows. Thus, the method applies spatial and signal normalizations to cast the data into a same, standard space, which has proven to be able to greatly improve both the robustness and accuracy in our previous research [4]. During the spatial normalization, a Lanczos interpolation can be used to scale each voxel of the chest CT scan to the standard resolution of 334/512 x 334/512 x 1.00 mm3, then pad the data into the standard shape 512x512x512. Note that the spatially normalized data 112 corresponds to the standard volume of 334 × 334 × 512 mm3, which is in practice big enough for almost all patients. During the signal normalization, the method linearly rescales the original data, which casts the lung window to the range of [−0.5,0.5]. Note that the optimal lung windows for different scanners have some differences, thus the signal normalization alleviates the system- specific bias in the datasets. [0039] Next, the segmentation models 116 are discussed. The DLPE method requires fast and precise segmentations for lungs, airways, blood vessels and COVID-19-visible lesions. The inventors have developed a SOTA COVID-19 lesion segmentation model in [4], [5], which uses a 2.5D segmentation algorithm. For the segmentation of lungs in the DLPE method, the inventors customized the 2.5D segmentation algorithm. To segment the airways and blood vessels, the inventors developed a two-stage segmentation protocol, which is based on the 2.5D segmentation algorithm from [4], [5], but uses a specifically designed loss function (called herein “feature-enhanced loss”) and a two-stage training and inference procedure. These approaches make the segmentations 116 greatly exceed the existing methods, especially for tiny structures, which enables the sampling of healthy parenchyma and removal of the irrelevant tissues. Thus, the segmentation procedure 116 relies on the 2.5D segmentation, the feature-enhanced loss, and the two-stage segmentation protocol. These three concepts are now discussed in turn in more detail. [0040] In the context of image segmentation, the purpose of the loss function is to quantify the dissimilarity between the predicted segmentation and the ground truth segmentation. It provides a measure of how well the model is performing and guides the learning process during training. Image segmentation involves dividing an image into different regions or segments based on certain criteria, such as object boundaries or semantic meaning. To train a segmentation model, the method typically has a dataset where each image is labeled with pixel-level annotations, indicating which parts of the image belong to specific classes or objects. During training, the model predicts a segmentation map for an input image, and the loss function compares this prediction with the ground truth segmentation. 
The loss function calculates the discrepancy between the predicted segmentation and the true segmentation, assigning a higher value for greater dissimilarity and a lower value for closer similarity. The choice of loss function depends on the specific segmentation task and the desired properties of the model. Some commonly used loss functions for image segmentation include Cross-Entropy Loss. This loss function measures the pixel-wise dissimilarity between the predicted segmentation and the ground truth. It is commonly used for multi-class segmentation tasks. Another loss function is the Dice Loss. Dice loss evaluates the overlap between the predicted and ground truth segmentation masks. It measures the similarity between the two masks by computing the intersection over union (IoU) metric. Yet another loss function is the Binary Cross-Entropy Loss. This loss function is used for binary segmentation tasks, where each pixel is classified as either foreground or background. It compares the predicted probability of each pixel belonging to the foreground class with the ground truth. The choice of loss function depends on the specific requirements of the segmentation task, the nature of the dataset, and the desired behavior of the model during training. [0041] The loss function calculated value in the DLPE method is used to guide the segmentation process by providing a measure of dissimilarity between the predicted segmentation and the ground truth. It quantifies how well the model is performing and serves as a feedback signal to update the model's parameters during training. An optimization algorithm, such as gradient descent, utilizes the loss function result to adjust the model's parameters in a way that reduces the discrepancy between the predicted segmentation and the ground truth. By minimizing the loss, the model aims to improve its segmentation performance and generate more accurate segmentations. During the training process, the loss function is computed for each input sample or voxel in the dataset. The gradients of the loss with respect to the model's parameters are then calculated using techniques like backpropagation. These gradients indicate the direction and magnitude of parameter updates that can reduce the loss. The model's parameters are adjusted iteratively based on the gradients of the loss function. This iterative process allows the model to learn and improve its segmentation performance by updating its internal representations and decision boundaries. By repeatedly optimizing the loss function, the model gradually learns to produce better segmentations that align with the ground truth labels. As the training progresses, the model's segmentation performance improves, and it becomes more capable of accurately delineating objects or regions of interest in new, unseen data. [0042] Thus, the loss function serves as a guiding signal for the model during training. Its result is used to compute gradients, which direct the optimization process to update the model's parameters, leading to improved segmentation performance. [0043] The 2.5D segmentation algorithm noted above combines the two- dimensional segmentation results from XY, YZ and XZ planes, and then outputs the final three-dimensional segmentation. The method used the 2D U-net to get the two- dimensional segmentation results, and further used ensemble learning to combine the 2D results from different views to generate a 3D image. 
The inventors used this algorithm to segment lungs for the DLPE method, and the segmentation reaches SOTA performance. The 2.5D segmentation algorithm, "the 2.5D model" herein, relies on the idea that all human tissues can be identified from several 2D images. For example, when segmenting airways from a CT slice, experienced radiologists only need information from the previous slice, the current slice and the posterior slice. And most lesions and organs can be identified by a single CT slice, like COVID-19 lesions, pulmonary nodules, etc. Thus, when segmenting a 2D CT slice, it is not necessary to input all the 3D data; instead, the 2D slice and its adjacent slices contain enough information, from which experienced radiologists annotated the ground truths. [0044] Thus, the 2.5D model simplifies the 3D segmentation task into 2D segmentation tasks from different views, and then fuses these 2D segmentations into the final 3D segmentation. It is true that simplifying the 3D task into 2D may lose some information, but the information loss should be negligible because the 3D ground truth is formed by stacking 2D ground truths, and the fusion of 2D segmentations from different views utilizes some 3D information. The 2.5D model contains three 2D segmentation models, and these three models are responsible for the 2D segmentations from the x-y (transverse), y-z (coronal) and x-z (sagittal) planes, respectively. In this embodiment, 2D U-nets were used for the 2D segmentations. The inputs of the 2D models, with shape ℝ^((m+n)×512×512), are formed by stacking m adjacent 2D CT slices and n guidance channels. Each 2D model outputs the segmentation probability mask 118 in ℝ^(512×512). The three 2D U-nets are denoted as f_xy for segmenting x-y planes, f_yz for segmenting y-z planes, and f_xz for segmenting x-z planes. Their inputs I_xy, I_yz and I_xz have the same shape and dimensions, ℝ^((m+n)×512×512), each formed by stacking m adjacent CT slices and n guidance channels. All 2D models are binary prediction models, and the output of each 2D model is a probability map in ℝ^(512×512), which indicates the probability of each pixel being positive for the semantic class. By stacking its probability maps over all slices, each 2D model outputs a 3D probability mask in ℝ^(512×512×512). The probability mask generated by f_xy is denoted M_xy, and similarly there are M_yz and M_xz. A combination function g is used to fuse these three probability masks into the final binary mask, so the final 3D binary mask can be presented as M = g(M_xy, M_yz, M_xz).
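A compact sketch of the fusion just described is given below, with the trained 2D U-nets represented by a generic callable that maps one slice to a probability map. The axis-to-view mapping, the average-then-threshold choice of g, and the omission of the (m+n)-channel slice stacking are all simplifications for illustration, not the inventors' exact implementation.

import numpy as np

def predict_along_axis(volume, model_2d, axis):
    """Stack per-slice probability maps into a 3D probability mask for one view."""
    moved = np.moveaxis(volume, axis, 0)                    # slices become the first axis
    probs = np.stack([model_2d(s) for s in moved], axis=0)  # per-slice probability maps
    return np.moveaxis(probs, 0, axis)                      # back to the original layout

def fuse_25d(volume, model_xy, model_yz, model_xz, threshold=0.5):
    """Final mask M = g(M_xy, M_yz, M_xz); here g averages the views and thresholds."""
    m_xy = predict_along_axis(volume, model_xy, axis=2)     # assumed axis/view mapping
    m_yz = predict_along_axis(volume, model_yz, axis=0)
    m_xz = predict_along_axis(volume, model_xz, axis=1)
    return (m_xy + m_yz + m_xz) / 3.0 > threshold

# Example with a dummy per-slice "model":
dummy = lambda s: 1.0 / (1.0 + np.exp(-s))
mask = fuse_25d(np.random.randn(16, 16, 16).astype(np.float32), dummy, dummy, dummy)
print(mask.shape, mask.dtype)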
[0045] The feature-enhanced loss, which is now discussed, achieves accurate segmentation for tiny pulmonary airways and blood vessels. The “feature-enhanced loss” is a loss function for binary segmentation. A characteristic of the feature- enhanced loss is to let the summation of false-negative penalty weights w to be equal for each branching level of pulmonary airways or blood vessels. Both the airways and blood vessels are tree-like structures with affine self-similarity, and CT images allow experienced radiologists to distinguish up to level 7-9 for airways and level 10-12 for blood vessels. The feature-enhanced loss encodes the affine self- similarity of the airways and blood vessels, and enables the model to segment tiny structures to a high accuracy. [0046] In the 2.5D segmentation algorithm, the inputs of the 2D U-nets are cross-sections of the human chest. In these cross-section images, the masks 118 of airways and blood vessels are presented as disconnected regions. The size of these regions varies greatly: cross-sections for aortas are hundreds of pixels, whereas cross-sections for tiny blood vessels are only of a few pixels. However, traditional loss functions that are based on voxel-wise performance (like voxel-level cross- entropy loss, dice loss and so on) give too little focus for tiny regions, as the area summation of all tiny regions are far less than that of big ones, which will lead to misdetections for tiny structures. Thus, the inventors introduced the novel feature- enhanced loss that helps the 2D U-nets extract features of tiny structures. [0047] The feature-enhanced loss is a voxel-level balanced cross-entropy loss function (it can also be a weighted dice-loss). It is the summation of all voxel loss values. For each voxel, the loss function is defined as:
loss(p, p′) = −[ w · p′ · log(p) + (1 − p′) · log(1 − p) ]   (1)
where p is the predicted probability that the voxel is positive (inside the structures to be segmented), p′ is the ground truth probability that the voxel is positive, which is a binary value, and w is a weight indicating the penalty for a false negative prediction of this voxel, which quantifies the model's "focus" on a positive voxel; the penalty weights for the false positives are selected to be 1 for all voxels in this embodiment (other values may also be used). Every positive voxel (p′ = 1) has a specific w, which quantifies the focus for the voxel: with higher w, the model will put more focus on the voxel. In one application, the inventors calculated the branching level for each voxel, and determined w to let the summation of w for all voxels belonging to a branching level be equal, which forms the "feature-enhanced loss." [0048] The existing methods for segmenting tiny pulmonary airways/blood vessels are mostly based on novel model architectures. For example, [6] proposed the AirwayNet architecture to capture the connectiveness of airways. The authors in [7] proposed the SGNet architecture that used a GCN to incorporate the structural prior for airways. The authors in [8] proposed the WingsNet architecture that can solve the problems of gradient erosion and dilation of the neighborhood voxels. Compared to existing solutions for the misdetection of tiny structures, the solution proposed in this embodiment is only based on the novel loss function, i.e., the "feature-enhanced loss," and does not need any modification of the model structures. One or more of the advantages of this solution are that: 1) the powerful "feature-enhanced loss" enables the method to apply the classic 2.5D segmentation architecture [4], which has undergone extensive evaluations and has proved to be very fast, robust and accurate; 2) the same protocol ("feature-enhanced loss" + classic 2.5D model) is used to achieve state-of-the-art (SOTA) segmentation for both pulmonary airways and blood vessels. By comparison, existing methods considered the segmentation of airways and blood vessels as two different problems, and a method for airways/blood vessels cannot directly apply to blood vessels/airways; for example, a model like AirwayNet [6] may not quite fit blood vessels. [0049] The idea to calculate w is as follows: airways and blood vessels have affine self-similarity, thus the method requires the summation of w for each voxel belonging to a given branching level to be the same, because the features from different branching levels are similar while different in scale (they are associated by affine transformations). The branching level is defined as follows: the biggest tube (airway or blood vessel in the lung), which is at level 0, splits into several (for example, 2) big tubes (level 1), and the level 1 tubes further split into a number of (for example, 4) level 2 tubes, and so on. In practice, CT images allow experienced radiologists to distinguish up to levels 7–9 for airways and levels 10–12 for blood vessels. The cross-section pixel number of a tube was used to approximate its branching level. Thus, the inventors found that A_l, the average cross-section pixel number for branching level l, roughly satisfies the relationship A_l = A_0 · α^l (2); for example, for blood vessels, A_0 = 589 and α = 0.411. In other words, the regions which have an area (number of pixels) within [A_(i+1), A_i) are considered to be from branching level i, and the summation of the weights w for all regions from branching level i is required to be a constant.
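Rendered as code, the per-voxel term in equation (1) is a weighted binary cross-entropy. The PyTorch sketch below assumes the weight map w has already been computed per scan (its construction from branching levels is covered in paragraphs [0049]-[0057]); the clamping constant and the toy weights in the example are implementation details, not values from the disclosure.

import torch

def feature_enhanced_loss(pred_prob, target, weight_map, eps=1e-7):
    """Sum over voxels of -[w * p' * log(p) + (1 - p') * log(1 - p)]."""
    p = pred_prob.clamp(eps, 1.0 - eps)                # predicted probability per voxel
    fn_term = weight_map * target * torch.log(p)       # false negatives penalised by w
    fp_term = (1.0 - target) * torch.log(1.0 - p)      # false positives penalised by 1
    return -(fn_term + fp_term).sum()

# Toy check on a small batch of slices:
pred = torch.rand(2, 8, 8)
gt = (torch.rand(2, 8, 8) > 0.9).float()
w = gt * 9.0 + 1.0          # stand-in weights; the real w follows equation (8) below
print(feature_enhanced_loss(pred, gt, w).item())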
[0050] Let A be the region area (measured in pixels), which is an integer equal to the number of pixels of the region, let f be the number of regions with area A in the dataset, and let r be the Pearson coefficient score. The inventors found that a power law function is a good fit for the f–A relationship, that is, f = c_0 · A^(−γ), as the log–log plot can be considered a straight line. For blood vessels, the inventors analyzed 1,594,446 regions and found γ to be 1.92, and thus the natural logarithm of the power law function is

ln f = ln c_0 − 1.92 · ln A   (3)

[0051] For airways, the inventors analyzed 402,667 regions and found γ to be 1.75, and thus the natural logarithm of the power law function is

ln f = ln c_0 − 1.75 · ln A   (4)

[0052] The cross-section area for regions between branching level i and i + 1 belongs to [A_(i+1), A_i); the total area for the branching level i to i + 1 is therefore given by:

S_i = ∫_{A_(i+1)}^{A_i} A · f(A) dA = c_0 ∫_{A_(i+1)}^{A_i} A^(1−γ) dA   (5)

[0053] If γ = 2, then

S_i = c_0 ∫_{A_(i+1)}^{A_i} A^(−1) dA = c_0 · ln(A_i / A_(i+1)) = c_0 · ln(1/α)   (6)

which is the same constant for every branching level. [0054] The physical meaning of γ is: γ < 2 means that the total area for tiny regions is small; γ > 2 means that the total area for big regions is small; and γ = 2 means that the total area is a constant for each branching level. [0055] If denoting the average w of a region with area A as w̄(A), then the sum of w for branching level i is given by:

W_i = ∫_{A_(i+1)}^{A_i} w̄(A) · A · f(A) dA   (7)

[0056] As discussed above, this feature imposes the condition that the sum of the weight w for each branching level is a constant. Thus, a solution for this constraint is

w̄(A) = C_1 · A^(γ−2)   (8)

where C_1 is any positive constant. Then, equation (7) becomes:

W_i = C_1 · c_0 ∫_{A_(i+1)}^{A_i} A^(−1) dA = C_1 · c_0 · ln(1/α)   (9)

where the quantity C_1 · c_0 · ln(1/α) is a constant. [0057] Using equations (3), (4), and (8), w̄(A) for airways follows w̄(A) ∝ A^(−0.25) and for blood vessels follows w̄(A) ∝ A^(−0.08). The total focus for one region is W_region = w̄(A) · A, and considering that the boundary pixels contain more information than inside pixels for the segmentation task, the boundary pixels are set to have a higher w; in other words, the first half of W_region is equally allocated to all pixels and then the other half of W_region
is equally added to the boundary pixels. In one application, for class balance consideration, all w is multiplied by a constant to make the total focus for positives (sum of w) equal to the total focus for negatives (sum of penalty for negatives, which equals to the number of negatives). [0058] The two-stage segmentation protocol is now discussed. Using the 2.5D segmentation algorithm with the feature-enhanced loss (both of which have been discussed above), the DLPE method achieved SOTA dice score (0.86 for airway segmentations and 0.89 for blood vessel segmentations). However, the segmentations (traditional segmentations) for small tubes are not very natural: the segmented boundaries may zigzag, and are not smooth or continuous, and the dice score for tiny structures that have branching level > 5 is not satisfactory: 0.52 for tiny airways and 0.80 for tiny blood vessels. Thus, the proposed two-stage segmentation protocol is implemented to further refine the results of the 2.5D segmentation algorithm, which dramatically improves the segmentations for tiny structures. [0059] The segmentations of the airways and the blood vessels are large- scene-small-object problem. For example, the average airway volume is 41.7 cm3, which only constitutes 0.073% of the total volume of the standard embedding space (334x334x512 mm3). For the large-scene-small-object problem, the spatial scales for the features of the background and the target vary greatly, thus the traditional segmentation model has difficulties to simultaneously extract features for background and the target. In addition, the target is very small in size, which means that its features may be influenced more by noise and thus difficult to be extracted by the model. This conforms to the experimental results: the 2.5D model with feature- enhanced loss achieves state-of-the-art performance, however, the segmentations for the tiny structures are not very natural. [0060] The novel two-stage segmentation protocol is an instance of a coarse- to-fine approach. The first-stage model 116A (or stage-one model), which is schematically illustrated in FIG.3, is a coarse model, which outputs two coarse masks: a high recall mask 118A and a high precision mask 118B. The coarse masks narrow down the search space for the second-stage model 116B by thousands of times. Thus, guided by these coarse masks, the second-stage model 116B is able to output the human comparable segmentations 128. [0061] The high recall mask 118A is gained by picking out a large number of voxels (as the initial domain for this mask is the entire CT image 102) that are predicted with highest probabilities, while the high precision mask 118B is gained by picking out a small number of voxels (as the calculated domain, from the high recall mask 118A, is much smaller than the entire CT image 102) that are predicted with highest probabilities. The segmentations of the lungs and the heart are much easier, as the inventors found that despite the volumes for airways and blood vessels of different people vary a lot, their relative volume ratio to the lungs and to the heart is stable. This data indicates that for airways, the protocol should use the lung volume to get the high recall mask and the high precision mask, while for blood vessels the protocol should use the heart volume. 
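Tying these pieces together, the per-region weights of paragraphs [0049]-[0057] could be assembled for a single 2D ground-truth slice roughly as sketched below. The connected-component labelling, the 4-neighbour boundary definition and the omission of the final class-balancing constant are simplifications; γ is the value fitted for blood vessels in the description.

import numpy as np
from scipy import ndimage

def build_weight_map(gt_slice, gamma=1.92, c1=1.0):
    """Per-pixel false-negative weights w for a binary mask (1 = vessel/airway)."""
    labels, n = ndimage.label(gt_slice)                      # connected cross-section regions
    w = np.zeros(gt_slice.shape, dtype=np.float32)
    # Boundary = positive pixels with at least one non-positive 4-neighbour.
    eroded = ndimage.binary_erosion(gt_slice.astype(bool))
    boundary = gt_slice.astype(bool) & ~eroded
    for region_id in range(1, n + 1):
        mask = labels == region_id
        area = mask.sum()                                    # A, region area in pixels
        total_focus = c1 * area ** (gamma - 1.0)             # W_region = w_bar(A) * A
        w[mask] += 0.5 * total_focus / area                  # first half spread over all pixels
        region_boundary = mask & boundary
        n_boundary = max(int(region_boundary.sum()), 1)
        w[region_boundary] += 0.5 * total_focus / n_boundary # second half to the boundary
    return w

# Example: one large and one tiny cross-section region on a toy slice.
slice_gt = np.zeros((64, 64), dtype=np.uint8)
slice_gt[5:25, 5:25] = 1
slice_gt[40:42, 40:42] = 1
print(build_weight_map(slice_gt).sum())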
[0062] Thus, the two-stage protocol 116 includes two 2.5D segmentation models 116A, 116B, which use the feature-enhanced loss function discussed above, and the two 2.5D segmentation models correspond to the first-stage model and the second-stage model. The first-stage model 116A takes the normalized CT 112 (see FIG.4A) as input, and outputs a high recall mask (recall = 0.95) 118A (see FIG.4B) and a high precision mask (precision = 0.93) 118B (see FIG.4C) separately, which narrow down the search space of the second-stage model 116B by thousands of times. The second-stage model 116B takes the normalized CT 112, the high recall mask 118A and the high precision mask 118B as inputs, and outputs the final segmentation results 128. [0063] When segmenting tiny structures, the second-stage model 116B only needs to search in a very small search space guided by the high recall 118A and the high precision 118B masks. Thus, the second-stage model 116B gives better segmentation performance, especially for tiny structures, which look natural and very similar to human segmentations. The two-stage segmentation protocol 116 reaches a mean dice score of 0.87 for airways and 0.91 for blood vessels. For tiny structures that have a branching level > 5, the dice score improvements are substantial: mean dice improves to 0.78 for tiny airways and 0.85 for tiny blood vessels. In addition, the two-stage protocol 116 considerably improves the robustness of the segmentations for airways and blood vessels. [0064] From the above results, it can be seen that the purpose for the “two- stage segmentation protocol” 116 is to achieve robust segmentation for pulmonary airways and blood vessels for chest CT with intensive lesions, e.g., CT scan from COVID-19 severe patients. The stage-one model 116A is configured to generate a precise region-of-interest (ROI) 500 for pulmonary airways/blood vessels, as schematically illustrated in FIG.5. Note that ROI 500 in FIG.5 corresponds to the high recall mask 118A and high precision mask 118, which are schematically illustrated in FIG.3. More specifically, the stage-one model 116A applies the segmentation 502 to the CT data 112 to estimate the volume of the lungs Vlungs, and heart Vheart. The stage-one model 116A also applies a dense prediction model 504 to generate a probability map 506 for pulmonary airways/blood vessels. Using the voxels with the highest probabilities in step 508, and the volumes of the lungs Vlungs, and heart Vheart in step 510, the model calculates the ROI 500. Then, based on the ROI 500, the stage-two model 116B can get robust segmentation. In one application, the stage-two model 116B is configured to work as illustrated in FIG.5, with the exception that the input is the ROI 500 instead of the image 112. [0065] An advantageous characteristic of the two-stage segmentation protocol 116 lays in the novel method to calculate the ROI 500 for pulmonary airways/blood vessels, as illustrated in FIG.5, i.e., determine the volume of ROI based on lung volume or heart volume, then select voxels with highest probabilities to form the ROI 500. With regard to how to determine the ROI volume, in one embodiment, the distribution for the volume ratio between pulmonary airways/blood vessels and the lung/heart was calculated, and it was found that the ratio vary in small sections. [0066] The inventors calculated two ROIs: the high-recall ROI (corresponding to the high recall mask 118A) and the high-precision ROI (corresponding to the high precision mask 118B). 
For airways, the volume for high-recall ROI is of 0.02458 of the lung volume, while the volume for high-precision ROI is of 0.00453 of the lung volume. For blood vessels, the volume for high-recall ROI is of 0.7030 of the heart volume, while the volume for high-precision ROI is of 0.2748 of the heart volume. [0067] The two-stage segmentation protocol 116 was developed because the inventors wanted to segment airways and blood vessels for CT scans with severe lesions, like COVID-19 severe inpatient and the existing methods were not capable to deal with such difficult cases. Thus, the two-stage segmentation protocol can achieve robust segmentation for pulmonary airways/blood vessels even for CT scans with intensive lesions. [0068] Parenchyma enhancement 128, illustrated in FIG.1C, was obtained due to the above noted novel features, i.e., use of the feature-enhanced loss, and two-stage segmentation protocol. Based on the segmentations, the DLPE method was able to remove irrelevant tissues other than pulmonary parenchyma as well as regions with known lesions. The DLPE enhancement 126 is achieved by clipping the CT data into two windows, a first window defined by [baseline, baseline + 3σ] and a second window [baseline, baseline - 3σ]. The term “baseline” means a median CT value for healthy parenchyma, which varies from -600 HU (Hounsfield unit) to -900 HU, and “σ” means the variation of the CT values for the healthy parenchyma, which is usually around 50 HU. For these calculations, the inventors randomly sampled 20,000 voxels from the remaining parenchyma of each scan. Note that when blood vessels, airways and known lesions are removed from pulmonary parenchyma, there remain few tissues such as bronchiole, mediastinum, lymph glands and so on; however, they are negligible in the volume, which means that the median of the sampled CT signals can efficiently remove such noise and provide the baseline CT value for healthy parenchyma. During the calculation of the standard deviation of the healthy parenchyma CT, the inventors discarded 20% of the largest and the lowest CT values to remove outliers and potential subvisual lesions. The optimal window center and window width for inspecting the subtle parenchyma lesions are determined by baseline CT and the standard deviation, σ. [0069] When the method discussed above was applied to COVID-19 patients, the results shown in FIGs.6A to 7D were obtained. FIG.6A shows a representative CT scan from a critically ill COVID-19 patient, for which the segmentation task is very challenging due to the strong lesion signals. FIG.6B shows the ground truth for the airways, FIG.7A shows the ground truth for the blood vessels, and FIGs.6C and 6D show the segmentation results of the DLPE and SOTA methods, respectively. Although the segmentation model in DLPE (FIG.6C) was not specifically designed and trained for the two tasks separately, it considerably outperformed both recent SOTA methods (FIG.6D) on airways (average dice score of 0.75 versus 0.32) and blood vessel segmentation (see FIGs.7B and 7C, average dice score of 0.88 versus 0.39) for critically ill COVID-19 inpatients, which demonstrates its robustness and generalization power. [0070] When segmenting airways and blood vessels for CT scans with clear parenchyma, the DLPE method also achieved a SOTA dice score, especially for tiny structures. FIG.8 shows the average dice score when segmenting CT scans from healthy people. 
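Referring back to paragraph [0068], the scan-level baseline and σ estimation, together with the clipping into the two windows, could be sketched as follows. The final rescaling of the enhanced image and the blanking of removed tissue are assumed display conventions rather than details taken from the description.

import numpy as np

def enhance_parenchyma(ct_hu, remove_mask, lung_mask, n_samples=20000, rng=None):
    """remove_mask: airways + vessels + known lesions; lung_mask: lung voxels."""
    rng = np.random.default_rng() if rng is None else rng
    candidates = ct_hu[lung_mask & ~remove_mask]              # remaining parenchyma voxels
    sample = rng.choice(candidates, size=min(n_samples, candidates.size), replace=False)
    baseline = np.median(sample)                              # baseline CT value (about -600 to -900 HU)
    trimmed = np.sort(sample)[int(0.2 * sample.size): int(0.8 * sample.size)]
    sigma = trimmed.std()                                     # variation, typically around 50 HU
    # Two windows around the baseline, [baseline, baseline + 3*sigma] and
    # [baseline - 3*sigma, baseline], here mapped into one signed image.
    enhanced = np.clip(ct_hu, baseline - 3 * sigma, baseline + 3 * sigma)
    enhanced = (enhanced - baseline) / (3 * sigma + 1e-6)     # roughly in [-1, 1]
    enhanced[~lung_mask | remove_mask] = 0.0                  # blank out removed tissue
    return enhanced, float(baseline), float(sigma)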
Deep-LungParenchyma-Enhancing detected substantially more tiny structures for airways and blood vessels than recent SOTA methods. FIG.9 shows representative segmentation results of DLPE for blood vessels and airways. [0071] The inventors used the DLPE method to analyze the COVID-19 survivor follow-up dataset (including 69 survivors, three or six months after discharge) and found substantial subvisual lesions: without DLPE, radiologists only found 3.5 cm3 of lesions on average for each survivor, whereas after being enhanced by DLPE, they found 109 cm3 of abnormalities on average. FIG.10A shows one example of a CT section of a survivor with severe respiratory sequelae (most metrics for lung functions are substantially lower than the reference value). However, the follow-up CT scan (FIG.10B) has nearly no visible lesion under the original lung window, except for the small lesions 1010. After being processed and enhanced by DLPE, as shown in FIG.10C, there are easily visible lesions 1012 while FIG.10D shows the visible and sub visual lesions 1014. [0072] It is believed that the follow-up subvisual lesions 1012 reflect mild pulmonary fibrosis. These subvisual lesions have strong correlations with sequelae related to fibrosis: more subvisual lesions means lower lung capacity, less alveolar- capillary gas conductance and a worse SGRQ score, which are all typical consequences of pulmonary fibrosis. Furthermore, pulmonary fibrosis provides good explanations for the morphologies and formations of the follow-up subvisual lesions. Similar to existing studies, the inventors observed pulmonary fibrosis under the lung window in the studied cohort. However, these fibroses (visible under the original lung window) are actually enclosed by much more subvisual lesions (invisible without DLPE). Considering that fibrosis is caused by the accumulation of fibroblasts and collagen, it is likely that only the most severe accumulation can be seen in the lung window of FIG.10A, while DLPE (FIG.10C) can unveil mild accumulation and provide much more information. [0073] In the studied cohort, the most prevalent sequela is the abnormal SGRQ score. The SGRQ score is the most frequently used and the most comprehensive quantity of life assessment for respiratory sequelae. It has 50 items with 76 weighted responses and its score ranges from 0 to 100. A higher score corresponds with a lower quality of life and the score should be less than 1 for healthy people. On the follow-up cohort, 46 survivors completed the SGRQ questionnaire, and among them 43 survivors self-reported respiratory sequelae that impacted their life quality, with an average SGRQ score of 18.6. [0074] Radiomics quantified by DLPE predict the SGRQ score with high accuracy, and the subvisual lesions provide nearly all the dominant features in the prediction. DLPE 126 quantified six interpretable radiomics and the inventors used XGBoost to predict the SGRQ score based on these features. As shown in FIG.11, the Pearson correlation coefficient (PCC) between the predicted and the ground- truth SGRQ score is 0.723 (P < 0.0001), and DLPE radiomics explain 52.3% of the variance of the SGRQ score. Only few methods in the art reported their performance for SGRQ prediction, but the SOTA model for predicting chronic obstructive pulmonary disease assessment test (a good surrogate for SGRQ17) only explained <50% of its variance with their features. 
As shown in FIG.12, if the six radiomics are calculated by visible lesions only, the PCC is 0.243 (P = 0.130), which means that DLPE plays a critical role in extracting subvisual radiomics that are essential for COVID-19 follow-up CT analysis. [0075] The inventors found that two radiomics of DLPE detected lesions are crucial for predicting the SGRQ score: the median signal difference between lesions and baseline (R1, or the median lesion severity), and the ratio between the lesion volume and the lung volume (R2). The mean absolute error (MAE) will significantly increase if either R1 (P < 0.001) or R2 (P < 0.0001) is removed from the predictive model. In addition, when predicting most of the other follow-up sequelae, DLPE radiomics consistently have one of the best discriminative powers among all features. Thus, these results strongly suggest that the subvisual lesions identified by DLPE 126 are not artefacts, but true characteristics of long-term sequelae of COVID- 19, whose radiomics can be effective indicators for quantitative analysis of COVID- 19 sequelae. [0076] To evaluate the generalization power of DLPE on other tasks, the inventors further trained and tested the DLPE 126 on the COVID-19 inpatient cohort 123, which contains 1,193 CT scans. On the inpatient CT dataset, DLPE found novel subvisual lesions (not shown) that resemble fainter ground-glass opacities, which may reflect mild plasma fluid leakages due to disruption of the epithelium of alveolar. Plasma fluid leakages usually decrease the PaO2/FiO2 ratio (PFR), which is the definitive metric when classifying COVID-19 inpatients. The inventors used clinical metrics and radiomics to predict PFR, and it was found that subvisual lesions provide important information for predicting PFR. If the radiomics are quantified only with lesions visible under the lung window, the PCC between the predicted and ground- truth PFR decreases from 0.853 to 0.760, and the mean absolute error (MAE) increases significantly, from 29.2 to 36.8 (P = 0.0040). Using DLPE, the MAE is only 29.2, which is an outstanding performance; by comparison, the MAE between the SOTA minimally invasive PFR measurement and the invasive PFR ground truth is 26.4. These results also found that the lactate dehydrogenase (LDH) and C-reactive protein (CRP) greatly decrease the MAE during PFR prediction (P < 0.0001), which conforms with previous studies. [0077] The inventors also performed ablation studies to test the novel method. The DLPE enhancement removes scan-level bias and thus, enables precise quantification of sub-visual lesions. FIG.13 compares the median lesion severity (R1, an important radiomic) with the baseline CT signal (a scan-level bias, removed by DLPE) on the follow-up cohort. Without DLPE enhancement, R1 (the left distribution of FIG.13) is dominated by the scan-level bias of the baseline (the right distribution of FIG.13). On the follow-up cohort, the variation of the baseline CT signal is 22.28-times greater than that of median lesion severity, which justifies the necessity of removing the scan-level bias. This implies that without DLPE, many features for subvisual lesions may be concealed by the scan-level bias. Each data point on the left shows R1 of a scan, whereas each data point on the right shows the baseline CT value of a scan subtracting the average baseline CT values of all scans. [0078] Further ablation studies show how the scan-level bias can hamper the quantification of subvisual lesions. 
Even with the ground-truth annotation of subvisual lesions, the DLPE enhancement is still crucial for their segmentation: the inventors compared the main segmentation model used in DLPE, that is, the 2.5D model, with other SOTA models such as MPU-net and 3D U-net, and found that the performance difference between these models is much smaller than the difference caused by whether or not the DLPE enhancement is used to remove the scan-level bias. Specifically, after DLPE enhancement, almost all existing segmentation models can segment the subvisual lesions (see FIG.14, third column, best average dice score of 0.886), whereas without the DLPE enhancement, the best average dice score among all segmentation models is only 0.612 (FIG.14, second column). It was further found that, after various models were trained with the same ground-truth annotations, only the models whose input was the DLPE-enhanced CT scans could accurately segment the subvisual lesions. Furthermore, without the DLPE enhancement to remove the scan-level bias and noise from the radiomics, the PCCs for predicting several key respiratory sequelae decrease significantly. This means that when the radiomics are quantified without DLPE enhancement, their explanatory power decreases significantly.

[0079] Thus, the DLPE method discussed herein combines the strengths of medical experts and AI through a human-in-the-loop training scheme to extract fully interpretable subvisual CT features for the pulmonary parenchyma. Deep-LungParenchyma-Enhancing can help radiologists discover, annotate and quantify novel parenchyma lesions under many scenarios, by customizing the known lesion segmentation model in the second step of the DLPE workflow for different tasks. For example, the inventors applied the DLPE scheme to the segmentation task of seven different lung diseases, including different pneumonias, tuberculosis, pulmonary nodules and lung cancers. It was found that DLPE can provide robust enhancement and reliable segmentation for various lung diseases, which demonstrates its generalization power and potential clinical usefulness.

[0080] In the above-discussed embodiments, the inventors applied DLPE to the COVID-19 inpatient and follow-up datasets and discovered interpretable subvisual lesions. The pathological explanations of these novel COVID-19 lesions are mutually corroborated by the analyses relating the radiomics to key clinical metrics. In the studied follow-up cohort, 97% of the lesions are subvisual, which makes them one of the most important culprits of the COVID-19 respiratory sequelae.

[0081] The term “about” is used in this application to mean a variation of up to 20% of the parameter characterized by this term. It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first object or step could be termed a second object or step, and, similarly, a second object or step could be termed a first object or step, without departing from the scope of the present disclosure. The first object or step, and the second object or step, are both objects or steps, respectively, but they are not to be considered the same object or step.

[0082] The terminology used in the description herein is for the purpose of describing particular embodiments and is not intended to be limiting.
As used in this description and the appended claims, the singular forms "a," "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and encompasses any possible combinations of one or more of the associated listed items. It will be further understood that the terms "includes," "including," "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Further, as used herein, the term "if" may be construed to mean "when" or "upon" or "in response to determining" or "in response to detecting," depending on the context.

[0083] The above-discussed methods and algorithms can be implemented in a computing system 1500 as illustrated in FIG.15. The depiction of system 1500 is not intended to limit or otherwise confine the embodiments described and contemplated herein to any particular configuration of elements or systems, nor is it intended to exclude any alternative configurations or systems for the set of configurations and systems that can be used in connection with embodiments of the present invention. Rather, FIG.15 and the computing device 1500 disclosed therein are merely presented to provide an example basis and context for the facilitation of some of the features, aspects, and uses of the methods, apparatuses, and computer program products disclosed and contemplated herein. In particular, the computing device 1500 may implement a neural network. It will be understood that, while many of the aspects and components presented in FIG.15 are shown as discrete, separate elements, other configurations may be used in connection with the methods, apparatuses, and computer programs described herein, including configurations that combine, omit, and/or add aspects and/or components.

[0084] It will be appreciated that all of the components shown in FIG.15 may be configured to communicate over any wired or wireless communication network, including a wired or wireless local area network (LAN), personal area network (PAN), metropolitan area network (MAN), wide area network (WAN), or the like, as well as interface with any attendant hardware, software and/or firmware required to implement such networks (such as network routers and network switches, for example). For example, a cellular telephone network, an 802.11, 802.16, 802.20 and/or WiMAX network, a public network such as the Internet, a private network such as an intranet, or combinations thereof, together with any networking protocols now available or later developed (including, but not limited to, TCP/IP-based networking protocols), may be used in connection with system 1500 and embodiments of the invention that may be implemented therein or participate therein.

[0085] The above-discussed procedures and methods may be implemented in a computing device as illustrated in FIG.15. Hardware, firmware, software or a combination thereof may be used to perform the various steps and operations described herein. The computing device 1500 is suitable for performing the activities described in the above embodiments and may include a server 1501.
Such a server 1501 may include a central processor (CPU) 1502 coupled to a random access memory (RAM) 1504 and to a read-only memory (ROM) 1506. The ROM 1506 may also be other types of storage media to store programs, such as programmable ROM (PROM), erasable PROM (EPROM), etc. The processor 1502 may communicate with other internal and external components through input/output (I/O) circuitry 1508 and bussing 1510 to provide control signals and the like. The processor 1502 carries out a variety of functions as are known in the art, as dictated by software and/or firmware instructions.

[0086] The server 1501 may also include one or more data storage devices, including hard drives 1512, CD-ROM drives 1514 and other hardware capable of reading and/or storing information, such as DVD, etc. In one embodiment, software for carrying out the above-discussed steps may be stored and distributed on a CD-ROM or DVD 1516, a USB storage device 1518 or other form of media capable of portably storing information. These storage media may be inserted into, and read by, devices such as the CD-ROM drive 1514, the disk drive 1512, etc. The server 1501 may be coupled to a display 1520, which may be any type of known display or presentation screen, such as an LCD, a plasma display, a cathode ray tube (CRT), etc. A user input interface 1522 is provided, including one or more user interface mechanisms such as a mouse, keyboard, microphone, touchpad, touch screen, voice-recognition system, etc.

[0087] The server 1501 may be coupled to other devices, such as a CT scanner, an MRI machine, or any other data imaging system. The server may be part of a larger network configuration as in a global area network (GAN), such as the Internet 1528, which allows ultimate connection to various landline and/or mobile computing devices.

[0088] As described above, the apparatus 1500 may be embodied by a computing device. However, in some embodiments, the apparatus may be embodied as a chip or chip set. In other words, the apparatus may comprise one or more physical packages (e.g., chips) including materials, components and/or wires on a structural assembly (e.g., a baseboard). The structural assembly may provide physical strength, conservation of size, and/or limitation of electrical interaction for component circuitry included thereon. The apparatus may therefore, in some cases, be configured to implement an embodiment of the present invention on a single chip or as a single “system on a chip.” As such, in some cases, a chip or chipset may constitute means for performing one or more operations for providing the functionalities described herein.

[0089] The processor 1502 may be embodied in a number of different ways. For example, the processor may be embodied as one or more of various hardware processing means such as a coprocessor, a microprocessor, a controller, a digital signal processor (DSP), a processing element with or without an accompanying DSP, or various other processing circuitry including integrated circuits such as, for example, an ASIC (application specific integrated circuit), an FPGA (field programmable gate array), a microcontroller unit (MCU), a hardware accelerator, a special-purpose computer chip, or the like. As such, in some embodiments, the processor may include one or more processing cores configured to perform independently. A multi-core processor may enable multiprocessing within a single physical package.
Additionally or alternatively, the processor may include one or more processors configured in tandem via the bus to enable independent execution of instructions, pipelining and/or multithreading.

[0090] In an example embodiment, the processor 1502 may be configured to execute instructions stored in the memory device 1504 or otherwise accessible to the processor. Alternatively or additionally, the processor may be configured to execute hard-coded functionality. As such, whether configured by hardware or software methods, or by a combination thereof, the processor may represent an entity (e.g., physically embodied in circuitry) capable of performing operations according to an embodiment of the present invention while configured accordingly. Thus, for example, when the processor is embodied as an ASIC, FPGA or the like, the processor may be specifically configured hardware for conducting the operations described herein. Alternatively, as another example, when the processor is embodied as an executor of software instructions, the instructions may specifically configure the processor to perform the algorithms and/or operations described herein when the instructions are executed. However, in some cases, the processor may be a processor of a specific device (e.g., a pass-through display or a mobile terminal) configured to employ an embodiment of the present invention by further configuration of the processor by instructions for performing the algorithms and/or operations described herein. The processor may include, among other things, a clock, an arithmetic logic unit (ALU) and logic gates configured to support operation of the processor.

[0091] A method for enhancing a computerized image of an organ of a patient to detect non-visible lesions is now discussed with regard to FIG.16. The method includes a step 1600 of receiving the computerized image of the organ; a step 1602 of segmenting the computerized image with a neural network, based on a voxel-wise weighted loss function, to generate segmentation masks, wherein the voxel-wise weighted loss function sums all voxel losses in the computerized image, and the voxel-wise weighted loss function is a function of (1) a weight w for a given voxel, (2) a predicted probability p that the given voxel is positive, and (3) a ground-truth probability p’ that the given voxel is positive; a step 1604 of removing organ characteristics from the computerized image, based on the segmentation masks, to obtain a cleaned organ image; and a step 1606 of generating an enhanced image of the cleaned organ image based on a baseline value and a variation σ of the baseline value.

[0092] The step of segmenting may include applying a stage-one segmentation model to the computerized image to generate a high recall mask and a high precision mask, and applying a stage-two segmentation model to the computerized image combined with the high recall mask and the high precision mask to generate the enhanced image. In one application, the stage-one segmentation model calculates a region of interest based on a heart or lung volume of the patient.

[0093] The method may further include selecting the voxels with the highest probabilities p to form the region of interest, and/or calculating a branching level for each voxel of the computerized image and setting the sum of the weights w for all voxels having the same branching level to be a constant. The organ characteristic represents blood vessels of the organ or pulmonary airways.
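By way of a non-limiting illustration, a minimal sketch of one possible instantiation of such a voxel-wise weighted loss function is given below. The description above only requires the loss to be a function of the weight w, the predicted probability p and the ground-truth probability p’, summed over all voxels; the cross-entropy form, the array layout and the helper for branching-level weights are assumptions made for this sketch, not the only form contemplated by the embodiments.

import numpy as np

def branching_level_weights(branch_level: np.ndarray, per_level_total: float = 1.0) -> np.ndarray:
    # Assign a weight w to every voxel so that the weights of all voxels sharing
    # the same branching level sum to the same constant (per_level_total).
    weights = np.zeros(branch_level.shape, dtype=float)
    for level in np.unique(branch_level):
        voxels_at_level = (branch_level == level)
        weights[voxels_at_level] = per_level_total / voxels_at_level.sum()
    return weights

def voxel_wise_weighted_loss(p: np.ndarray, p_true: np.ndarray, w: np.ndarray) -> float:
    # Cross-entropy-style loss summed over all voxels of the image: the
    # false-negative term is scaled by the per-voxel weight w, while the
    # false-positive penalty is fixed to one, as described for one application.
    eps = 1e-7
    p = np.clip(p, eps, 1.0 - eps)
    false_negative_term = -w * p_true * np.log(p)
    false_positive_term = -(1.0 - p_true) * np.log(1.0 - p)
    return float(np.sum(false_negative_term + false_positive_term))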
In one application, the same voxel-wise weighted loss function works for each of the blood vessels and the pulmonary airways. In the same or another application, the weight w represents a false-negative penalty, and the false-positive penalty is selected to be one for the voxel-wise weighted loss function.

[0094] In one application, the enhanced image has a center at the baseline value and a width equal to +3 times the variation σ of the baseline value, or the enhanced image has a center at the baseline value and a width equal to -3 times the variation σ of the baseline value. In this or another application, the baseline value is a median signal of sampled voxels in the cleaned organ image, which is obtained by removing the organ characteristics from the computerized image, and the variation σ of the baseline value is a standard deviation value. The computerized image is a computer tomography image, or a magnetic resonance image, or a positron emission tomography image.

[0095] The disclosed embodiments provide a method and system that can be used for determining lung diseases. The method and system may also be used as a pre-processing model for pulmonary parenchyma lesion analysis. The method calculates the baseline CT value for healthy pulmonary parenchyma and removes irrelevant tissues. The method can segment pulmonary blood vessels and airways, and many clinical diagnoses require these segmentations. For diagnosing pulmonary nodules, doctors need to know how the nodule interacts with the blood vessels. For diagnosing a pulmonary embolism, doctors need to visualize the 3D structure of the blood vessels and can then directly see the embolism location in the 3D blood vessel visualization. The methods discussed above provide a better way to observe pulmonary parenchyma lesions, which may aid the diagnosis, especially for lung nodules or cancers, as they may be quite faint. Thus, the methods discussed herein can better quantify the lesions and can help those in the medical profession more accurately diagnose lung-related diseases. It should be understood that this description is not intended to limit the invention. On the contrary, the embodiments are intended to cover alternatives, modifications and equivalents, which are included in the spirit and scope of the invention as defined by the appended claims. Further, in the detailed description of the embodiments, numerous specific details are set forth in order to provide a comprehensive understanding of the claimed invention. However, one skilled in the art would understand that various embodiments may be practiced without such specific details.

[0096] Although the features and elements of the present embodiments are described in the embodiments in particular combinations, each feature or element can be used alone without the other features and elements of the embodiments or in various combinations with or without other features and elements disclosed herein.

[0097] This written description uses examples of the subject matter disclosed to enable any person skilled in the art to practice the same, including making and using any devices or systems and performing any incorporated methods. The patentable scope of the subject matter is defined by the claims, and may include other examples that occur to those skilled in the art. Such other examples are intended to be within the scope of the claims.
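To make the enhancement step summarized in paragraphs [0091] and [0094] concrete, the following minimal sketch computes the baseline value and its variation from a cleaned organ image and rescales the display window around the baseline. The array names, the symmetric baseline ± 3σ window and the normalization to [0, 1] are assumptions made for this illustration and are not the only form contemplated by the embodiments.

import numpy as np

def enhance_parenchyma(ct: np.ndarray, parenchyma_mask: np.ndarray,
                       airway_mask: np.ndarray, vessel_mask: np.ndarray) -> np.ndarray:
    # Step 1604: remove the organ characteristics (airways and blood vessels)
    # from the parenchyma to obtain the cleaned organ image.
    cleaned_voxels = parenchyma_mask & ~airway_mask & ~vessel_mask
    samples = ct[cleaned_voxels]

    # Baseline value: median signal of the sampled voxels in the cleaned image;
    # variation sigma: standard deviation of the same samples.
    baseline = float(np.median(samples))
    sigma = float(np.std(samples))

    # Step 1606: window the image around the baseline (here, baseline +/- 3*sigma)
    # and rescale so that faint deviations from the baseline become visible.
    low, high = baseline - 3.0 * sigma, baseline + 3.0 * sigma
    windowed = np.clip(ct, low, high)
    return (windowed - low) / max(high - low, 1e-6)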
References

The entire content of all the publications listed herein is incorporated by reference in this patent application.

[1] Mannil, M. et al. Texture analysis and machine learning for detecting myocardial infarction in noncontrast low-dose computed tomography: unveiling the invisible. Invest. Radiol. 53, 338–343 (2018).
[2] Savadjiev, P. et al. Demystification of AI-driven medical image interpretation: past, present and future. Eur. Radiol. 29, 1616–1624 (2019).
[3] Pesapane, F. et al. Artificial intelligence in medical imaging: threat or opportunity? Radiologists again at the forefront of innovation in medicine. Eur. Radiol. Exp. 2, 35 (2018).
[4] Zhou, L. et al. A rapid, accurate and machine-agnostic segmentation and quantification method for CT-based COVID-19 diagnosis. IEEE Trans. Med. Imaging 39, 2638–2652 (2020).
[5] International Patent Application WO 2021/209887.
[6] Qin, Y. et al. (2019). AirwayNet: A Voxel-Connectivity Aware Approach for Accurate Airway Segmentation Using Convolutional Neural Networks. In: Shen, D. et al. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2019.
[7] Tan, Z., Feng, J., Zhou, J. (2021). SGNet: Structure-Aware Graph-Based Network for Airway Semantic Segmentation. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2021.
[8] Zheng, H. et al. "Alleviating Class-Wise Gradient Imbalance for Pulmonary Airway Segmentation," IEEE Transactions on Medical Imaging, vol. 40, no. 9, pp. 2452–2462, Sept. 2021.

Claims

WHAT IS CLAIMED IS:

1. A method for enhancing a computerized image (102) of an organ of a patient to detect non-visible lesions, the method comprising:
receiving (1600) the computerized image (102) of the organ;
segmenting (1602) the computerized image (102) with a neural network (1500), based on a voxel-wise weighted loss function, to generate segmentation masks (118), wherein the voxel-wise weighted loss function sums all voxel loss in the computerized image (102), and the voxel-wise weighted loss function is a function of (1) a weight w for a given voxel, (2) a predicted probability p that the given voxel is positive, and (3) a ground truth probability p’ that the given voxel is positive;
removing (1604) organ characteristics from the computerized image (102), based on the segmentation masks (118), to obtain a cleaned organ image (120); and
generating (1606) an enhanced image (128) of the cleaned organ image (120) based on a baseline value and a variation σ of the baseline value.
2. The method of Claim 1, wherein the step of segmenting comprises: applying a stage-one segmentation model to the computerized image to generate a high recall mask and a high precision mask; and applying a stage-two segmentation model to the computerized image combined with the high recall mask and the high precision mask to generate the enhanced image.
3. The method of Claim 2, wherein the stage-one segmentation model calculates a region of interest based on a heart or lung volume of the patient.
4. The method of Claim 3, further comprising: selecting the voxels with highest probabilities p to form the region of interest.
5. The method of Claim 1, further comprising: calculating a branching level for each voxel of the computerized image; and setting a sum of the weights w for all voxels having the same branching level to be a constant.
6. The method of Claim 1, wherein the organ characteristic represents blood vessels of the organ or pulmonary airways.
7. The method of Claim 6, wherein the same voxel-wise weighted loss function works for each of the blood vessels and the pulmonary airways.
8. The method of Claim 5, wherein the weight w represents a false-negative penalty.
9. The method of Claim 8, wherein a false-positive penalty is selected to be one for the voxel-wise weighted loss function.
10. The method of Claim 1, wherein the enhanced image has a center at the baseline value, and the enhanced image has a width equal to +3 times the variation σ of the baseline value, or the enhanced image has a center at the baseline value, and the enhanced image has a width equal to -3 times the variation σ of the value.
11. The method of Claim 1, wherein the baseline value is a median signal of sampled voxels in the cleaned organ image, which is obtained by removing the organ characteristics from the computerized image, and the variation σ of the baseline value is a standard deviation value.
12. The method of Claim 1, wherein the computerized image is a computer tomography image, or a magnetic resonance image, or a positron emission tomography image.
13. A computing device (1500) configured to enhance a computerized image (102) of an organ of a patient to detect non-visible lesions, the computing device (1500) comprising:
an interface (1508) configured to receive (1600) the computerized image (102) of the organ; and
a processor (1502) connected to the interface (1508) and configured to,
segment (1602) the computerized image (102) with a neural network (1500), based on a voxel-wise weighted loss function, to generate segmentation masks (118), wherein the voxel-wise weighted loss function sums all voxel loss in the computerized image (102), and the voxel-wise weighted loss function is a function of (1) a weight w for a given voxel, (2) a predicted probability p that the given voxel is positive, and (3) a ground truth probability p’ that the given voxel is positive;
remove (1604) organ characteristics from the computerized image (102), based on the segmentation masks (118), to obtain a cleaned organ image (120); and
generate (1606) an enhanced image (128) of the cleaned organ image (120) based on a baseline value and a variation σ of the baseline value.
14. The computing device of Claim 13, wherein the processor is further configured to: apply a stage-one segmentation model to the computerized image to generate a high recall mask and a high precision mask; and apply a stage-two segmentation model to the computerized image combined with the high recall mask and the high precision mask, to generate the enhanced image.
15. The computing device of Claim 13, wherein the processor is further configured to, calculate a branching level for each voxel of the computerized image; and set a sum of the weights w for all voxels having the same branching level to be a constant.
16. The computing device of Claim 13, wherein the organ characteristic represents blood vessels of the organ or pulmonary airways.
17. The computing device of Claim 13, wherein the enhanced image has a center at the baseline value, and the enhanced image has a width equal to +3 times the variation σ of the baseline value, or the enhanced image has a center at the baseline value, and the enhanced image has a width equal to -3 times the variation σ of the value.
18. The computing device of Claim 13, wherein the baseline value is a median signal of sampled voxels in the cleaned organ image, which is obtained by removing the organ characteristics from the computerized image, and the variation σ of the baseline value is a standard deviation value.
19. The computing device of Claim 13, wherein the computerized image is a computer tomography image, or a magnetic resonance image, or a positron emission tomography image.
20. A method for enhancing a computer tomography, CT, image (102) of a lung of a patient to detect non-visible lesions, the method comprising:
receiving (1600) the CT image (102) of the lung;
segmenting (1602) the CT image (102) with a neural network (1500), based on a voxel-wise weighted loss function, to generate segmentation masks (118), wherein the voxel-wise weighted loss function sums all voxel loss in the CT image (102), and the voxel-wise weighted loss function is a function of (1) a weight w for a given voxel, (2) a predicted probability p that the given voxel is positive, and (3) a ground truth probability p’ that the given voxel is positive;
removing (1604) airways and blood vessels from the CT image (102), based on the segmentation masks (118), to obtain a cleaned lung image (120); and
generating (1606) an enhanced image (128) of the cleaned lung image (120) based on a baseline CT value and a variation σ of the CT value.
PCT/IB2023/055303 2022-05-23 2023-05-23 System and method for determining pulmonary parenchyma baseline value and enhance pulmonary parenchyma lesions WO2023228085A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202263344849P 2022-05-23 2022-05-23
US63/344,849 2022-05-23

Publications (1)

Publication Number Publication Date
WO2023228085A1 true WO2023228085A1 (en) 2023-11-30

Family

ID=86895805

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2023/055303 WO2023228085A1 (en) 2022-05-23 2023-05-23 System and method for determining pulmonary parenchyma baseline value and enhance pulmonary parenchyma lesions

Country Status (1)

Country Link
WO (1) WO2023228085A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021209887A1 (en) 2020-04-13 2021-10-21 King Abdullah University Of Science And Technology Rapid, accurate and machine-agnostic segmentation and quantification method and device for coronavirus ct-based diagnosis
CN114820571A (en) * 2022-05-21 2022-07-29 东北林业大学 Pneumonia fibrosis quantitative analysis method based on DLPE algorithm

Non-Patent Citations (13)

* Cited by examiner, † Cited by third party
Title
H. ZHENG ET AL.: "Alleviating Class-Wise Gradient Imbalance for Pulmonary Airway Segmentation", IEEE TRANSACTIONS ON MEDICAL IMAGING, vol. 40, no. 9, September 2021 (2021-09-01), pages 2452 - 2462, XP011875317, DOI: 10.1109/TMI.2021.3078828
HARTMANN I J C ET AL: "Imaging of acute pulmonary embolism using multi-detector CT angiography: An update on imaging technique and interpretation", EUROPEAN JOURNAL OF RADIOLOGY, ELSEVIER SCIENCE, NL, vol. 74, no. 1, 1 April 2010 (2010-04-01), pages 40 - 49, XP026999450, ISSN: 0720-048X, [retrieved on 20100401] *
MANNIL, M ET AL.: "Texture analysis and machine learning for detecting myocardial infarction in noncontrast low-dose computed tomography: unveiling the invisible", INVEST. RADIOL., vol. 53, 2018, pages 338 - 343, XP055688387, DOI: 10.1097/RLI.0000000000000448
PESAPANE, F ET AL.: "Artificial intelligence in medical imaging: threat or opportunity? Radiologists again at the forefront of innovation in medicine", EUR. RADIOL. EXP., vol. 2, 2018, pages 35, XP055610408, DOI: 10.1186/s41747-018-0061-6
PU JIANTAO ET AL: "Automated quantification of COVID-19 severity and progression using chest CT images", EUROPEAN RADIOLOGY, vol. 31, no. 1, 13 August 2020 (2020-08-13), pages 436 - 446, XP037319344, ISSN: 0938-7994, DOI: 10.1007/S00330-020-07156-2 *
QIN Y ET AL.: "Medical Image Computing and Computer Assisted Intervention - MICCAI 2019", 2019, article "AirwayNet: A Voxel-Connectivity Aware Approach for Accurate Airway Segmentation Using Convolutional Neural Networks"
SAVADJIEV, P ET AL.: "Demystification of Al-driven medical image interpretation: past, present and future", EUR. RADIOL., vol. 29, 2019, pages 1616 - 1624, XP036688887, DOI: 10.1007/s00330-018-5674-x
SHRUTI JADON: "A survey of loss functions for semantic segmentation", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 3 September 2020 (2020-09-03), XP081753867 *
TAN, ZFENG, JZHOU, J: "SGNet: Structure-Aware Graph-Based Network for Airway Semantic Segmentation", MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION - MICCAI 2021, 2021
WEIYI XIE ET AL: "Dense Regression Activation Maps For Lesion Segmentation in CT scans of COVID-19 patients", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 18 November 2021 (2021-11-18), XP091090681 *
ZHOU LONGXI ET AL: "An interpretable deep learning workflow for discovering subvisual abnormalities in CT scans of COVID-19 inpatients and survivors", 23 May 2022 (2022-05-23), pages 1 - 21, XP093065296, Retrieved from the Internet <URL:https://www.nature.com/articles/s42256-022-00483-7.pdf> [retrieved on 20230719], DOI: 10.1038/s42256-022-00483-7 *
ZHOU LONGXI ET AL: "LongxiZhou/DLPE-method: A comprehensive platform for analyzing pulmonary parenchyma lesions on chest CT", 27 March 2022 (2022-03-27), pages 1 - 196, XP093067711, Retrieved from the Internet <URL:https://zenodo.org/record/6387701> [retrieved on 20230726], DOI: 10.5281/zenodo.6387701 *
ZHOU, L ET AL.: "A rapid, accurate and machine-agnostic segmentation and quantification method for CT-based COVID-19 diagnosis", IEEE TRANS. MED. IMAGING, vol. 39, 2020, pages 2638 - 2652, XP011801694, DOI: 10.1109/TMI.2020.3001810
