AU2021100007A4 - Deep Learning Based System for the Detection of COVID-19 Infections - Google Patents

Deep Learning Based System for the Detection of COVID-19 Infections

Info

Publication number
AU2021100007A4
Authority
AU
Australia
Prior art keywords
covid
infection
disease
infections
lung
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2021100007A
Inventor
Divya Preetha Aravindan
Prabhu kavin B
Dinku Worku Debele
Preetha Dulles
Venkataramana Sagar G
Mudassir Khan
Madiajagan M
Ramakrishna MM
Sobhana Mummaneni
Subramani T.
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dulles Preetha Mrs
Aravindan Divya Preetha Miss
B Prabhu Kavin Dr
G Venkataramana Sagar Dr
Mummaneni Sobhana Dr
Original Assignee
Dulles Preetha Mrs
Aravindan Divya Preetha Miss
B Prabhu Kavin Dr
G Venkataramana Sagar Dr
M Madiajagan Dr
Mummaneni Sobhana Dr
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dulles Preetha Mrs, Aravindan Divya Preetha Miss, B Prabhu Kavin Dr, G Venkataramana Sagar Dr, M Madiajagan Dr, Mummaneni Sobhana Dr
Priority to AU2021100007A
Application granted
Publication of AU2021100007A4
Legal status: Ceased
Anticipated expiration


Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B 6/50 Clinical applications
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B 6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B 6/5211 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B 6/5217 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data extracting a diagnostic or physiological parameter from medical diagnostic data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 2505/00 Evaluating, monitoring or diagnosing in the context of a particular type of medical care
    • A61B 2505/01 Emergency care
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 2505/00 Evaluating, monitoring or diagnosing in the context of a particular type of medical care
    • A61B 2505/03 Intensive care
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 6/00 Apparatus for radiation diagnosis, e.g. combined with radiation therapy equipment
    • A61B 6/02 Devices for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B 6/03 Computerised tomographs
    • A61B 6/032 Transmission computed tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/03 Recognition of patterns in medical or anatomical images
    • G06V 2201/031 Recognition of patterns in medical or anatomical images of internal organs
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/80 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for detecting, monitoring or modelling epidemics or pandemics, e.g. flu

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Theoretical Computer Science (AREA)
  • Public Health (AREA)
  • Radiology & Medical Imaging (AREA)
  • General Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Veterinary Medicine (AREA)
  • Data Mining & Analysis (AREA)
  • Animal Behavior & Ethology (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Optics & Photonics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • Quality & Reliability (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Physiology (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

Deep Learning Based System for the Detection of COVID-19 Infections. Coronavirus disease has spread to millions of people worldwide, putting high pressure on health system facilities. PCR screening is the adopted diagnostic testing method for COVID-19 identification. Even in asymptomatic patients, CT imaging has shown its ability to diagnose the disease, making it a trustworthy diagnostic support tool for COVID-19. The appearance of COVID-19 infections in CT slices offers high potential for supporting disease evolution tracking using automated infection segmentation methods. However, COVID-19 infection areas contain high variations in scale, shape, contrast and intensity homogeneity, which pose a significant challenge to segmentation. This invention proposes an automated deep learning segmentation method to detect and delineate COVID-19 infections in CT scans. Compared with existing approaches, the proposed invention also consumes less time and cost in detecting the infection and in annotating the areas of infection. By segmenting the lung organ from the CT scan as a region of interest and then segmenting the infections within it, the invention helps doctors determine the progression of COVID-19 and measure the burden and seriousness of the disease. (Fig. 1)

Description

Fig. 1: Target CT Slice → Lung ROI → EED Intensity Enhancement → Trained FCN → COVID-19 Infection
TITLE OF THE INVENTION Deep Learning Based System for the Detection of COVID-19 Infections
FIELD OF THE INVENTION
[001]. The present disclosure generally relates to a deep learning based system for the detection of COVID-19 infections in humans, which helps doctors determine the progression of COVID-19 and measure the burden and seriousness of the disease.
BACKGROUND OF THE INVENTION
[002]. Coronavirus disease has spread to millions of people worldwide and has infected them at an increasing rate, putting high pressure on health system facilities. PCR screening is the adopted diagnostic testing method for COVID-19 identification. Even in asymptomatic patients, CT imaging has shown its ability to diagnose the disease, making it a trustworthy diagnostic support tool for COVID-19. The appearance of COVID-19 infections in CT slices offers high potential for supporting disease evolution tracking using automated infection segmentation methods. However, COVID-19 infection areas contain high variations in scale, shape, contrast and intensity homogeneity, which pose a significant challenge to segmentation. This invention proposes an automated deep learning segmentation method to detect and delineate COVID-19 infections in CT scans.
[003]. The primary objective of this invention is to propose a system for detecting COVID-19 infection that helps doctors determine the progression of the disease and measure its burden and seriousness.
SUMMARY OF THE INVENTION
[004]. The novel coronavirus disease (COVID-19) pandemic affects 213 countries and territories worldwide, along with two international conveyances. According to statistics from the COVID-19 dashboard of the Johns Hopkins University Center for Systems Science and Engineering (CSSE), more than 19,172,505 cases of COVID-19 have been registered (and the number is increasing), including 716,327 fatalities. The high infectiousness of COVID-19 explains the rapid rise in reported cases. COVID-19 leads to severe respiratory problems and presents with various symptoms, including fever, cough and fatigue. In particular, these symptoms can develop into severe pneumonia in people with weakened immune systems.
[005]. The most widely used test to detect viral RNA, using a nasopharyngeal swab, is the reverse transcription polymerase chain reaction (RT-PCR) test. However, numerous studies have indicated that the RT-PCR test has elevated false-negative rates, so repeated testing is needed for an accurate diagnosis. In addition, due to the scarcity of testing supplies, the RT-PCR test has availability limitations, and the testing process is also time-consuming, restricting rapid and reliable screening. Computed tomography (CT) is considered a supporting method to RT-PCR for COVID-19 screening, as high proportions of positive CT scans were obtained from infected patients.
[006]. CT scanning is regarded as a promising and effective alternative method for the detection and control of COVID-19 disease compared to other forms of tests. For COVID-19 diagnosis, CT imaging was recommended and chest CT screening was explicitly used as a routine diagnostic method for pneumonia. Chest CT scanning has shown efficacy in the diagnosis of coronavirus disease, including follow-up evaluation and monitoring of disease progression. Diagnostic studies using CT scanning on patients with COVID-19 disease suggest that areas of infection can appear on CT scans before symptoms of the disease appear.
[007]. These COVID-19 infection areas may therefore be identified in asymptomatic patients by examining ground glass opacity (GGO) and signs of pulmonary consolidation that may occur at different stages of the disease. By suggesting methods to classify the prevalent patterns of infection, such as ground glass opacity (GGO) and pulmonary consolidations, visual CT imaging analysis may assist in COVID-19 disease diagnosis. Systems in this area address three separate image processing tasks. First, CT classification, where the patient is marked as having or not having the disease.
[008]. Second, the identification of disease infections, where bounding boxes highlight the infection areas. The third task is segmentation of the infection region and measurement of disease burden by applying pixel-level classification. Manual segmentation of COVID-19 infections is a time-consuming and tedious process. Moreover, it relies heavily on the abilities of the physician or doctor who conducts the segmentation task. An automated approach is beneficial for COVID-19 infection segmentation, as it is, preferably, more objective and reduces reliance on human skills. The development of deep fully convolutional networks (FCNs) has recently improved the performance of semantic segmentation due to advances in computer vision, leading to strong competitors in the field of medical imaging. A general FCN focuses its task on classifying images, where an image is the input and one label is the output.
[009]. However, in the COVID-19 chest CT study, the field of abnormality needs to be localised and segmented in addition to the classification. In order to assist clinicians and radiologists in diagnostic and prognostic activities, researchers began using FCN for COVID-19 disease, which resulted in enhancing accuracy and reducing inspection time. Many researchers have suggested deep learning systems to help tackle the elevated spread of COVID-19 disease.
[0010]. Most recently proposed deep learning systems are based on the identification (classification) of patients infected with COVID-19 disease by CT screening, owing to the availability of a large number of CT scans and because classification does not require scarce radiological annotations. COVID-Net, introduced in the literature, is a deep neural network tailored to the identification of COVID-19 cases from chest X-ray images; it is open source and accessible to the general public, and serves as an example of COVID-19 classification diagnostic systems.
[0011]. Other research proposed a 3D deep learning system equipped to differentiate COVID-19 pneumonia from Influenza-A viral pneumonia and from healthy cases using pulmonary CT images. To detect COVID-19, another weakly-supervised deep learning-based software system was developed using 3D CT volumes. To help COVID-19 diagnosis in clinical practice, several deep learning systems have been suggested, but few of them address the delineation of infection from CT scans. Most of these suggested techniques use the U-net FCN implementation as a backbone in their strategies. Some other works have suggested deep COVID-19 oriented networks of their own.
[0012]. For most of the proposed approaches, the common problem is inadequate labeled CT scans for deep network training, which cannot be obtained in a short time, since the process of annotating the areas of infection is time-consuming, costly and relies on the expertise of the radiologist. This invention proposes an automated deep learning segmentation method to detect and delineate COVID-19 infections in CT scans. The method begins by segmenting the lung organ from the CT scan as a region of interest, and then segments the infections within it.
The method allows doctors to determine the progression of the disease and to measure its burden and seriousness.
DETAILED DESCRIPTION OF THE INVENTION
[0013]. The novel coronavirus disease (COVID-19) pandemic affects 213 countries and territories worldwide, along with two international conveyances. According to statistics from the COVID-19 dashboard of the Johns Hopkins University Center for Systems Science and Engineering (CSSE), more than 19,172,505 cases of COVID-19 have been registered (and the number is increasing), including 716,327 fatalities. The high infectiousness of COVID-19 explains the rapid rise in reported cases. COVID-19 leads to severe respiratory problems and presents with various symptoms, including fever, cough and fatigue. In particular, these symptoms can develop into severe pneumonia in people with weakened immune systems. The most widely used test to detect viral RNA, using a nasopharyngeal swab, is the reverse transcription polymerase chain reaction (RT-PCR) test.
[0014]. However, numerous studies have indicated that the RT-PCR test has elevated false-negative rates, so repeated testing is needed for an accurate diagnosis. In addition, due to the scarcity of testing supplies, the RT-PCR test has availability limitations, and the testing process is also time-consuming, restricting rapid and reliable screening. Computed tomography (CT) is considered a supporting method to RT-PCR for COVID-19 screening, as high proportions of positive CT scans were obtained from infected patients. CT scanning is regarded as a promising and effective alternative method for the detection and control of COVID-19 disease compared to other forms of tests. For COVID-19 diagnosis, CT imaging was recommended, and chest CT screening was explicitly used as a routine diagnostic method for pneumonia. Chest CT scanning has shown efficacy in the diagnosis of coronavirus disease, including follow-up evaluation and monitoring of disease progression.
[0015]. Diagnostic studies using CT scanning on patients with COVID-19 disease suggest that areas of infection can appear on CT scans before symptoms of the disease appear. These COVID-19 infection areas may therefore be identified in asymptomatic patients by examining ground glass opacity (GGO) and signs of pulmonary consolidation that may occur at different stages of the disease. By suggesting methods to classify the prevalent trends of infections such as ground glass opacity (GGO) and pulmonary consolidations, visual CT imaging analysis may assist in COVID-19 disease diagnosis.
[0016]. Systems in this area address three separate image processing tasks. First, CT classification, where the patient is marked as having or not having the disease. Second, the identification of disease infections, where bounding boxes highlight the infection areas. The third task is segmentation of the infection region and measurement of disease burden by applying pixel-level classification. Manual segmentation of COVID-19 infections is a time-consuming and tedious process.
[0017]. Moreover, it relies heavily on the abilities of the physician or doctor who conducts the segmentation task. An automated approach is beneficial for COVID-19 infection segmentation, as it is, preferably, more objective and reduces reliance on human skills. The development of deep fully convolutional networks (FCNs) has recently improved the performance of semantic segmentation due to advances in computer vision, leading to strong competitors in the field of medical imaging. A general FCN focuses its task on classifying images, where an image is the input and one label is the output.
[0018]. However, in the COVID-19 chest CT study, the field of abnormality needs to be localised and segmented in addition to the classification. In order to assist clinicians and radiologists in diagnostic and prognostic activities, researchers began using FCN for COVID-19 disease, which resulted in enhancing accuracy and reducing inspection time. Many researchers have suggested deep learning systems to help tackle the elevated spread of COVID-19 disease.
[0019]. Most recently proposed deep learning systems are based on the identification (classification) of patients infected with COVID-19 disease by CT screening, owing to the availability of a large number of CT scans and because classification does not require scarce radiological annotations. COVID-Net, introduced by the authors in the literature, is a deep neural network tailored to the identification of COVID-19 cases from chest X-ray images; it is open source and accessible to the general public, and serves as an example of COVID-19 classification diagnostic systems. Other research proposed a 3D deep learning system equipped to differentiate COVID-19 pneumonia from Influenza-A viral pneumonia and from healthy cases using pulmonary CT images. To detect COVID-19, another weakly-supervised deep learning-based software system was developed using 3D CT volumes.
[0020]. To help COVID-19 diagnosis in clinical practice, several deep learning systems have been suggested, but few of them address the delineation of infection from CT scans. Most of these suggested techniques use the U-net FCN implementation as a backbone in their strategies. Some other works have suggested deep COVID-19 oriented networks of their own. For most of the proposed approaches, the common problem is inadequate labeled CT scans for deep network training, which cannot be obtained in a short time, since the process of annotating the areas of infection is time-consuming, costly and relies on the expertise of the radiologist.
[0021]. This invention proposes an automated deep learning segmentation method to detect and delineate COVID-19 infections in CT scans. The method begins by segmenting the lung organ from the CT scan as a region of interest, and then segments the infections within it. The method allows doctors to determine the progression of the disease and to measure its burden and seriousness.
[0022]. Proposed Framework: The flowchart of the proposed segmentation method is shown in Fig. 1 and consists of two main phases that are implemented sequentially. The first phase is lung segmentation from the plain chest CT slices, followed by COVID-19 infection segmentation. The proposed framework relies on two cascaded fully convolutional networks (FCNs). The first FCN is constructed to segment the lung organ, which is used as a region of interest (ROI) to concentrate on and segment the COVID-19 infection areas using the second FCN. The two constructed FCNs are trained and evaluated using various datasets from various public sources.
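The two-phase cascade can be sketched as follows. The simple thresholding "segmenters" below are hypothetical stand-ins for the two trained FCNs (chosen only to make the data flow concrete and runnable); the point is that the second stage operates only inside the lung mask produced by the first.

```python
# Sketch of the two-stage cascade: FCN 1 yields a lung mask (the ROI),
# and FCN 2 is applied only inside that ROI.
# Both segmenters are hypothetical stand-ins for the trained networks.

def segment_lung(ct_slice):
    """Stand-in for FCN 1: mark voxels above an illustrative air threshold as lung."""
    return [[1 if v > -500 else 0 for v in row] for row in ct_slice]

def segment_infection(ct_slice, lung_mask):
    """Stand-in for FCN 2: only voxels inside the lung ROI can be infection."""
    return [[1 if m and v > 50 else 0
             for v, m in zip(row, mrow)]
            for row, mrow in zip(ct_slice, lung_mask)]

def cascade(ct_slice):
    lung = segment_lung(ct_slice)
    infection = segment_infection(ct_slice, lung)
    return lung, infection

# Tiny 3x3 "CT slice" in Hounsfield-like units.
ct = [[-1000, -200, 100],
      [-1000,  -60,  80],
      [-1000, -300, -90]]
lung, infection = cascade(ct)
```

In the real framework each stand-in is a trained ResDense FCN, but the sequential masking structure is the same.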
[0023]. COVID-19 Datasets: Publicly accessible COVID-19 chest CT datasets, obtained from various sources, are used in this invention to train and test the proposed FCN networks. All CT datasets were labeled with the required training classes: background, lung organ, and COVID-19 infection. The Italian Society of Medical and Interventional Radiology (SIRM) compiled the first open-access COVID-19 dataset. The COVID-19 CT segmentation dataset consists of 100 axial slices containing radiologist-segmented lung infections, where the segmented slices are derived from over 40 separate CT scans infected with COVID-19.
[0024]. The second group of public datasets, consisting of 9 COVID-19 chest volumetric CT scans with corresponding ground truth, is from Radiopaedia. Across all 9 datasets, approximately 373 slices were diagnosed as positive and delineated by a radiologist. Ma et al. collected and published a third, newly released public dataset consisting of 20 annotated COVID-19 chest CT scans: 10 Coronacases Initiative CT scans and another 10 Radiopaedia CT scans. These CT scans are freely accessible under a CC BY-NC-SA license; all 20 COVID-19 CT scans were labelled by two radiologists and checked by an experienced radiologist.
[0025]. COVID-19 Infection Appearance Enhancement: The segmentation of infection areas within the lung ROI poses multiple challenges, such as fuzzy edges and severe intensity inhomogeneity within infection areas. In chest CT images, the COVID-19 infection region has low contrast and no definite boundary with the surrounding tissues. In addition, infection areas have high variability in texture, size and location in CT slices. The segmented lung (ROI) from the CT image is enhanced by tensor-based Edge Enhancing Diffusion (EED) filtering to improve the detection of infection areas.
[0026]. EED filtering utilizes a diffusion tensor to adapt the diffusion to the structure of the image. EED filters help to increase contrast, filter noise to enhance intensity homogeneity, and maintain shape boundaries. EED filtering is used to increase the contrast of infection areas in order to enhance the identification and segmentation of COVID-19 infection areas, by improving the intensity homogeneity within these areas and preserving their boundaries with the lung parenchyma. This step aims to enhance the FCN training process so that it can isolate and learn the key features that distinguish the areas of infection from the surrounding tissues.
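The idea of smoothing within regions while preserving boundaries can be illustrated with a simplified scalar edge-stopping diffusion (Perona-Malik style). This is only an analogue of the tensor-based EED the invention uses: the full method steers diffusion with a tensor, whereas here a scalar weight merely suppresses flow across strong edges. The grid, iteration count and `kappa` value are illustrative assumptions.

```python
import math

def diffuse(img, iters=10, kappa=30.0, dt=0.2):
    """Simplified scalar edge-stopping diffusion: smooth intensities inside
    regions while limiting diffusion across strong edges (a scalar analogue
    of tensor-based EED, for illustration only)."""
    h, w = len(img), len(img[0])
    u = [row[:] for row in img]
    for _ in range(iters):
        nxt = [row[:] for row in u]
        for i in range(h):
            for j in range(w):
                total = 0.0
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        g = u[ni][nj] - u[i][j]       # local intensity gradient
                        c = math.exp(-(g / kappa) ** 2)  # edge-stopping weight
                        total += c * g
                nxt[i][j] = u[i][j] + dt * total
        u = nxt
    return u
```

On a patch with a noisy dark region next to a bright region, the small fluctuations are smoothed out while the large step between the regions is preserved, which is the behaviour the paragraph above attributes to EED.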
[0027]. Training Patches Extraction: The distribution and location of the infection areas inside the CT scans is unknown, which is the primary concern for training the proposed deep learning FCN well. Furthermore, even when a CT slice has areas of infection, the distribution of infection inside the slice is heavily skewed, because the infection may occupy only a small percentage of the slice. Therefore, using entire CT slices as training patches could lead to a strong bias towards the background class, which is considered a common problem of semantic segmentation in medical imaging.
[0028]. To combat this issue, the training patches are extracted from those slices containing infection, with random distinct patch sizes. The training datasets are patches derived only from the lung ROI (not the whole CT slice). Since the lung organ appears with different sizes in various slices, the extracted patches come with different resolutions.
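The extraction policy above can be sketched as follows. The candidate patch sizes, patches-per-slice count and seed are illustrative assumptions, and the cropping here is a simplified stand-in for the lung-ROI cropping described in the invention; the two properties it demonstrates are the ones stated in the text: only annotated slices contribute patches, and patch sizes vary randomly.

```python
import random

def extract_patches(slices, masks, sizes=(64, 96, 128), per_slice=2, seed=0):
    """Extract training patches only from slices whose mask contains
    infection, using randomly chosen patch sizes (illustrative stand-in
    for the paper's lung-ROI patch extraction)."""
    rng = random.Random(seed)
    patches = []
    for sl, mk in zip(slices, masks):
        if not any(v for row in mk for v in row):
            continue                      # skip slices without annotation
        h, w = len(sl), len(sl[0])
        for _ in range(per_slice):
            s = rng.choice(sizes)
            s = min(s, h, w)              # clamp to the slice resolution
            i = rng.randrange(h - s + 1)  # random top-left corner
            j = rng.randrange(w - s + 1)
            patch = [row[j:j + s] for row in sl[i:i + s]]
            patches.append(patch)
    return patches
```

A slice with an all-zero mask yields no patches, while an annotated slice yields `per_slice` square patches of varying resolution.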
[0029]. Network Architecture (ResDense FCN): In this work, two cascaded deep FCNs are linked sequentially to segment the lung organ and then the COVID-19 infection areas. An adaptation of the U-net architecture with 5 levels is the backbone of the proposed FCN network. A U-net consists of an encoding path and a decoding path. Three operations are applied at each step of the encoding: convolution, activation function (ReLU) and batch normalization. In each level block, these operations are applied twice consecutively, followed by a max-pooling operation before moving to the next level.
[0030]. The kernel size is 3x3 for convolutions and 2x2 for max-pooling. After each stage, the feature resolution is reduced to half. By applying the same operation sequence (conv, ReLU, BN) but replacing the max-pooling with up-sampling at each step, the network decoding path recovers the original input size. In addition, the corresponding feature from the encoding path is concatenated to the input of each decoding stage. The last stage in the decoding path ends with a 1x1 convolution with a sigmoid activation function, using the dice coefficient metric to classify the feature map and produce the final binary prediction map.
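A quick bookkeeping sketch of the halving described above: with a 5-level U-net and a 256x256 input patch, each 2x2 max-pooling halves the resolution on the way down, and the decoder mirrors the same sizes on the way up.

```python
def unet_shapes(input_size=256, levels=5):
    """Feature-map resolutions along a 5-level U-net encoder: each 2x2
    max-pooling halves the resolution; the decoder mirrors these sizes
    with up-sampling on the way back up."""
    enc = [input_size // (2 ** i) for i in range(levels)]
    dec = list(reversed(enc))
    return enc, dec

enc, dec = unet_shapes()
# enc: [256, 128, 64, 32, 16]; dec: [16, 32, 64, 128, 256]
```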
[0031]. The increase in network depth in FCNs is unavoidable, but it leads to the vanishing gradient problem: as more layers are stacked together, the gradient information is washed out, which slows down training and degrades performance. Different deep network architectures have been proposed to address this problem, but in terms of performance, DenseNet and ResNet are regarded as breakthroughs. In DenseNet, each layer is linked to all forward layers, where the feature maps created from various filter sizes are concatenated from previous layers, making the model much thicker as channels are joined after each convolution operation.
[0032]. On the other hand, this does not occur in ResNet, since the addition operation is used to combine the previous input identity with the output feature map. In the ResNet block, a shortcut (skip connection) from the block input (identity) bypasses the stacked layers and connects to the block output feature. DenseNet is also known to be a memory-hungry network, as back-propagation requires the entire layer outputs to be stored, which costs more memory and runs slowly. On the other hand, ResNet's proposal is the addition of tensors, but it has been argued that the direct addition of feature maps damages the gradient flow through the network, since it sums up the feature values.
[0033]. Therefore, the concatenation operation is favoured because it retains the feature maps, while the summation corrupts the feature maps for both the convolution operation and the source of the skip connections. The ResDense block is the key contribution to the proposed network architecture. In the proposed ResDense, the dense connections (concatenation) are used between residual blocks rather than between convolution layers. In terms of feature-map flow and memory, the proposed ResDense block appropriately refines the feature values through residual blocks and intermittently memorizes the refined feature values through dense connections between residual blocks.
[0034]. The proposed FCN network's encoder and decoder paths depend on ResDense blocks in this work. Each level in the contracting and expanding paths of the proposed network is constructed using the ResDense block. Hence, at the end of each stage in the encoding path, the depth size of the feature map is doubled and concatenated with block input.
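The combination rule of the ResDense block can be shown with a toy numeric sketch: residual addition refines values without changing width, while the dense (concatenation) link between residual blocks grows the width. The scalar "+0.1 refinement" is a hypothetical stand-in for the convolution layers; only the add-then-concatenate wiring reflects the description above.

```python
def residual_block(features):
    """Residual refinement: output = identity + refinement (width unchanged).
    The refinement here is a scalar stand-in for the conv layers."""
    refined = [f + 0.1 for f in features]
    return [x + r for x, r in zip(features, refined)]

def resdense_block(features):
    """ResDense sketch: values are refined by residual addition inside the
    blocks, and the dense skip *between* blocks concatenates (rather than
    adds) the block input with the refined output, doubling the width."""
    out = residual_block(residual_block(features))
    return features + out    # dense skip: concatenation, not addition

x = [1.0, 2.0]
y = resdense_block(x)        # width doubles: identity ++ refined channels
```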
[0035]. Network Implementation and Training: The proposed network is trained with resized annotated 2D slices with a patch size of 256 x 256. All CT slices were normalized patch-wise using zero-mean and unit-variance normalization. The ResDense FCN architecture is implemented using Keras with the TensorFlow backend. Two ResDense networks were designed to sequentially segment the lung and then the area of infection within it. The network is trained and its parameters are updated using the Adam optimizer with a learning rate of 0.0001. Both networks are trained for 20 epochs with batch sizes of 32 and 16 for lung and infection segmentation, respectively.
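The patch-wise zero-mean, unit-variance normalization mentioned above amounts to the following (pure-Python sketch on a nested list; a real pipeline would do this on arrays):

```python
import math

def normalize_patch(patch):
    """Zero-mean, unit-variance normalization applied patch-wise, as done
    before feeding patches to the network."""
    vals = [v for row in patch for v in row]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    std = math.sqrt(var) or 1.0          # guard against constant patches
    return [[(v - mean) / std for v in row] for row in patch]
```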
[0036]. The soft Dice coefficient loss was used to update the model parameters and to track the convergence of network training. In the network design, Batch Normalization (BN) layers are used to prevent unrealistic increases or decreases in the values produced between network layers. The final layer of the network uses pixel-wise sigmoid activation. Because of the imbalanced class distribution between lung tissue and COVID-19 infections, several steps have been taken to enhance the efficiency of the trained network.
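A minimal NumPy sketch of the soft Dice loss described above (the smoothing term and its value are a common convention, not specified in the patent):

```python
import numpy as np

def soft_dice_loss(pred, target, smooth=1.0):
    """Soft Dice loss on sigmoid probabilities: 1 - 2|P.G| / (|P|+|G|)."""
    intersection = np.sum(pred * target)
    dice = (2.0 * intersection + smooth) / (np.sum(pred) + np.sum(target) + smooth)
    return 1.0 - dice

# Illustrative sigmoid outputs and binary ground-truth mask.
pred = np.array([[0.9, 0.1],
                 [0.8, 0.2]])
target = np.array([[1.0, 0.0],
                   [1.0, 0.0]])
loss = soft_dice_loss(pred, target)
```

Unlike pixel-wise cross-entropy, this loss measures overlap directly, which is why it copes better with the strong class imbalance between small infection regions and the surrounding lung tissue.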
[0037]. First, during training, the soft Dice loss metric is used to calculate the overlap between the ground-truth patches and the area within the lung ROI labeled as infection by the network. In addition, the second FCN is trained only within the lung ROI, so that it learns features that differentiate COVID-19 infections from the surrounding lung tissue. The training patches are therefore extracted only from the cropped lung section of each CT slice. Finally, every training patch is required to have a corresponding annotation mask, and patches without one are removed from the training phase.
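The lung cropping and patch filtering steps above can be sketched as follows (the function names and array sizes are illustrative assumptions, not from the patent):

```python
import numpy as np

def crop_to_lung_roi(ct_slice, lung_mask):
    """Crop a CT slice to the bounding box of the lung mask."""
    rows = np.any(lung_mask, axis=1)
    cols = np.any(lung_mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return ct_slice[r0:r1 + 1, c0:c1 + 1]

def has_annotation(infection_mask):
    """Keep only patches whose infection mask is non-empty."""
    return bool(infection_mask.any())

# Illustrative slice with a rectangular lung region.
ct = np.random.rand(512, 512)
lung = np.zeros((512, 512), dtype=bool)
lung[100:400, 150:450] = True
roi = crop_to_lung_roi(ct, lung)
```

Restricting training to the lung bounding box removes most of the background and keeps the second network focused on distinguishing infection from lung tissue.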
[0038]. Performance Measures: A group of performance measures is used to evaluate the segmentation of the COVID-19 infection and the lung organ from CT scans quantitatively. First, the Dice coefficient (DSC) is an overlap indicator that measures the ratio of the correctly segmented class between the segmentation output and the ground truth, relative to their average size. The Dice metric is given by DSC = (2*TP)/(2*TP+FN+FP), where TP (true positive) and TN (true negative) denote the number of voxels correctly classified as the segmented class and as background, respectively, while FN (false negative) and FP (false positive) denote the number of voxels incorrectly classified as background and as the segmented class, respectively.
[0039]. The second measure is sensitivity, the ratio of correctly segmented class voxels (TP) to the ground truth (TP+FN): Sensitivity = TP/(TP+FN). The sensitivity measure demonstrates the ability of the method to correctly segment the intended class voxels. The third metric is specificity, the ratio of correctly segmented non-class voxels (TN) to the total number of non-class voxels: Specificity = TN/(TN+FP). The specificity measure demonstrates the method's ability to reject voxels that do not belong to the intended class.
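The three measures follow directly from the confusion counts; the example counts below are purely illustrative, not results from the patent:

```python
def dice(tp, fp, fn):
    """Dice coefficient: DSC = 2*TP / (2*TP + FN + FP)."""
    return (2 * tp) / (2 * tp + fn + fp)

def sensitivity(tp, fn):
    """Sensitivity = TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Specificity = TN / (TN + FP)."""
    return tn / (tn + fp)

# Illustrative voxel counts for one segmented CT slice.
tp, tn, fp, fn = 900, 9000, 50, 100
dsc = dice(tp, fp, fn)
sens = sensitivity(tp, fn)
spec = specificity(tn, fp)
```

Note that with a large background (TN) the specificity is high almost by default, which is why the overlap-based Dice coefficient is the primary measure for small infection regions.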
[0040]. This invention introduced a deep learning framework for COVID-19 lung infection segmentation in chest CT scans. The developed FCN uses a U-Net architecture as its backbone, with the proposed ResDense blocks at each level of the encoding and decoding paths. Thanks to the concatenation skip connections in the ResDense blocks, the feature maps of the infection areas and the lung background flow through the network without significant change of their values, which improved network learning and enhanced segmentation efficiency. In addition, the method includes an EED phase that improves the appearance of infection areas in CT slices by enhancing their contrast and intensity homogeneity.
[0041]. The results of the qualitative and quantitative assessments show the system's efficacy and its ability to segment COVID-19 infection areas from CT images. The system is trained and validated on numerous datasets from different sources, which demonstrates its generalization ability and makes it a promising tool for automated COVID-19 infection detection and clinical routine analysis.

Claims (5)

  1. We claim that this disclosed invention introduces a deep learning based framework for COVID-19 lung infection segmentation in chest CT scans.
  2. As claimed in 1, this invention proposes an automated deep learning segmentation method to detect and delineate COVID-19 infections in CT scans.
  3. As claimed in 1 and 2, the proposed invention consumes less time and cost than existing approaches in detecting the infection and in annotating the areas of infection.
  4. We claim that the invented system detects the infection by first segmenting the lung organ from the CT scan as the area of concern, and then segmenting the infections within it.
  5. As claimed in 1, 2, and 4, this invention will help doctors determine the progression of the COVID-19 disease and measure the burden and seriousness of the disease.
    Fig. 1: Two-stage pipeline in which a trained FCN segments the lung ROI from the target CT slice, then, after EED intensity enhancement, a second trained FCN segments the COVID-19 infection.
AU2021100007A 2021-01-02 2021-01-02 Deep Learning Based System for the Detection of COVID-19 Infections Ceased AU2021100007A4 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2021100007A AU2021100007A4 (en) 2021-01-02 2021-01-02 Deep Learning Based System for the Detection of COVID-19 Infections

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2021100007A AU2021100007A4 (en) 2021-01-02 2021-01-02 Deep Learning Based System for the Detection of COVID-19 Infections

Publications (1)

Publication Number Publication Date
AU2021100007A4 true AU2021100007A4 (en) 2021-03-25

Family

ID=75093603

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2021100007A Ceased AU2021100007A4 (en) 2021-01-02 2021-01-02 Deep Learning Based System for the Detection of COVID-19 Infections

Country Status (1)

Country Link
AU (1) AU2021100007A4 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022225794A1 (en) * 2021-04-23 2022-10-27 The Johns Hopkins University Systems and methods for detecting and characterizing covid-19
CN115831329A (en) * 2022-12-21 2023-03-21 青海大学附属医院 Infusorian classification model construction method, system and medium fusing doctor attention image
CN115831329B (en) * 2022-12-21 2023-08-18 青海大学附属医院 Method, system and medium for constructing bag worm classification model fusing doctor focused image

Legal Events

Date Code Title Description
FGI Letters patent sealed or granted (innovation patent)
MK22 Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry