AU2021103578A4 - A Novel Method for Detecting COVID-19 Infection Using a Deep Learning Based System - Google Patents

A Novel Method for Detecting COVID-19 Infection Using a Deep Learning Based System

Info

Publication number
AU2021103578A4
Authority
AU
Australia
Prior art keywords
covid
infection
illness
segmentation
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2021103578A
Inventor
Rayachoti Eswaraiah
Sanjay Gandhi Gundabatini
Madhu Babu Janjanam
Suresh Babu Kolluru
Sri Hari Nallamala
Siva Prasad Pinnamaneni
N Lakshmi Prasanna
Jeevana Jyothi Pujari
Sudhakar Putheti
Sudhir Tirumalasetty
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Prasanna N Lakshmi Dr
Original Assignee
Prasanna N Lakshmi Dr
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Prasanna N Lakshmi Dr filed Critical Prasanna N Lakshmi Dr
Priority to AU2021103578A priority Critical patent/AU2021103578A4/en
Application granted granted Critical
Publication of AU2021103578A4 publication Critical patent/AU2021103578A4/en
Assigned to Nallamala, Sri Hari, Eswaraiah, Rayachoti, Gundabatini, Sanjay Gandhi, Janjanam, Madhu Babu, Kolluru, Suresh Babu, Pinnamaneni, Siva Prasad, Prasanna, N Lakshmi, Pujari, Jeevana Jyothi, Putheti, Sudhakar, Tirumalasetty, Sudhir reassignment Nallamala, Sri Hari Amend patent request/document other than specification (104) Assignors: Eswaraiah, Rayachoti, Gundabatini, Sanjay Gandhi, Janjanam, Madhu Babu, Kolluru, Suresh Babu, Nallamala, Sri Hari, Pinnamaneni, Siva Prasad, Prasann, N. Lakshmi, Pujari, Jeevana Jyothi, Putheti, Sudhakar, Tirumalasetty, Sudhir
Ceased legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/02 Arrangements for diagnosis sequentially in different planes; Stereoscopic radiation diagnosis
    • A61B6/03 Computed tomography [CT]
    • A61B6/032 Transmission computed tomography [CT]
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/50 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment specially adapted for specific body parts; specially adapted for specific clinical applications
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B6/00 Apparatus or devices for radiation diagnosis; Apparatus or devices for radiation diagnosis combined with radiation therapy equipment
    • A61B6/52 Devices using data or image processing specially adapted for radiation diagnosis
    • A61B6/5211 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data
    • A61B6/5217 Devices using data or image processing specially adapted for radiation diagnosis involving processing of medical diagnostic data extracting a diagnostic or physiological parameter from medical diagnostic data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/0455 Auto-encoder networks; Encoder-decoder networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/80 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for detecting, monitoring or modelling epidemics or pandemics, e.g. flu
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2576/00 Medical imaging apparatus involving image processing or analysis
    • A61B2576/02 Medical imaging apparatus involving image processing or analysis specially adapted for a particular organ or body part
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Theoretical Computer Science (AREA)
  • Molecular Biology (AREA)
  • Pathology (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Radiology & Medical Imaging (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • Veterinary Medicine (AREA)
  • Optics & Photonics (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Physiology (AREA)
  • Pulmonology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Dentistry (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

Coronavirus disease has infected millions of individuals globally and is spreading at an alarming rate, putting a strain on health systems' capabilities. PCR screening is the diagnostic technique of choice for COVID-19 detection. CT imaging has demonstrated its diagnostic value in asymptomatic patients, establishing it as a reliable diagnostic support tool for COVID-19. The high prevalence of COVID-19 infections visible in CT slices enables the tracking of illness progression using automated infection segmentation techniques. Nevertheless, COVID-19 infection regions exhibit a great degree of heterogeneity in terms of scale, shape, contrast, and intensity, posing a substantial challenge to any segmentation approach. The present invention provides an automated segmentation technique based on deep learning for the detection and delineation of COVID-19 infections in CT images. Additionally, compared to existing techniques, the proposed invention requires less time and money to identify the infection and annotate the infection regions. By segmenting the lung organ from the CT scan as the region of interest and then segmenting the infections contained inside it, this invention will assist physicians in determining the course of COVID-19 illness, as well as quantifying the disease's burden and severity.

Description

TITLE OF THE INVENTION
A Novel Method for Detecting COVID-19 Infection Using a Deep Learning Based System
FIELD OF THE INVENTION
The current disclosure relates to a deep learning based system for the detection of COVID-19 infections in humans, which will aid doctors in determining the course of the COVID-19 illness, as well as measuring the burden and severity of the condition.
BACKGROUND OF THE INVENTION
Coronavirus disease has infected millions of individuals globally and is spreading at an alarming rate, putting a strain on health systems' capabilities. PCR screening is the diagnostic technique of choice for COVID-19 detection. CT imaging has demonstrated its diagnostic value in asymptomatic patients, establishing it as a reliable diagnostic support tool for COVID-19. The high prevalence of COVID-19 infections visible in CT slices enables the tracking of illness progression using automated infection segmentation techniques. However, COVID-19 infection regions exhibit a great degree of heterogeneity in terms of scale, shape, contrast, and intensity, posing a substantial challenge to the segmentation approach. The present invention provides an automated segmentation technique based on deep learning for the detection and delineation of COVID-19 infections in CT images. The major goal of this invention is to present a method for detecting COVID-19 infection in order to assist physicians in determining the course of COVID-19 disease, as well as to quantify the disease's burden and severity.
SUMMARY OF THE INVENTION
The novel coronavirus disease (COVID-19) pandemic has spread to 213 countries and territories, along with two international conveyances. According to the COVID-19 dashboard of Johns Hopkins University's Center for Systems Science and Engineering (CSSE), more than 19,172,505 cases of COVID-19 have been reported, including 716,327 fatalities. The high infectivity of COVID-19 accounts for the fast increase in reported cases. COVID-19 causes significant respiratory difficulties, accompanied by symptoms such as fever, coughing, and fatigue, as well as a variety of other symptoms. These symptoms can progress to serious pneumonia, particularly in patients with compromised immune systems. The reverse transcription polymerase chain reaction (RT-PCR) test is the most often used method for detecting viral RNA from a nasopharyngeal swab. Numerous investigations, however, have demonstrated that the RT-PCR test has a high percentage of false negatives, necessitating repeated testing for proper diagnosis. Additionally, the RT-PCR test is limited in availability owing to shortages of manufacturing material, and the testing method is time-intensive, preventing quick and reliable screening. Computed tomography (CT) is therefore used in conjunction with RT-PCR for COVID-19 screening, as a significant proportion of CT scans have been acquired from infected individuals.
Thus, these regions of COVID-19 infection can be detected in asymptomatic individuals by evaluating ground glass opacity (GGO) and indications of lung consolidation that may develop at various phases of the illness. Visual CT imaging analysis may aid in the identification of COVID-19 illness by providing techniques for classifying prevalent infection patterns such as GGO and pulmonary consolidations. Systems in this field perform three distinct image processing tasks. First, CT classification, in which the patient is classified as having the illness or not. Second, infection detection, with bounding boxes highlighting infection regions. Third, segmentation of the infection area and quantification of disease load through pixel-level classification. Manual segmentation of COVID-19 infections is a laborious and time-consuming operation. Additionally, it is highly dependent on the skill of the physician or radiologist performing the segmentation process. A general FCN is designed to categorise images, where an image is the input and a single label is the output.
In the COVID-19 chest CT examination, however, in addition to categorisation, the region of abnormality must be localised and segmented. Researchers began utilising FCNs for COVID-19 illness to aid physicians and radiologists in diagnostic and prognostic activities, which led to increased accuracy and decreased inspection time. Numerous researchers have proposed the use of deep learning systems to aid in the fight against the increased spread of COVID-19 illness. The majority of recently suggested deep learning methods are based on the identification (classification) of patients infected with COVID-19 illness by CT screening, which is possible owing to the widespread availability of CT scans and does not need extremely scarce radiological annotations. COVID-Net, as described in the literature, is an open-source deep neural network optimised for identifying COVID-19 cases from chest X-ray images that is accessible to the general public; it serves as an example of classification-oriented COVID-19 diagnostic systems. Using pulmonary CT scans, another study proposed a 3D deep learning system capable of differentiating COVID-19 pneumonia from influenza-A viral pneumonia and healthy patients. To identify COVID-19, another software system based on weakly supervised deep learning was created using 3D CT volumes. Numerous deep learning algorithms have been proposed to aid in the diagnosis of COVID-19 in clinical practice; however, only a handful address the delineation of infection from CT images. The majority of these approaches rely on the U-net FCN as their backbone, while other works have proposed their own deep COVID-19-oriented networks. For the majority of proposed methods, the common issue is the lack of sufficiently labelled CT images for deep network training, which cannot be obtained quickly because the procedure of marking infection sites is time-consuming, expensive, and reliant on the radiologist's skill. This invention offers an automated segmentation technique based on deep learning for the purpose of detecting and delineating COVID-19 infections in CT images. The approach begins by segmenting the lung organ from the CT image as the potential region of infection, followed by segmenting the infections contained inside it. The approach enables physicians to track the course of the disease, as well as to quantify the disease's burden and severity.
DETAILED DESCRIPTION OF THE INVENTION
The novel coronavirus disease (COVID-19) pandemic has spread to 213 countries and territories, along with two international conveyances. According to the COVID-19 dashboard of Johns Hopkins University's Center for Systems Science and Engineering (CSSE), more than 19,172,505 cases of COVID-19 have been reported (and the number is rising), including 716,327 fatalities. The high infectivity of COVID-19 accounts for the fast increase in reported cases. COVID-19 causes significant respiratory difficulties, accompanied by symptoms such as fever, coughing, and fatigue, as well as a variety of other symptoms. These symptoms can progress to serious pneumonia, particularly in patients with compromised immune systems. The reverse transcription polymerase chain reaction (RT-PCR) test is the most often used method for detecting viral RNA from a nasopharyngeal swab. Numerous investigations, however, have demonstrated that the RT-PCR test has a high percentage of false negatives, necessitating repeated testing for proper diagnosis. Additionally, the RT-PCR test is limited in availability owing to shortages of manufacturing material, and the testing method is time-intensive, preventing quick and reliable screening. Computed tomography (CT) is therefore used in conjunction with RT-PCR for COVID-19 screening, as a significant proportion of CT scans have been acquired from infected individuals. CT diagnostic investigations of individuals with COVID-19 illness show that regions of infection may appear on CT scans prior to the onset of disease symptoms. Thus, these regions of COVID-19 infection can be detected in asymptomatic individuals by evaluating ground glass opacity (GGO) and indications of lung consolidation that may develop at various phases of the illness. Visual CT imaging analysis may aid in the identification of COVID-19 illness by providing techniques for classifying prevalent infection patterns such as GGO and pulmonary consolidations. Systems in this field perform three distinct image processing tasks. First, CT classification, in which the patient is classified as having the illness or not. Second, infection detection, with bounding boxes highlighting infection regions. Third, segmentation of the infection area and quantification of disease load through pixel-level classification. Manual segmentation of COVID-19 infections is a laborious and time-consuming operation. A general FCN is designed to categorise images, where an image is the input and a single label is the output. In the COVID-19 chest CT examination, however, in addition to categorisation, the region of abnormality must be localised and segmented. Researchers began utilising FCNs for COVID-19 illness to aid physicians and radiologists in diagnostic and prognostic activities, which led to increased accuracy and decreased inspection time. Numerous researchers have proposed the use of deep learning systems to aid in the fight against the increased spread of COVID-19 illness. The majority of recently suggested deep learning methods are based on the identification (classification) of patients infected with COVID-19 illness by CT screening, which is possible owing to the widespread availability of CT scans and does not need extremely scarce radiological annotations.
COVID-Net, as described in the literature, is an open-source deep neural network optimised for identifying COVID-19 cases from chest X-ray images that is accessible to the general public; it serves as an example of classification-oriented COVID-19 diagnostic systems. Using pulmonary CT scans, another study proposed a 3D deep learning system capable of differentiating COVID-19 pneumonia from influenza-A viral pneumonia and healthy patients. To identify COVID-19, another software system based on weakly supervised deep learning was created using 3D CT volumes.
Numerous deep learning algorithms have been proposed to aid in the diagnosis of COVID-19 in clinical practice; however, only a handful address the delineation of infection from CT images. The majority of these approaches rely on the U-net FCN as their backbone, while other works have proposed their own deep COVID-19-oriented networks. For the majority of proposed methods, the common issue is the lack of sufficiently labelled CT images for deep network training, which cannot be obtained quickly because the procedure of marking infection sites is time-consuming, expensive, and reliant on the radiologist's skill. This invention offers an automated segmentation technique based on deep learning for the purpose of detecting and delineating COVID-19 infections in CT images. The approach begins by segmenting the lung organ from the CT image as the potential region of infection, followed by segmenting the infections contained inside it. The approach enables physicians to track the course of the disease, as well as to quantify the disease's burden and severity.

Proposed framework: The flowchart of the proposed segmentation method is depicted in Fig. 1; it comprises two major stages that are carried out consecutively. The initial stage is lung segmentation from standard chest CT slices, followed by COVID-19 infection segmentation.

Open-access COVID-19 datasets: The proposed FCN networks in this invention are trained and tested using COVID-19 chest CT datasets obtained from a number of sources. The requisite training classes were identified in all CT datasets: background, lung organ, and COVID-19 infection. The first COVID-19 dataset was made publicly available by the Italian Society of Medical and Interventional Radiology (SIRM). This COVID-19 CT segmentation dataset consists of 100 axial slices from over 40 distinct COVID-19-infected CT scans, each with radiologist-segmented lung infections. The second collection of public datasets comes from Radiopaedia and consists of nine COVID-19 chest volumetric CT scans with accompanying ground truth; across the nine volumes, about 373 slices were diagnosed as positive by a radiologist and segmented. A third, recently published public dataset, compiled and presented by Ma et al., includes 20 annotated COVID-19 chest CT volumes: ten CT scans from the Coronacases Initiative and ten more from Radiopaedia. These CT scans are publicly accessible under a Creative Commons BY-NC-SA licence; all twenty COVID-19 CT volumes were labelled by two radiologists and verified by an expert radiologist.

Enhancement of the appearance of COVID-19 infections: Segmentation of infection areas within the lung ROI presents a number of difficulties, including the presence of fuzzy edges and a high degree of inhomogeneity within infection areas. COVID-19 infection areas are faintly contrasted on chest CT scans, and the surrounding tissues lack a distinct border. Additionally, infection areas in CT slices exhibit a high degree of variability in terms of texture, size, and location. To improve the identification of infection regions, the segmented lung ROI from the CT image is enhanced using tensor-based Edge Enhancing Diffusion (EED) filtering. EED filtering makes use of a diffusion tensor to adapt the diffusion to the image structure. EED filters help increase contrast, filter noise to improve intensity uniformity, and preserve shape boundaries.
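For illustration only, the following is a minimal sketch of the diffusion-based enhancement idea described above. It uses a Perona-Malik-style scalar nonlinear diffusion as a simplified stand-in for the tensor-driven EED filter named in this disclosure (a full EED implementation steers diffusion with a diffusion tensor derived from the image's structure tensor); the function name and the parameters n_iter, kappa, and step are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def diffuse_enhance(img, n_iter=20, kappa=30.0, step=0.15):
    """Simplified nonlinear-diffusion enhancement of a CT lung ROI.

    Stand-in for the tensor-based EED described above: smooths
    intensities inside homogeneous regions while limiting diffusion
    across strong edges, so infection boundaries are preserved.
    Boundaries wrap around (np.roll); a real implementation would
    use reflective boundaries. All parameters are illustrative.
    """
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # finite differences toward the four grid neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # edge-stopping conductivity: small across strong gradients,
        # so diffusion smooths flat regions but not boundaries
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += step * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```

Applied to the segmented lung ROI, such filtering increases intensity uniformity inside infection regions while limiting diffusion across strong boundaries, in the spirit of the EED step described above.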
In the proposed method, EED filtering is used to enhance the contrast of infection regions in order to facilitate the detection and segmentation of COVID-19 infection sites, increasing the uniformity of intensity within these areas while maintaining lung parenchyma boundaries. This step aims to improve FCN training by isolating and learning the essential characteristics that differentiate infection sites from surrounding tissues.

Extraction of training patches: The distribution and position of infection regions within CT scans are uncertain, which is a key obstacle to training the proposed deep learning FCN well. Additionally, even when a CT slice contains regions of infection, the distribution of infection within the slice is significantly skewed, as the infection may comprise just a tiny proportion of the slice. Thus, using complete CT slices as training patches may result in a severe bias toward the background, a well-known issue in semantic segmentation of medical images. To address this issue, training patches are generated from infection-containing slices using random patch sizes. The training datasets consist of lung patches generated solely from the lung ROI (not the whole CT slice). Because the lung organ appears in numerous slices at varying sizes, the extracted patches have varying resolutions.

Two cascaded deep FCNs are coupled sequentially in this invention to segment the lung organ and subsequently the infection regions. A modification of the U-net design, consisting of five levels, serves as the backbone of the proposed FCN network. A U-net is made up of two paths: one for encoding and one for decoding. At each stage of the encoding path, three operations are performed: convolution, activation (ReLU), and batch normalisation. These operations are performed twice sequentially in each level block, followed by a max-pooling operation before proceeding to the next level. The kernel size is 3x3 for convolutions and 2x2 for max-pooling; after each level, the resolution of the feature map is halved. The decoding path restores the original input size by following the same operation sequence (conv, ReLU, BN) but substituting upsampling for max-pooling at each step. Additionally, the corresponding feature map from the encoding path is concatenated to each decoding stage's input. The last stage of the decoding path is a 1x1 convolution with a sigmoid activation function, which classifies the feature map and generates the final binary prediction map evaluated with the dice coefficient metric.

While growth in network depth is unavoidable in FCNs, it introduces the vanishing-gradient problem: as additional layers are stacked on top of each other, gradient information is washed out, slowing down training and degrading efficiency. Numerous deep network topologies have been proposed to address this problem, with DenseNet and ResNet widely recognised as advances in terms of performance. In DenseNet, each layer is connected to all subsequent layers, and feature maps produced using various filter sizes are concatenated from previous layers, significantly thickening the model as channels are merged after each convolution operation. In ResNet, on the other hand, this does not occur, since the addition operation is used to merge the prior input identity with the output feature map: within the ResNet block, a shortcut (skip connection) from the block input (identity) links to the block output feature, bypassing the stacked layers. DenseNets are also known as memory-hungry networks, as back-propagation requires storing all of the layer outputs, which consumes more memory and runs slowly. ResNet, by contrast, adds tensors; however, it has been argued that summing feature maps directly affects the gradient flow across the network, since it sums the feature values. As a result, the concatenation operation is preferred here, as it preserves the feature maps, whereas the summing procedure corrupts the feature maps for both the convolution operation and the source of the skip connections.

The ResDense block is a critical component of the proposed network design. In the proposed ResDense block, dense ties (concatenations) are employed between residual blocks rather than between convolution layers. In terms of feature-map flow and memory, the ResDense block refines feature values via the residual blocks and memorises the refined feature values through the dense connections between residual blocks. The encoder and decoder pathways of the proposed FCN network are built from ResDense blocks: each level in the network's contracting and expanding pathways is created using a ResDense block. As a result, at the conclusion of each stage of the encoding path, the feature map's depth is doubled and concatenated with the block input.

Resized annotated 2D slices with a patch size of 256 x 256 are used to train the proposed network. Using zero-mean and unit-variance normalisation, all CT slices were patch-wise normalised. Keras with the TensorFlow backend is used to implement the ResDense FCN architecture. To segment the lung and then the infected region sequentially, two ResDense networks were constructed. The Adam optimizer with a learning rate of 0.0001 is used to train the networks and update the parameters. For lung segmentation and infection segmentation, the networks are trained for 20 epochs with batch sizes of 32 and 16, respectively. To update the model parameters and track the convergence of network training, the soft dice coefficient loss is used. In the network design, the Batch Normalization (BN) layer is used to prevent undesirable shifts in the distributions of data passed between network levels, and a pixel-wise sigmoid activation function is used in the network's last layer.
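To make the architecture described above concrete, the following is a hedged Keras/TensorFlow sketch (the disclosure names Keras with the TensorFlow backend) of a ResDense-style U-net using the stated 3x3 convolutions, 2x2 max-pooling, an upsampling decoder with encoder skips, a 1x1 sigmoid output, the Adam optimizer at learning rate 0.0001, and a soft dice loss. The filter counts, the number of residual blocks per ResDense block, and the 1x1 projection on the residual shortcut are assumptions chosen for illustration; this is a sketch of the described design, not the certified network.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_bn_relu(x, filters):
    # one conv -> BN -> ReLU step with 3x3 kernels, as described above
    x = layers.Conv2D(filters, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.Activation("relu")(x)

def residual_block(x, filters):
    # two conv-BN-ReLU ops with an identity shortcut (ResNet-style);
    # the 1x1 projection matching channel counts is an assumption
    shortcut = layers.Conv2D(filters, 1, padding="same")(x)
    y = conv_bn_relu(x, filters)
    y = conv_bn_relu(y, filters)
    return layers.Add()([shortcut, y])

def res_dense_block(x, filters):
    # ResDense: dense (concatenation) links BETWEEN residual blocks,
    # refining features while memorising earlier refinements
    r1 = residual_block(x, filters)
    r2 = residual_block(r1, filters)
    return layers.Concatenate()([r1, r2])

def build_resdense_unet(input_shape=(256, 256, 1), base=16, depth=5):
    inputs = layers.Input(input_shape)
    skips, x = [], inputs
    for level in range(depth - 1):            # encoding path
        x = res_dense_block(x, base * 2 ** level)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)         # 2x2 pooling halves resolution
    x = res_dense_block(x, base * 2 ** (depth - 1))
    for level in reversed(range(depth - 1)):  # decoding path
        x = layers.UpSampling2D(2)(x)
        x = layers.Concatenate()([x, skips[level]])  # encoder skip
        x = res_dense_block(x, base * 2 ** level)
    # 1x1 convolution with pixel-wise sigmoid for the binary map
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return Model(inputs, outputs)

def soft_dice_loss(y_true, y_pred, eps=1e-6):
    # soft dice coefficient loss used to train both networks
    inter = tf.reduce_sum(y_true * y_pred)
    union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
    return 1.0 - (2.0 * inter + eps) / (union + eps)

model = build_resdense_unet()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss=soft_dice_loss)
```

Under this reading, two such networks would be trained sequentially for 20 epochs each: the first for lung segmentation (batch size 32) and the second for infection segmentation within the lung ROI (batch size 16).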
Several measures have been taken to improve the trained network's performance, given the uneven distribution of lung tissue classes and COVID-19 infections. First, the soft dice loss measure is used throughout the training phase to determine the overlap between the ground-truth patches and the area within the lung ROI designated as infected by the network. Additionally, the second FCN network is trained exclusively within the lung ROI to identify characteristics that distinguish COVID-19 infections from lung tissues; after cropping the lung region, training patches are extracted only from that part of the CT slice. It is also ensured that the training patches used contain the appropriate mask, and any patches without an annotation mask are removed from the training phase.

Performance measures: A battery of performance tests is performed to evaluate the proposed technique quantitatively in segmenting COVID-19 infection and the lung organ from CT images. First, the Dice similarity coefficient (DSC) measures the overlap between the segmented class and the ground truth. It is defined as DSC = (2*TP)/(2*TP + FN + FP), where TP (true positives) and TN (true negatives) are the numbers of voxels accurately classified as the segmented class and as background, respectively, and FN (false negatives) and FP (false positives) are the numbers of voxels incorrectly classified as background and as the segmented class, respectively. The second metric is sensitivity, defined as the ratio of correctly segmented class voxels (TP) to ground-truth voxels (TP + FN): Sensitivity = TP/(TP + FN). The sensitivity metric indicates the system's ability to recover the desired class voxels. The third metric is specificity, the ratio of correctly segmented non-class voxels (TN) to the total number of non-class voxels: Specificity = TN/(TN + FP). It demonstrates that the technique is capable of rejecting voxels that do not belong to the desired class.
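All three measures can be computed directly from binary prediction and ground-truth masks. The sketch below follows the definitions given above; the function name and array conventions are illustrative.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """DSC, sensitivity, and specificity from binary masks,
    following the definitions given above."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.sum(pred & truth)    # class voxels correctly segmented
    tn = np.sum(~pred & ~truth)  # background voxels correctly rejected
    fp = np.sum(pred & ~truth)   # background voxels marked as class
    fn = np.sum(~pred & truth)   # class voxels missed
    dsc = 2 * tp / (2 * tp + fn + fp)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return dsc, sensitivity, specificity
```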
This invention offers a deep learning framework for COVID-19 lung infection segmentation on chest CT images. The created FCN employs a U-net architecture as a backbone, with each level of the encoding and decoding pathways built from the proposed ResDense blocks. Due to the concatenation skip connection in each ResDense block, the feature maps of the infection regions and the lung background pass through the network without significant change to their values, which improved network learning and segmentation efficiency. Additionally, the technique incorporates an EED phase that enhances the appearance of infection regions in CT slices by increasing their contrast and intensity uniformity. The qualitative and quantitative evaluation findings demonstrate the system's efficacy and capability for segmenting COVID-19 infection regions from CT images. The system is trained and validated using a variety of datasets from a variety of sources, demonstrating its generalizability and making it one of the most promising methods for automated COVID-19 infection diagnosis and clinical routine analysis.
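To summarise the two-stage cascade in code form, the following hedged sketch strings the pieces together: lung segmentation, EED-style enhancement of the lung ROI, then infection segmentation inside that ROI. It reuses the diffuse_enhance helper from the earlier sketch; the 0.5 thresholds and normalisation details are assumptions, not the patented procedure.

```python
import numpy as np

def segment_covid_infection(ct_slice, lung_net, infection_net):
    """Two-stage inference sketch: lung ROI first, then infection.

    lung_net and infection_net stand for the two trained ResDense
    FCNs; thresholds and normalisation are illustrative assumptions.
    """
    def normalise(a):
        # zero-mean, unit-variance normalisation as described above
        return (a - a.mean()) / (a.std() + 1e-8)

    # stage 1: segment the lung organ as the region of interest
    x = normalise(ct_slice)
    lung_mask = lung_net.predict(x[None, ..., None])[0, ..., 0] > 0.5

    # enhance infection appearance inside the lung ROI (EED stand-in)
    roi = diffuse_enhance(np.where(lung_mask, ct_slice, 0.0))

    # stage 2: segment infection regions within the lung ROI only
    y = normalise(roi)
    infection = infection_net.predict(y[None, ..., None])[0, ..., 0] > 0.5
    return infection & lung_mask
```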

Claims (5)

CLAIMS: We Claim:
1. We assert that the given invention offers a deep learning-based framework for COVID-19 lung infection segmentation in chest CT scans.
2. As stated in claim 1, this invention provides an automated deep learning segmentation technique for the purpose of detecting and delineating COVID-19 infections in CT images.
3. As stated in claims 1 and 2, the proposed innovation will need less time and money to identify the infection and annotate the infection regions compared to existing techniques.
4. We assert that the developed method identifies infection by segmenting the lung organ from the CT image as the potential region of infection and then segmenting the infections contained inside it.
5. As stated in claims 1, 2, and 4, this innovation will assist physicians in determining the course of the COVID-19 illness, as well as quantifying the disease's burden and severity.

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2021103578A AU2021103578A4 (en) 2021-06-24 2021-06-24 A Novel Method for Detecting COVID-19 Infection Using a Deep Learning Based System

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2021103578A AU2021103578A4 (en) 2021-06-24 2021-06-24 A Novel Method for Detecting COVID-19 Infection Using a Deep Learning Based System

Publications (1)

Publication Number Publication Date
AU2021103578A4 (en) 2021-08-19

Family

ID=77274258

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2021103578A Ceased AU2021103578A4 (en) 2021-06-24 2021-06-24 A Novel Method for Detecting COVID-19 Infection Using a Deep Learning Based System

Country Status (1)

Country Link
AU (1) AU2021103578A4 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114822871A (en) * 2022-07-01 2022-07-29 北京超数时代科技有限公司 Self-learning and data protection-based fever accompanying respiratory syndrome monitoring system


Similar Documents

Publication Publication Date Title
Gozes et al. Rapid ai development cycle for the coronavirus (covid-19) pandemic: Initial results for automated detection & patient monitoring using deep learning ct image analysis
Oulefki et al. Automatic COVID-19 lung infected region segmentation and measurement using CT-scans images
Malik et al. CDC_Net: Multi-classification convolutional neural network model for detection of COVID-19, pneumothorax, pneumonia, lung Cancer, and tuberculosis using chest X-rays
Soomro et al. Artificial intelligence (AI) for medical imaging to combat coronavirus disease (COVID-19): A detailed review with direction for future research
Kamil A deep learning framework to detect Covid-19 disease via chest X-ray and CT scan images.
Hryniewska et al. Checklist for responsible deep learning modeling of medical images based on COVID-19 detection studies
AU2005207310B2 (en) System and method for filtering a medical image
Hassan et al. Review and classification of AI-enabled COVID-19 CT imaging models based on computer vision tasks
Abdulkareem et al. [Retracted] Automated System for Identifying COVID‐19 Infections in Computed Tomography Images Using Deep Learning Models
Putha et al. Can artificial intelligence reliably report chest x-rays
Putha et al. Can artificial intelligence reliably report chest x-rays?: Radiologist validation of an algorithm trained on 2.3 million x-rays
Khaniabadi et al. Two-step machine learning to diagnose and predict involvement of lungs in COVID-19 and pneumonia using CT radiomics
Alruwaili et al. COVID‐19 Diagnosis Using an Enhanced Inception‐ResNetV2 Deep Learning Model in CXR Images
El Naqa et al. Lessons learned in transitioning to AI in the medical imaging of COVID-19
Sanyal et al. An automated two-step pipeline for aggressive prostate lesion detection from multi-parametric MR sequence
Ye et al. Severity assessment of COVID-19 based on feature extraction and V-descriptors
Garlapati et al. Detection of COVID-19 using X-ray image classification
Amin et al. Diagnosis of COVID-19 infection using three-dimensional semantic segmentation and classification of computed tomography images
AU2021100007A4 (en) Deep Learning Based System for the Detection of COVID-19 Infections
Ghomi et al. Segmentation of COVID-19 pneumonia lesions: A deep learning approach
Kaya et al. A CNN transfer learning‐based approach for segmentation and classification of brain stroke from noncontrast CT images
Liong-Rung et al. Using artificial intelligence to establish chest x-ray image recognition model to assist crucial diagnosis in elder patients with dyspnea
AU2021103578A4 (en) A Novel Method for Detecting COVID-19 Infection Using a Deep Learning Based System
Ahmed et al. Achieving multisite generalization for cnn-based disease diagnosis models by mitigating shortcut learning
Srivastava et al. Diagnosing Covid-19 using AI based Medical Image Analysis

Legal Events

Date Code Title Description
FGI Letters patent sealed or granted (innovation patent)
HB Alteration of name in register

Owner name: ESWARAIAH, R.

Free format text: FORMER NAME(S): ESWARAIAH, RAYACHOTI; TIRUMALASETTY, SUDHIR; GUNDABATINI, SANJAY GANDHI; PUTHETI, SUDHAKAR; KOLLURU, SURESH BABU; NALLAMALA, SRI HARI; PINNAMANENI, SIVA PRASAD; JANJANAM, MADHU BABU; PUJARI, JEEVANA JYOTHI; PRASANN, N. LAKSHMI

Owner name: TIRUMALASETTY, S.

Free format text: FORMER NAME(S): ESWARAIAH, RAYACHOTI; TIRUMALASETTY, SUDHIR; GUNDABATINI, SANJAY GANDHI; PUTHETI, SUDHAKAR; KOLLURU, SURESH BABU; NALLAMALA, SRI HARI; PINNAMANENI, SIVA PRASAD; JANJANAM, MADHU BABU; PUJARI, JEEVANA JYOTHI; PRASANN, N. LAKSHMI

Owner name: GUNDABATINI, S.G.

Free format text: FORMER NAME(S): ESWARAIAH, RAYACHOTI; TIRUMALASETTY, SUDHIR; GUNDABATINI, SANJAY GANDHI; PUTHETI, SUDHAKAR; KOLLURU, SURESH BABU; NALLAMALA, SRI HARI; PINNAMANENI, SIVA PRASAD; JANJANAM, MADHU BABU; PUJARI, JEEVANA JYOTHI; PRASANN, N. LAKSHMI

Owner name: PUTHETI, S.

Free format text: FORMER NAME(S): ESWARAIAH, RAYACHOTI; TIRUMALASETTY, SUDHIR; GUNDABATINI, SANJAY GANDHI; PUTHETI, SUDHAKAR; KOLLURU, SURESH BABU; NALLAMALA, SRI HARI; PINNAMANENI, SIVA PRASAD; JANJANAM, MADHU BABU; PUJARI, JEEVANA JYOTHI; PRASANN, N. LAKSHMI

Owner name: KOLLURU, S.B.

Free format text: FORMER NAME(S): ESWARAIAH, RAYACHOTI; TIRUMALASETTY, SUDHIR; GUNDABATINI, SANJAY GANDHI; PUTHETI, SUDHAKAR; KOLLURU, SURESH BABU; NALLAMALA, SRI HARI; PINNAMANENI, SIVA PRASAD; JANJANAM, MADHU BABU; PUJARI, JEEVANA JYOTHI; PRASANN, N. LAKSHMI

Owner name: NALLAMALA, S.H.

Free format text: FORMER NAME(S): ESWARAIAH, RAYACHOTI; TIRUMALASETTY, SUDHIR; GUNDABATINI, SANJAY GANDHI; PUTHETI, SUDHAKAR; KOLLURU, SURESH BABU; NALLAMALA, SRI HARI; PINNAMANENI, SIVA PRASAD; JANJANAM, MADHU BABU; PUJARI, JEEVANA JYOTHI; PRASANN, N. LAKSHMI

Owner name: PINNAMANENI, S.P.

Free format text: FORMER NAME(S): ESWARAIAH, RAYACHOTI; TIRUMALASETTY, SUDHIR; GUNDABATINI, SANJAY GANDHI; PUTHETI, SUDHAKAR; KOLLURU, SURESH BABU; NALLAMALA, SRI HARI; PINNAMANENI, SIVA PRASAD; JANJANAM, MADHU BABU; PUJARI, JEEVANA JYOTHI; PRASANN, N. LAKSHMI

Owner name: JANJANAM, M.B.

Free format text: FORMER NAME(S): ESWARAIAH, RAYACHOTI; TIRUMALASETTY, SUDHIR; GUNDABATINI, SANJAY GANDHI; PUTHETI, SUDHAKAR; KOLLURU, SURESH BABU; NALLAMALA, SRI HARI; PINNAMANENI, SIVA PRASAD; JANJANAM, MADHU BABU; PUJARI, JEEVANA JYOTHI; PRASANN, N. LAKSHMI

Owner name: PUJARI, J.J.

Free format text: FORMER NAME(S): ESWARAIAH, RAYACHOTI; TIRUMALASETTY, SUDHIR; GUNDABATINI, SANJAY GANDHI; PUTHETI, SUDHAKAR; KOLLURU, SURESH BABU; NALLAMALA, SRI HARI; PINNAMANENI, SIVA PRASAD; JANJANAM, MADHU BABU; PUJARI, JEEVANA JYOTHI; PRASANN, N. LAKSHMI

Owner name: PRASANNA, N.L.

Free format text: FORMER NAME(S): ESWARAIAH, RAYACHOTI; TIRUMALASETTY, SUDHIR; GUNDABATINI, SANJAY GANDHI; PUTHETI, SUDHAKAR; KOLLURU, SURESH BABU; NALLAMALA, SRI HARI; PINNAMANENI, SIVA PRASAD; JANJANAM, MADHU BABU; PUJARI, JEEVANA JYOTHI; PRASANN, N. LAKSHMI

MK22 Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry